
Towards Petascale High Fidelity MHD Simulation of Galaxy Cluster Formation

Thomas Jones, University of Minnesota

Usage Details

Peter Mendygral, Thomas Jones, Christopher Nolting, Brian O'Neill, Julius Donnert

Galaxy clusters are knots in "the cosmic web," with masses that can exceed 10^15 Msun and sizes of several million parsecs (Mpc) (1 pc = 3.3 light years). They are the largest and last structures to form by gravitational collapse from fluctuations in the Big Bang, and so represent the final major chapter of the history that made the universe we know today. Their assembly is proceeding even now, as they occasionally collide and merge with other clusters (on timescales of ~10^9 yr) and as matter rains onto them in streams and clumps from intersecting cosmic filaments. Most of the ordinary matter in a cluster is in the form of very hot, tenuous, and weakly magnetized plasma. The forces that drive cluster formation also drive shocks and chaotic motions in this plasma that are expressed on a very wide range of length and time scales. Those structures reveal essential information about cluster histories and dynamics, provided they can be modeled accurately. At the same time, these cluster environments are uniquely suited as laboratories for the physics of hot, weakly magnetized plasmas. Such media cannot be produced in terrestrial experiments, but the physics that controls them plays significant, though less apparent, roles in many other contexts.

The only way to understand these complex cluster formation events and the underlying physics is through high fidelity computation. The very wide range of scales that must be followed for long periods of time also pushes the computational scale to extraordinary size. Computational strategies that adaptively refine grids to high spatial resolution in limited volumes for limited times are not suitable for this problem, because essential, fine-grained information spans wide ranges in both space and time, coupling domains that might naively appear independent. Instead, fixed but nested grid structures that cover full cluster volumes at the finest resolution are necessary to approach this problem properly. That is our strategy. Full simulations of the formation of a cluster will require more than 10^10 computational zones followed over tens of thousands of time updates. Only systems on the scale of Blue Waters are capable of such simulations.
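
As an illustration of this fixed nested-grid strategy, the following minimal sketch lays it out as a data structure: every level is fixed in place for the entire run, each finer level covers half the edge length of the level above, and the finest level still encloses the full cluster volume at the highest resolution. This is a hypothetical illustration only; the level count, box size, and zone counts are assumptions for the example, not WOMBAT's actual data structures or the parameters of the planned runs.

    /* Hypothetical sketch of a statically nested grid hierarchy.          */
    #include <stdio.h>
    #include <stdlib.h>

    #define NLEVEL 3                 /* number of nested levels (assumed)  */
    #define NSIDE  256               /* zones per side on every level      */

    typedef struct {
        double size;                 /* edge length this level covers (Mpc) */
        double dx;                   /* zone size on this level (Mpc)       */
        double *rho;                 /* e.g. gas density, one value/zone    */
    } Level;

    int main(void)
    {
        double box = 40.0;           /* full simulation box (illustrative)  */
        Level lev[NLEVEL];

        /* Each finer level is centered on the forming cluster and refines  */
        /* the whole region it covers, rather than tracking small, moving   */
        /* patches as an adaptive-refinement code would.                    */
        for (int l = 0; l < NLEVEL; l++) {
            lev[l].size = box / (1 << l);          /* 40, 20, 10 Mpc        */
            lev[l].dx   = lev[l].size / NSIDE;
            lev[l].rho  = calloc((size_t) NSIDE * NSIDE * NSIDE,
                                 sizeof(double));
            printf("level %d: %.0f Mpc region, dx = %.3f Mpc\n",
                   l, lev[l].size, lev[l].dx);
        }

        /* ... time integration would update all levels every step ...      */

        for (int l = 0; l < NLEVEL; l++)
            free(lev[l].rho);
        return 0;
    }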

A successful simulation of this kind must resolve structures reasonably close to the real, physical dissipation scales in order to model the relevant physics properly. In a cluster, that means approaching 1 kpc spatial resolution over volumes of order 10 Mpc across. To accomplish that with realistic resources, the code being used must provide very efficient compute and memory performance on individual cores, exceptional load balancing, and extraordinarily efficient communications among cores. We have developed a new-generation magnetohydrodynamics (MHD) code, named WOMBAT, that is designed from the ground up to meet the challenges outlined above. The code also includes a new, efficient, highly scalable symplectic particle-mesh N-body routine to model the evolution of dark matter, which dominates the gravitational energy in a cluster. WOMBAT is a hybrid, one-sided MPI/OpenMP code that incorporates extensive task overlap, designed with knowledge of coming processor and HPC cluster communication architectures for maximum serial and parallel performance on both current and future generations of computing systems. It utilizes a novel asynchronous communications strategy as well as novel data structures to minimize communication and I/O delays. Blue Waters, because it incorporates very high bandwidth network technology for I/O and data communication, is one of the very few existing systems that can take full advantage of these advanced design features.
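
The sketch below illustrates, in generic C, the kind of one-sided MPI halo exchange overlapped with threaded interior work that this design strategy describes. The buffer names, sizes, and 1-D periodic neighbor layout are simplified assumptions for the example; this is not WOMBAT source code.

    /* Sketch of asynchronous, one-sided communication with task overlap.  */
    #include <mpi.h>

    #define NZONE  64                 /* interior zones per patch (assumed) */
    #define NGHOST 2                  /* ghost-zone depth (assumed)         */

    int main(int argc, char **argv)
    {
        int provided, rank, nranks;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        /* Expose this rank's ghost buffer as an RMA window so neighbors
         * can write into it directly, with no matching receive call here. */
        double *ghost = NULL;
        MPI_Win win;
        MPI_Win_allocate(NGHOST * sizeof(double), sizeof(double),
                         MPI_INFO_NULL, MPI_COMM_WORLD, &ghost, &win);

        double zones[NZONE], boundary[NGHOST];
        for (int i = 0; i < NZONE; i++)  zones[i]    = (double) rank;
        for (int i = 0; i < NGHOST; i++) boundary[i] = (double) rank;

        int right = (rank + 1) % nranks;   /* simple 1-D periodic neighbor */

        MPI_Win_fence(0, win);             /* open the RMA epoch           */
        MPI_Put(boundary, NGHOST, MPI_DOUBLE, right,
                0, NGHOST, MPI_DOUBLE, win);

        /* Overlap: update the zones that do not need ghost data while the
         * one-sided transfer proceeds in the background.                  */
        #pragma omp parallel for
        for (int i = NGHOST; i < NZONE - NGHOST; i++)
            zones[i] += 1.0;               /* stand-in for the MHD update  */

        MPI_Win_fence(0, win);             /* transfer guaranteed complete */
        /* Ghost-dependent zones would now be updated using ghost[].       */

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }

Because the MPI_Put is posted before the threaded interior loop, the transfer can proceed in the background while the threads work, and only the closing fence forces completion; the zones that depend on the incoming ghost data are updated last.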

WOMBAT has been designed so that it can utilize a broad variety of MHD solvers, including those of very high order, which will allow it to obtain solutions of extraordinary fidelity at a given spatial resolution. Even with all these strengths, given the size of the required simulations (at least 10^10 cells on the grid) and the large number of time updates required to complete the solution, full simulations will require of order 10 million CPU hours per cluster simulated. In this initial, exploratory effort, we plan several more moderate tuning and demonstration tests, as well as a full-scale simulation of a moderately massive cluster, which we expect to use approximately 4 million CPU hours on the Blue Waters system. The end products will be uniquely high fidelity MHD simulations that will allow the first meaningful simulation-based examination of such important questions as how and where cluster magnetic field evolution is controlled and how and where cosmic ray electrons are accelerated within cluster MHD turbulence.
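
As a rough back-of-envelope reading of these figures (using only the numbers quoted above, not a measured WOMBAT benchmark), the grid size and update count together with the stated budget imply a sustained throughput of roughly 10^4 zone-updates per core-second, i.e. of order 100 microseconds per zone per core for the full MHD plus N-body update:

    \[ 10^{10}\ \text{zones} \times \sim 3\times 10^{4}\ \text{updates} \approx 3\times 10^{14}\ \text{zone-updates}, \]
    \[ \frac{3\times 10^{14}\ \text{zone-updates}}{10^{7}\ \text{CPU-hr} \times 3600\ \text{s/hr}} \approx 10^{4}\ \text{zone-updates per core-second}. \]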