Modeling Flows through Porous Media with a Kinetic Hybrid CPU-GPU Computational Tool
Deborah Levin, University of Illinois at Urbana-Champaign
Usage Details
Revathi Jambunathan, Deborah Levin, Ozgur Tumuklu, Saurabh Sawant, Jonathan Morgan, Nakul Nuwal

Modeling flows through micro-scale porous media with small characteristic lengths renders the flow rarefied and causes the continuum assumption to break down, even at atmospheric pressure. Such flows can be modeled with Direct Simulation Monte Carlo (DSMC), a probabilistic, particle-based method that provides a numerical solution to the Boltzmann transport equation (BE). In DSMC, binary collisions are performed between nearest-neighbor simulated particles, each of which represents a large number of real atoms or molecules. DSMC has been used to model micro-scale devices such as MEMS thrusters, in which the characteristic flow length is of the same order as the molecular mean free path. However, the highly irregular geometry of porous media makes the use of structured, or even unstructured, grids problematic.
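To make the collision step concrete, the following is a minimal host-side C++ sketch of the standard no-time-counter (NTC) pair-selection scheme that DSMC codes typically apply within a cell. All names and parameters here are illustrative assumptions, not CHAOS's API; CHAOS pairs nearest neighbors and executes this logic on the GPU, but the acceptance test is analogous.

```cpp
#include <cmath>
#include <random>
#include <vector>

// Illustrative particle state; CHAOS's actual data layout is not shown here.
struct Particle { double v[3]; };

// Perform the NTC collision step for one cell of volume Vc over time step dt.
// Fnum is the number of real molecules each simulated particle represents,
// sigma is a constant hard-sphere cross section, and sigmaVrMax is the
// running maximum of sigma * (relative speed) for the cell.
int collideCell(std::vector<Particle>& p, double Fnum, double Vc, double dt,
                double sigma, double sigmaVrMax, std::mt19937& rng)
{
    const int N = static_cast<int>(p.size());
    if (N < 2) return 0;
    std::uniform_real_distribution<double> U(0.0, 1.0);

    // Bird's NTC candidate count: (1/2) N (N-1) Fnum (sigma*vr)_max dt / Vc.
    const int nCand = static_cast<int>(
        0.5 * N * (N - 1) * Fnum * sigmaVrMax * dt / Vc);

    int nColl = 0;
    for (int k = 0; k < nCand; ++k) {
        // Standard NTC picks random pairs in the cell; CHAOS instead pairs
        // nearest neighbors, but the acceptance logic below is the same.
        int i = static_cast<int>(U(rng) * N);
        int j = static_cast<int>(U(rng) * N);
        if (i == j) continue;

        double vr2 = 0.0;
        for (int d = 0; d < 3; ++d) {
            const double dv = p[i].v[d] - p[j].v[d];
            vr2 += dv * dv;
        }
        const double vr = std::sqrt(vr2);

        // Accept the pair with probability sigma*vr / (sigma*vr)_max.
        if (U(rng) * sigmaVrMax < sigma * vr) {
            // Post-collision velocities (isotropic hard-sphere scattering)
            // would be assigned here; omitted to keep the sketch short.
            ++nColl;
        }
    }
    return nColl;
}
```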
We have developed a grid-free, multi-GPU MPI-CUDA DSMC solver known as CHAOS (CUDA-based Hybrid Approach for Octree Simulations) to perform large-scale simulations efficiently. The code employs the Morton Z-curve representation of linear octrees to capture the multi-scale physics of the flow, Compute Unified Device Architecture (CUDA) to accelerate the code on GPUs, and the Message Passing Interface (MPI) to communicate data between CPUs, which in turn transfer the data to their respective GPU coprocessors. At present, the CHAOS solver can simulate flow through highly irregular porous geometries, and efforts are under way to add chemistry and radiation so that we can model the ablation of thermal protection systems on reentering spacecraft as well as plasma-material interactions.
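To illustrate the octree keying, here is a minimal C++ sketch of the 3-D Morton (Z-order) encoding on which linear octrees are built: the bits of the three cell coordinates are interleaved so that leaf nodes that are close in space receive keys that are close in the sorted order. This is the textbook bit-dilation construction, offered as a sketch rather than CHAOS's actual implementation.

```cpp
#include <cstdint>

// Spread the low 21 bits of v so that two zero bits separate consecutive
// bits (the classic magic-number dilation used for 64-bit Morton keys).
static inline uint64_t expandBits(uint64_t v)
{
    v &= 0x1fffffULL;                    // keep 21 bits per axis
    v = (v | (v << 32)) & 0x001f00000000ffffULL;
    v = (v | (v << 16)) & 0x001f0000ff0000ffULL;
    v = (v | (v << 8))  & 0x100f00f00f00f00fULL;
    v = (v | (v << 4))  & 0x10c30c30c30c30c3ULL;
    v = (v | (v << 2))  & 0x1249249249249249ULL;
    return v;
}

// Interleave x, y, z into a 63-bit Morton key (21 bits per axis).
uint64_t morton3D(uint32_t x, uint32_t y, uint32_t z)
{
    return (expandBits(z) << 2) | (expandBits(y) << 1) | expandBits(x);
}
```

Sorting particles and leaf nodes by such keys yields the spatial locality that allows collision partners to be found without a conventional structured or unstructured grid.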
To date, we have used CHAOS to estimate the heat transfer to a fractal-like aggregate of spherical structures modeled with 2,000 surface elements, 5 million particles, and 0.2 million octree leaf nodes. For this problem, CHAOS achieved a 200-fold speed-up on 16 GPUs compared with the serial code. Our goal is to simulate a chemically reacting flow of 10 billion particles and 0.2 billion leaf nodes through a porous microstructure modeled with 2.6 million surface elements, a problem at least 1,000 times larger than any we have solved so far. Since a single GPU has only 6 GB of memory, roughly 1,500 to 2,000 GPUs would be required for a DSMC simulation of this scale. Such a massively parallel simulation can only be performed on a heterogeneous petascale facility such as Blue Waters.
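As a rough consistency check on that GPU count, the short sketch below works the arithmetic with assumed, not measured, memory footprints; the byte counts are illustrative placeholders standing in for particle state plus sorting, collision, and communication buffers.

```cpp
#include <cstdio>

int main()
{
    // Back-of-the-envelope GPU count with illustrative (assumed) sizes.
    const double nParticles       = 10e9;   // target particle count
    const double nLeaves          = 0.2e9;  // target octree leaf count
    const double bytesPerParticle = 800.0;  // assumed total footprint per
                                            // particle, including buffers
    const double bytesPerLeaf     = 200.0;  // assumed octree/leaf metadata
    const double gpuMemory        = 6.0e9;  // ~6 GB per device

    const double total = nParticles * bytesPerParticle
                       + nLeaves * bytesPerLeaf;
    printf("total ~ %.1f TB -> ~%.0f GPUs\n", total / 1e12, total / gpuMemory);
    // With these assumptions: ~8.0 TB -> ~1340 GPUs, the same order of
    // magnitude as the 1,500 to 2,000 devices quoted above.
    return 0;
}
```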