2014 Blue Waters Symposium: Abstracts

Abstracts are ordered alphabetically by the last name of the project PI.



Polyamine mediates sequence- and methylation-dependent chromatin compaction

Project PI: Aleksei Aksimentiev, University of Illinois at Urbana-Champaign
Abstract: All cellular activities, including cell development and differentiation, are consequences of gene regulation. While the conventional gene-regulation mechanism relies on specific transcription factors, a new mechanism based on chromatin compaction is gradually becoming established. Specifically, AT-rich and methylation-rich segments of the human genome are found to cluster, forming compact spatial domains that are less accessible to the RNA polymerase machinery than other parts of the genome. Despite extensive experimental efforts, the underlying principle of such compaction is poorly understood. Using the Blue Waters sustained-petascale system and state-of-the-art single-molecule experiments, we demonstrated that polyamines, which are abundant in living cells, could control the sequence- and methylation-dependent compaction of DNA. Our results suggest that such polyamine-mediated DNA-DNA attractions might govern the regulation of global chromatin compaction in living cells. The novel mechanism revealed with the help of Blue Waters has potential implications for epigenetic regulation and cancer studies.

Quantum Monte Carlo Calculations of Water-Graphene and Water-h-BN Interfaces

Project PI: Narayana Aluru, University of Illinois at Urbana-Champaign
Presented by: Yanbin Wu, University of Illinois at Urbana-Champaign
Abstract: We use the power of Blue Waters to perform highly accurate and challenging calculations of the electronic structure of water adsorbed on graphene surfaces. An accurate theoretical determination of the graphene-water potential energy surface will not only establish the foundations of graphene-water interactions, but will also pave the way for revolutionary advances in carbon-based applications in energy, medicine, and water purification. Our diffusion Monte Carlo (DMC) calculations show a strong graphene-water interaction, indicating that the graphene surface is more hydrophilic than previously believed. The unusually strong interaction can be attributed to weak bonds formed between graphene and water. Our DMC calculations are of unprecedented accuracy, based on the first-principles Hamiltonian, and can provide insight to experimentalists seeking to understand water-graphene interfaces and to theorists improving density functional theory for weakly bound systems.

Petascale Quantum Simulations of Nano Systems and Biomolecules

Project PI: Jerzy Bernholc, North Carolina State University at Raleigh
Presented by: Emil Briggs, North Carolina State University at Raleigh
Abstract: We describe the optimization and applications of the real-space multigrid (RMG) electronic structure code suite developed at NCSU. RMG has been optimized to run at PFLOP speeds on massively parallel machines, and it reached 1.144 PFLOPS on Blue Waters while using 3,872 XK nodes. This performance was achieved by optimizing communication patterns, overlapping communications with computations, and modifying significant portions of the code base to make efficient use of GPU accelerators. A portable library of routines suitable for inclusion in other HPC codes has been developed. The library can be used for grid decomposition, threading, finite differencing, communications, and multigrid solutions of common differential equations. RMG has been used to investigate (i) electron transport in DNA and the effects of base-pair matching, solvent, and counterions, and (ii) molecular sensors based on carbon nanotubes, including configurations based both on direct attachment (physisorption and chemisorption) and indirect functionalization via covalent and non-covalent linkers.

Simulation of Contagion on Very Large Social Networks with Blue Waters

Project PI: Keith Bisset, Virginia Tech
Abstract: With an increasingly urbanized and mobile population, the likelihood of a worldwide pandemic is increasing. With rising input sizes and strict deadlines for simulation results, e.g., for real-time planning during the outbreak of an epidemic, we must expand the use of high-performance computing (HPC) approaches and, in particular, push the boundaries of scalability for this application area. EpiSimdemics simulates the diffusion of contagion over extremely large and realistic social contact networks: currently the United States (300 million agents), with the goal of a worldwide network. EpiSimdemics captures dynamics among co-evolving entities. Such applications typically involve large-scale, irregular graph processing, which makes them difficult to scale due to irregular communication, load imbalance, and the evolutionary nature of their workload. We present an implementation of EpiSimdemics in Charm++ running on Blue Waters that enables research by social scientists at unprecedented data and system scales.
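
The contagion-diffusion process that EpiSimdemics runs at continental scale can be illustrated in miniature with a discrete-time SIR model on a contact network. This is a toy Python sketch, not the actual Charm++ implementation; the `contacts` structure and all parameters are illustrative.

```python
import random

def simulate_sir(contacts, seeds, p_transmit=0.3, t_recover=3, days=30, rng=None):
    """Discrete-time SIR contagion diffusion over a contact network.

    contacts: dict mapping each agent to an iterable of daily contacts.
    seeds: agents infected on day 0.  Returns final states and the
    number of infectious agents after each simulated day.
    """
    rng = rng or random.Random(42)
    state = {a: "S" for a in contacts}           # S(usceptible), I, R
    infected_on = {}
    for a in seeds:
        state[a] = "I"
        infected_on[a] = 0
    history = []
    for day in range(1, days + 1):
        newly_infected = []
        for a in list(state):
            if state[a] != "I":
                continue
            for b in contacts[a]:                # transmission along edges
                if state[b] == "S" and rng.random() < p_transmit:
                    newly_infected.append(b)
            if day - infected_on[a] >= t_recover:
                state[a] = "R"                   # recovered, no longer infectious
        for b in newly_infected:
            if state[b] == "S":
                state[b] = "I"
                infected_on[b] = day
        history.append(sum(1 for s in state.values() if s == "I"))
    return state, history
```

The irregular, data-dependent neighbor traversal in the inner loop is exactly what makes the full-scale problem hard to load-balance, motivating the Charm++ approach described above.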

The Influence of Strong Field Spacetime Dynamics and MHD on Circumbinary Disk Physics

Project PI: Manuela Campanelli, Rochester Institute of Technology
Presented by: Scott Noble, Rochester Institute of Technology
Abstract: The observation of supermassive black holes on the verge of merger has numerous exciting consequences for our understanding of galactic evolution, black hole demographics, plasmas in strong-field gravity, and general relativity.  Our project aims to provide the community with predictions of their electromagnetic signatures using realistic initial conditions, enough cells to resolve the essential MHD turbulence that drives accretion, and sufficient duration to understand their secular trends---all of which are now possible because of Blue Waters resources.  Over the past year we have determined the regime where high-order Post-Newtonian terms in our novel, analytic spacetime solution have significant effect on the circumbinary disk's dynamics and predicted luminosity.  We have demonstrated that nearly all the key predictions survive using a less accurate Post-Newtonian model.  In addition, we have begun simulations where the black holes reside on the numerical domain to explore the development and evolution of the mini-disks about each black hole.

Simulation of the Molecular Machinery for Second Generation Biofuel Production

Project PI: Isaac Cann, University of Illinois at Urbana-Champaign
Presented by: Rafael Bernardi, University of Illinois at Urbana-Champaign
Abstract: Biofuels are a well-known alternative to fossil fuels; however, competition with food production raises ethical concerns. The production of so-called second-generation biofuels, made from agricultural waste, is more favorable, but is not yet cost-competitive. Our project aims to learn a more cost-competitive strategy from bacteria. The bacteria synergistically employ several enzymes, docked extracellularly on a highly modular and remarkably flexible molecular framework, the cellulosome. Our project employed molecular dynamics simulations, complementing single-molecule experiments of our collaborators, to characterize the protein modules docked together to form cellulosomes. Our experiments showed that the docking complexes are extremely strong. Simulations revealed that trying to pull the complexes apart actually strengthens them before they rupture. The resulting strength is the largest ever seen in macromolecular complexes. Presently, we carry out 13-million-atom simulations of cellulosomes on Blue Waters to further explore their properties and technical potential.

Hierarchical molecular dynamics sampling for assessing pathways and free energies of RNA catalysis, ligand binding, and conformational change

Project PI: Thomas Cheatham, University of Utah
Abstract: The collaborative team of AMBER developers is focusing on the development and application of methods that hierarchically and tightly couple ensembles of highly GPU-optimized molecular dynamics engines to fully map out the conformational, energetic, and chemical landscape of biomolecules. This is done not only to assess, validate, and improve available biomolecular force fields, but also to provide novel insight into nucleic acid structure and dynamics. Through applications of the soon-to-be-released AMBER 14 codes on Blue Waters using recently developed multi-dimensional replica-exchange molecular dynamics (M-REMD) enhanced-sampling methods, the team has not only efficiently converged conformational distributions of RNA tetranucleotides, RNA tetraloops, and DNA helical structure, but also shown that the results are reproducible under different initial conditions. With convergence and reproducibility established, we are exploring force-field performance and further optimizing the M-REMD methods with the aim of reaching convergence more quickly and efficiently in terms of wall-clock time.
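
The exchange step at the core of any replica-exchange method can be sketched with the standard Metropolis swap criterion for a one-dimensional temperature ladder. This is an illustrative sketch only (function name, units, and the fixed random seed are assumptions), not the AMBER M-REMD implementation.

```python
import math
import random

def attempt_swaps(energies, temps, k_B=0.0019872041, rng=None):
    """One round of nearest-neighbor replica swaps using the Metropolis
    criterion: accept with probability min(1, exp[(b_i - b_j)(E_i - E_j)]).

    energies: potential energy of each replica (illustrative units, kcal/mol).
    temps: the temperature ladder (K), ascending.
    Returns `order`, mapping temperature slot -> replica index.
    """
    rng = rng or random.Random(0)
    order = list(range(len(temps)))
    for i in range(len(temps) - 1):
        b_i = 1.0 / (k_B * temps[i])              # inverse temperatures
        b_j = 1.0 / (k_B * temps[i + 1])
        e_i = energies[order[i]]
        e_j = energies[order[i + 1]]
        log_p = (b_i - b_j) * (e_i - e_j)
        if log_p >= 0 or rng.random() < math.exp(log_p):
            order[i], order[i + 1] = order[i + 1], order[i]  # accept the swap
    return order
```

M-REMD generalizes this idea by exchanging along several coupled dimensions (e.g., temperature and Hamiltonian), which is what makes the converged ensembles described above attainable.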

The Bacterial Brain: Structure and Function of a Bacterial Chemoreceptor Array

Project PI: Yann Chemla, University of Illinois at Urbana-Champaign
Presented by: Keith Cassidy, University of Illinois at Urbana-Champaign
Abstract: The ability of an organism to sense, interpret, and respond to environmental signals is central to its survival. Chemotaxis is a ubiquitous signaling system by which cells translate environmental chemical information into a motile response. Bacteria, in particular, have evolved protein networks that survey chemicals in the environment and position cells optimally within their habitat. These networks are functionally analogous to the brains of higher organisms: an array of chemoreceptors near the cell surface senses chemical stimuli and transmits highly adaptive signals through an extended, multi-million-atom protein lattice, which evaluates these signals to appropriately affect the cell's swimming behavior. Here, we present an all-atom structure of the intact bacterial chemoreceptor array, based primarily on crystallographic and electron microscopy data. Molecular dynamics simulations on Blue Waters are being used to investigate the dynamical properties of the array and provide insight into the array's remarkable information processing and control capabilities.

New Advances in Cloud Modeling: How 3D Radiation Impacts Cloud Dynamics and Properties

Project PI: Larry Di Girolamo, University of Illinois at Urbana-Champaign
Abstract: Our goal is to improve numerical weather prediction models and remote sensing algorithms for Earth's atmosphere through greatly improved treatment of radiative transfer. Our starting point is to produce a highly accurate 3D Monte Carlo radiative transfer (3DMCRT) code that can better handle the atmosphere's complexity. The 3DMCRT code has been built to handle scattering, absorption, and emission by the Earth's atmosphere and surface; it includes spectral integration and outputs radiance, irradiance, and heating rates. Benchmarking results on Blue Waters are impressive: Blue Waters has allowed us to reach lower noise levels than have been achieved previously, a direct result of running the highly scalable code on more processing units than has been attempted with comparable codes. Our first application has been studying biases in cloud products derived from NASA satellite instruments. Future directions include coupling the radiative transfer model with the Weather Research and Forecasting model.
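
The core of any Monte Carlo radiative transfer code is stochastic photon tracing. A minimal plane-parallel sketch with isotropic scattering (an assumption for illustration; the actual 3DMCRT code handles full 3D geometry, realistic phase functions, and emission) looks like:

```python
import math
import random

def mc_transmittance(tau_total, ssa=1.0, n_photons=20000, rng=None):
    """Monte Carlo estimate of the transmittance of a plane-parallel layer.

    tau_total: optical depth of the layer; ssa: single-scattering albedo
    (fraction of extinction events that scatter rather than absorb).
    Scattering is isotropic (a toy phase function).
    """
    rng = rng or random.Random(1)
    transmitted = 0
    for _ in range(n_photons):
        tau, mu = 0.0, 1.0                        # depth travelled; +1 = downward
        while True:
            step = -math.log(1.0 - rng.random())  # sampled optical free path
            tau += step * mu
            if tau >= tau_total:
                transmitted += 1                  # exited the bottom
                break
            if tau <= 0.0:
                break                             # exited the top (reflected)
            if rng.random() > ssa:
                break                             # absorbed
            mu = 2.0 * rng.random() - 1.0         # isotropic new direction
    return transmitted / n_photons
```

The estimator's noise shrinks as the square root of the photon count, which is why the very large photon budgets enabled by Blue Waters translate directly into the lower noise levels reported above.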

Petascale Cosmology with Gadget: Modeling the Formation of the First Quasars with Blue Waters

Project PI: Tiziana Di Matteo, Carnegie Mellon University
Presented by: Yu Feng, Carnegie Mellon University
Abstract: Many of the advances in our understanding of cosmic structure have come from direct computer modeling. In cosmology, we need to develop computer simulations that cover a vast dynamic range of spatial and time scales; we also need to include the effect of gravitational fields generated by (dark matter in) superclusters of galaxies on the formation of galaxies, which in turn harbor gas that cools and makes stars and is funneled into supermassive black holes that are of the size of the solar system. Computational cosmology, simulating the entire universe, represents perhaps one of the most challenging applications for the era of petascale computing resources. I will present our recent progress with the cosmological code P-Gadget on a large cosmological, hydrodynamical simulation to understand the formation of the first quasars and galaxies, from the smallest to the rarest and most luminous. The simulation will be used to make predictions for what the upcoming James Webb Space Telescope (JWST, the successor to Hubble, planned to launch in 2018) will see. The simulation utilizes the full Blue Waters machine to simulate the evolution of 2 x 7040^3 dark matter and gas particles in a box 400 Mpc/h (~2 billion light years) per side. In addition to implementing improved physical models of cosmic hydrodynamics and star formation, we enhanced the scalability of P-Gadget in memory efficiency, threading, and disk input/output to accommodate the simulation at this scale.

From Binary Systems and Stellar Core Collapse To Gamma-Ray Bursts

Project PI: Peter Diener, Louisiana State University
Presented by: Philipp Moesta, California Institute of Technology
Abstract: I present results of new three-dimensional (3D) general-relativistic magnetohydrodynamic simulations of rapidly rotating strongly magnetized core collapse that we have performed on Blue Waters in the last year. These simulations are the first of their kind and include a microphysical finite-temperature equation of state and a leakage neutrino heating/cooling scheme that captures the overall energetics correctly. Our results show that the 3D dynamics of magnetorotational core-collapse supernovae are fundamentally different from what was anticipated on the basis of previous 2D simulations. A strong bipolar jet that develops in a simulation constrained to 2D is crippled by a spiral instability and fizzles in full 3D. Our analysis suggests that the jet is disrupted by an m = 1 kink instability of the ultra-strong toroidal field near the rotation axis. A completely new flow structure develops as highly magnetized plasma funnels push out the shock in polar regions, creating wide secularly expanding lobes.

Next-generation ab initio symmetry-adapted no-core shell model and its impact on nucleosynthesis

Project PI: Jerry Draayer, Ohio State University
Presented by: Thomas Dytrych, Louisiana State University
Abstract: We have recently developed a next-generation first-principles ("ab initio") symmetry-adapted no-core shell model (SA-NCSM) [Phys. Rev. Lett. 111 (2013) 252501]. The SA-NCSM capitalizes on exact as well as important approximate symmetries of nuclei; it also holds predictive capability by building upon first principles, that is, the fundamentals of the nucleus' constituents. These theoretical advances, coupled with Blue Waters' cutting-edge computational power, have opened a new region, the intermediate-mass nuclei from fluorine to argon isotopes, to first investigations with ab initio methods, and reliable descriptions of neon isotopes are now available. Such solutions are feasible due to significant reductions in the symmetry-adapted model space sizes compared to those of equivalent ultra-large spaces of standard no-core shell models. This is essential for further understanding nucleosynthesis, as nuclear energy spectra and reaction rates for many short-lived nuclei involved in nucleosynthesis are not yet accessible by experiment or reliably measured in the astrophysically relevant energy regime.

The Semileptonic Kaon Decay Form Factor at the Physical Point

Project PI: Aida X El-Khadra, University of Illinois at Urbana-Champaign
Abstract: The MILC collaboration is currently generating an ensemble of configurations on Blue Waters under the USQCD PRAC award. Physical quantities calculated on this ensemble are expected to have very small systematic errors because of its features (light sea quarks at the physical point, very small lattice spacing, etc.). We analyzed this ensemble for a calculation of the phenomenologically important semileptonic kaon decay form factor. We plan to use this calculation, together with our concurrent analysis of other (smaller) ensembles, to obtain a result for the form factor with an overall error that approaches the experimental uncertainty.

Epistatic Interactions for Brain eGWAS in Alzheimer's Disease

Project PI: Nilufer Ertekin-Taner, Mayo Clinic in Jacksonville FL
Presented by: Mariet Allen and Curtis S. Younkin, Mayo Clinic in Jacksonville FL
Abstract: Alzheimer's disease (AD) is likely influenced by the interaction of many genetic and environmental factors, some of which may act by influencing brain gene expression. The aim of this study is to test for pairwise genetic variant interactions (epistasis) that influence brain gene expression levels, using existing data from our expression GWAS of 359 temporal cortex samples (181 AD, 178 non-AD), 223,632 SNP genotypes, and ~24,000 transcripts measured with an expression array. The analysis of epistatic effects in large studies such as ours requires powerful computational resources and would not be possible without the unique computing capabilities of Blue Waters. The first of three analyses has been completed: for 17,284 array probes detected in >75% of AD samples, we identified epistatic interactions associated with gene expression measures with a p-value <1E-50 for 24 unique genes (25 probes). Analyses of the non-AD and combined (AD + non-AD) groups will follow shortly.
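
The per-pair model behind such an epistasis scan can be sketched as an ordinary least-squares fit with a genotype-product interaction term. This is a toy pure-Python illustration (the helper names and the data layout are assumptions), not the study's actual pipeline or statistical test.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][k] * x[k] for k in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

def interaction_beta(g1, g2, expr):
    """OLS fit of expr ~ b0 + b1*g1 + b2*g2 + b3*(g1*g2); returns b3.

    g1, g2: genotype dosages (0/1/2) for a SNP pair; expr: expression values.
    A nonzero b3 indicates a pairwise (epistatic) effect on expression.
    """
    X = [[1.0, a, b, a * b] for a, b in zip(g1, g2)]
    # Normal equations: (X'X) beta = X'y
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(4)] for i in range(4)]
    Xty = [sum(r[i] * y for r, y in zip(X, expr)) for i in range(4)]
    return solve(XtX, Xty)[3]
```

Running this model over all SNP pairs for ~17,000 probes is what makes the full scan a petascale problem: the pair count grows quadratically in the number of variants.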

Sequence Similarity Networks for the Protein "Universe"

Project PI: John Gerlt, University of Illinois at Urbana-Champaign
Abstract: The Enzyme Function Initiative (EFI) is a Large-Scale Collaborative Project supported by the National Institute of General Medical Sciences (U54GM093342-04) that is developing bioinformatic tools to enable prediction of the in vitro enzymatic activities and in vivo metabolic functions of unknown (uncharacterized) enzymes discovered in genome sequencing projects. The goal of this project is to calculate sequence similarity networks (SSNs) for the complete library of curated Pfam protein families in the UniProtKB protein sequence database (>52,000,000 sequences as of March 2014) so that these descriptions of sequence-function space can be made available to the biological community. To date, our efforts have focused on adapting the scripts developed for user-initiated calculation of SSNs for individual protein families in the Biocluster environment at the Institute for Genomic Biology to large-scale calculation in the Blue Waters environment. Our current goal is to optimize the efficiency of the computational pipeline so that the complete library of SSNs can be calculated on a routine basis (at least quarterly) for dissemination to the biological community.
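
Conceptually, an SSN is built by thresholding all-vs-all pairwise alignment scores and then reading off connected components as candidate isofunctional clusters. The sketch below assumes a precomputed `scores` dictionary and an arbitrary threshold; the EFI pipeline's actual scripts and scoring differ.

```python
def build_ssn(scores, threshold):
    """Build a sequence similarity network: one node per sequence, an edge
    whenever the pairwise alignment score meets the chosen threshold.

    scores: dict {(seq_a, seq_b): score} from an all-vs-all comparison.
    Returns an adjacency dict {node: set of neighbors}.
    """
    adj = {}
    for (a, b), s in scores.items():
        adj.setdefault(a, set())
        adj.setdefault(b, set())
        if s >= threshold:
            adj[a].add(b)
            adj[b].add(a)
    return adj

def components(adj):
    """Connected components of the SSN (putative isofunctional clusters)."""
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(adj[node] - comp)
        seen |= comp
        comps.append(comp)
    return comps
```

Sweeping the threshold and watching how components split is the standard way SSNs are explored; at >52 million sequences, the all-vs-all scoring step is what demands a machine like Blue Waters.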

System Software for Scalable Applications

Project PI: Bill Gropp, University of Illinois at Urbana-Champaign
Presented by: Pavan Balaji, Argonne National Lab
Abstract: The goal of the System Software for Scalable Applications project was to study the performance of low-level communication systems such as MPI in various environments on the Blue Waters system, and to propose optimization techniques to address performance shortcomings. Over the past year, we focused on two such areas: (1) MPI communication in multithreaded environments, and (2) MPI communication in irregular communication environments such as graph algorithms.

Non-Born-Oppenheimer Effects between Electrons and Protons

Project PI: Sharon Hammes-Schiffer, University of Illinois at Urbana-Champaign
Abstract: The quantum mechanical behavior of nuclei plays an important role in a wide range of chemical and biological processes.  The inclusion of nuclear quantum effects and non-Born-Oppenheimer effects between nuclei and electrons in computer simulations is challenging.  Our group has developed the nuclear-electronic orbital (NEO) method for treating electrons and select nuclei quantum mechanically on the same level using an orbital-based formalism.  The NEO code utilizes a hybrid MPI/OpenMP protocol, but the calculations require a large number of processors and a substantial amount of memory.  The highly parallel computing system on Blue Waters is advantageous for this code because it provides the capability of splitting the large number of calculations and storage requirements over many nodes. We have used Blue Waters to perform NEO calculations on systems in which all electrons and one proton are treated quantum mechanically and have tested approximate methods that enable the study of larger systems.

Policy Responses to Climate Change in a Dynamic Stochastic Economy

Project PI: Lars Hansen, University of Chicago
Presented by: Kenneth Judd, Hoover Institution
Abstract: We have developed tools for evaluating alternative policy responses to challenges posed by future climate change in models that merge uncertainties related to both economic factors and their interaction with the climate. We have extended past work on computational methods for solving dynamic programming problems, dynamic games, and empirical estimation of economic models. More specifically, we have combined numerical methods for quadrature, approximation, and optimization problems to develop efficient approximations of the Bellman operator for 6-20 dimensional discrete-time dynamic programming. We have also developed methods for computing all Nash equilibria of dynamic games. Our code scales linearly on tens of thousands of cores; in one case, we achieved linear scaling up to 160K cores. One initial substantive result is that the optimal carbon tax is three to four times larger than usually estimated when we incorporate empirically justified specifications for the social desire to reduce risk.
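
The Bellman-operator approximation at the heart of such dynamic programs can be illustrated on a textbook one-dimensional growth model. This is a sketch of plain value-function iteration on a discrete grid, far simpler than the authors' quadrature- and approximation-based methods for 6-20 dimensional problems; the production and utility functions are illustrative.

```python
import math

def value_iteration(grid, f, u, beta=0.95, tol=1e-8, max_iter=2000):
    """Value-function iteration for V(k) = max_{k'} u(f(k) - k') + beta*V(k').

    grid: capital levels; next-period capital k' is restricted to the grid.
    f: production function; u: period utility.  Iterates the Bellman operator
    to its fixed point and returns the approximated value function.
    """
    V = [0.0] * len(grid)
    for _ in range(max_iter):
        V_new = []
        for k in grid:
            best = -math.inf
            for j, kp in enumerate(grid):
                c = f(k) - kp                    # implied consumption
                if c > 0:
                    best = max(best, u(c) + beta * V[j])
            V_new.append(best)
        if max(abs(a - b) for a, b in zip(V_new, V)) < tol:
            return V_new
        V = V_new
    return V
```

In higher dimensions the grid is replaced by sparse quadrature nodes and a smooth approximation of V, and each application of the Bellman operator becomes an embarrassingly parallel optimization over those nodes, which is what scales to tens of thousands of cores.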

Predictive Computing of Advanced Materials and Ices

Project PI: So Hirata, University of Illinois at Urbana-Champaign
Abstract: Two breakthroughs in the algorithms of ab initio electronic structure theory developed recently by the PI's group will be deployed on Blue Waters to perform predictively accurate calculations of the optoelectronic properties of large conjugated molecules used in advanced materials, and of the structures, spectra, and phase diagrams of nature's most important crystals, such as ice and dry ice, all from first principles. These ab initio methods go beyond the usual workhorse of solid-state calculations (density-functional approximations) in the fidelity of simulations that can be achieved, and use a novel stochastic algorithm, an embedded-fragment algorithm, or both to realize unprecedented scalability with respect to both system and computer size.

Solving Prediction Problems in Earthquake System Science on Blue Waters 

Project PI: Thomas Jordan, Southern California Earthquake Center
Abstract: A major goal of earthquake system science is to predict the peak shaking at surface sites over forecasting intervals that may range from a few days to many years. The deep uncertainties in these predictions are expressed through two types of probability: an aleatory variability that describes the randomness of the earthquake system, and an epistemic uncertainty that characterizes our lack of knowledge about the system. Standard models use empirical prediction equations that have a high aleatory variability, primarily because they do not model crustal heterogeneities. We show how this variance can be lowered by simulating seismic wave propagation through 3D crustal models derived from waveform tomography. SCEC has developed a software platform, CyberShake, that combines seismic reciprocity with highly optimized anelastic wave propagation codes to reduce the time of simulation-based hazard calculations to manageable levels. CyberShake hazard models for the Los Angeles region, each comprising over 240 million synthetic seismograms, have been computed on Blue Waters. A variance-decomposition analysis indicates that more accurate earthquake simulations have the potential for reducing the aleatory variance of the strong-motion predictions by at least a factor of two, which would lower exceedance probabilities at high hazard levels by an order of magnitude. The practical ramifications of this probability gain for the formulation of risk-reduction strategies are substantial.

Major Advance in Understanding of Collisionless Plasmas Enabled through Petascale Kinetic Simulations

Project PI: Homayoun Karimabadi, University of California, San Diego
Presented by: Vadim Roytershteyn, SciberQuest, Inc.
Abstract: Over 90 percent of the visible universe is in the form of plasma. Key processes in plasmas include shocks, turbulence, and magnetic reconnection. The multi-scale nature of these processes, and the fact that they occur on very large scales, makes their study difficult in the laboratory. As such, simulations, in tandem with in situ measurements in space, have been critical to making progress. To this end, we have been pushing the state of the art in kinetic simulations of collisionless plasmas. In this talk, we highlight our latest progress in the areas of magnetic reconnection, plasma turbulence, and fast shocks as captured in global simulations of the magnetosphere. One outcome of these studies is the dynamic interplay between shock physics, turbulence, and reconnection. Examples showing the synergy between these sub-fields, and how they are linked together, will be demonstrated.

Accelerating Nano-scale Transistor Innovation

Project PI: Gerhard Klimeck, Purdue University
Presented by: Jim Fonseca, Purdue University
Abstract: Transistor size will continue to decrease over the next ten years, and this downscaling has reached the range where the number of atoms in critical dimensions is countable and materials and designs become more dependent on atomic details. Quantum effects such as tunneling, state quantization, and atomistic disorder dominate the characteristics of these nano-scale devices. NEMO5 is a nanoelectronics modeling package designed for comprehending the critical multi-scale, multi-physics phenomena through efficient computational approaches. NEMO5 takes advantage of Blue Waters' unique heterogeneous CPU/GPU capability through the use of the MAGMA, cuBLAS, and cuSPARSE libraries. At the heart of NEMO5's quantum transport approach is the non-equilibrium Green's function (NEGF) method, a computational approach to handling quantum transport in nanoelectronic devices. A specific type of NEGF, the recursive Green's function (RGF) method, has been implemented using the Sancho-Rubio algorithm.

Simulations of Biological Processes on the Whole Cell Level

Project PI: Zaida Ann Luthey-Schulten, University of Illinois at Urbana-Champaign
Abstract: Recent experiments are revealing details of fundamental cellular processes involved in protein synthesis and metabolism. However, a dynamical description of these processes at the whole cell level is still missing. With Blue Waters we have been able to develop a kinetic model of ribosome biogenesis that reproduces in vitro experimental observations. Stochastic simulations with our GPU-accelerated Lattice Microbe software have extended the model to the entire bacterial cell by computationally linking the transcription and translation events with ribosome assembly on biologically relevant time scales.  We will report on the progress we have achieved to date and outline how Blue Waters will provide us a window into cellular processes from single cells to colonies. These projects are conducted as part of a larger research plan funded by the NSF grant MCB12-44570 and NSF Physics Frontier Center for the Physics of Living Cells (UIUC) under grant PHY-0822613.

Modeling of Peptide-mediated Ribosome Stalling with Antibiotics

Project PI: Alexander Mankin, University of Illinois at Chicago
Presented by: Bo Liu, University of Illinois at Urbana-Champaign
Abstract: Many antibiotic drugs target bacterial ribosomes, the ubiquitous machines in bacterial cells responsible for protein synthesis. The power and friendly interface of the Blue Waters supercomputer provided us with unparalleled opportunities to investigate the interaction of antibiotic drugs with the ribosome in atomic detail using molecular dynamics (MD) simulations. Large-scale MD simulations on Blue Waters reproducibly showed that an antibiotic drug can disrupt the catalytic center of the ribosome. MD-based free-energy calculations revealed the molecular mechanism underlying the drug's action. High-performance computations on Blue Waters have provided detailed explanations of, as well as independent validations of, experimental discoveries.

RIKEN Miniapps for Computational Science Codesign

Project PI: Naoya Maruyama, RIKEN Advanced Institute for Computational Science
Abstract: Fiber is a suite of miniapps that have been developed at RIKEN AICS and its partners for promoting codesign toward exascale computing. It currently consists of ten miniapps that are selected from various fields of computational science domains, including QCD, climate modeling, molecular dynamics, fluid dynamics, computational biology, etc. Most of the miniapps are derived from the full-scale applications that have been developed for and used on the K computer and other major supercomputers in Japan, and will be used as benchmark software for the Japanese exascale project at RIKEN AICS. Furthermore, our miniapps are generally available as open-source software to promote and simplify multi-organizational collaboration. This talk will present an overview of the current status of the miniapp suite and discuss our future plans.

Influences of orientation averaging scheme on the scattering properties of atmospheric ice crystals: Applications to atmospheric halo formation

Project PI: Greg McFarquhar, University of Illinois at Urbana-Champaign
Presented by: Jun Shik Um, University of Illinois at Urbana-Champaign
Abstract: The optimal orientation-averaging scheme (regular lattice grid or quasi-Monte Carlo (QMC) method), the minimum number of orientations, and the corresponding computing time required to calculate average single-scattering properties within a predefined accuracy level are determined for four different nonspherical atmospheric ice crystal models using the Amsterdam discrete dipole approximation (ADDA). The QMC method requires fewer orientations and less computing time than the lattice grid. The minimum number of orientations and the corresponding computing time decrease with increasing wavelength, whereas they increase with particle nonsphericity. Using ADDA and QMC, we also investigate how variations in the sizes and aspect ratios (ratio between crystal length and width) of ice crystals affect the size parameter at which haloes first emerge. The threshold size at which 22- and 46-degree halos form is determined. It is shown that size and aspect ratio are key factors in atmospheric halo formation.
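
A quasi-Monte Carlo orientation average of the kind compared here can be sketched with a 2D Halton sequence mapped uniformly onto the sphere. This is an illustrative sketch only; the actual ADDA calculations average full single-scattering properties, not a scalar test function.

```python
import math

def halton(i, base):
    """i-th element (1-indexed) of the van der Corput sequence in `base`."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def qmc_orientation_average(func, n):
    """Average func(theta, phi) over uniformly distributed orientations,
    using a 2D Halton sequence (bases 2 and 3) mapped to the sphere."""
    total = 0.0
    for i in range(1, n + 1):
        u, v = halton(i, 2), halton(i, 3)
        theta = math.acos(1.0 - 2.0 * u)   # area-preserving map to polar angle
        phi = 2.0 * math.pi * v
        total += func(theta, phi)
    return total / n
```

Because low-discrepancy points fill orientation space more evenly than a regular latitude-longitude lattice, the average converges with fewer orientations, which is the effect quantified in the abstract.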

View presentation PDF  |  View presentation video

Petascale Particle-in-Cell (PIC) Simulations of Kinetic Effects in Plasmas

Project PI: Warren Mori, University of California, Los Angeles
Presented by: Frank Tsung, University of California, Los Angeles
Abstract: The UCLA Simulation of Plasmas Group has been using a suite of codes (with various models for fields and particles) to study kinetic effects in plasmas for many applications. In the past two years, the Blue Waters supercomputer has allowed us to perform large-scale simulations that have provided qualitative and quantitative understanding of many topics in plasma physics, including plasma-based accelerators and inertial fusion. The simulation results have significant impact on current and future experiments in these areas. We will also discuss our development plans for future architectures, including the Intel Phi and GPUs.
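
The particle-grid coupling at the heart of any particle-in-cell code can be sketched in a few lines. The following toy 1D cloud-in-cell deposit and gather is illustrative only and is unrelated to the group's production codes; the grid size and particle count are arbitrary.

```python
import random

def deposit_cic(positions, charge, ngrid, length):
    """Cloud-in-cell (linear) charge deposition onto a periodic 1D grid."""
    dx = length / ngrid
    rho = [0.0] * ngrid
    for x in positions:
        s = (x % length) / dx            # position in units of cells
        j = int(s)                       # index of the left grid point
        w = s - j                        # linear weight of the right point
        rho[j % ngrid] += charge * (1.0 - w) / dx
        rho[(j + 1) % ngrid] += charge * w / dx
    return rho

def gather_cic(field, x, length):
    """Interpolate a grid field back to a particle with the same weights,
    which avoids a spurious self-force."""
    ngrid = len(field)
    dx = length / ngrid
    s = (x % length) / dx
    j = int(s)
    w = s - j
    return field[j % ngrid] * (1.0 - w) + field[(j + 1) % ngrid] * w

random.seed(1)
L, N, NG = 1.0, 10000, 64
xs = [random.random() * L for _ in range(N)]
rho = deposit_cic(xs, charge=1.0 / N, ngrid=NG, length=L)
rho_mid = gather_cic(rho, 0.5, L)        # density interpolated at x = 0.5
```

A full PIC step would insert a field solve between the deposit and the gather, then push the particles with the gathered force.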

View presentation PDF  |  View presentation video

Understanding Galaxy Formation with the Help of Petascale Computing

Project PI: Kentaro Nagamine, University of Nevada-Las Vegas
Presented by: Ludwig Oser, Columbia University
Abstract: We simulate the formation and evolution of galaxies from cosmological initial conditions up to the present day, considering the effects of radiative cooling from primordial gas as well as gas enriched with metals, star formation, and kinetic feedback from active galactic nuclei, AGB stars, and supernovae of type I and II. To obtain a statistically significant sample of galaxies to compare with the results of large observational surveys, we need to simulate thousands of individual galaxies. To improve the inherently poor scaling of cosmological simulation codes, we divide the spatial domain and run concurrent zoom-in simulations. Furthermore, we are in the process of updating our code to make use of the hybrid nature (shared-memory parallelism for processes on the same node) of modern supercomputers.

View presentation PDF  |  View presentation video

Studying the cosmic Dark Ages with petascale supercomputing

Project PI: Brian O'Shea, Michigan State University
Abstract: In this PRAC project, we are investigating the earliest stages of cosmological structure formation – namely, the transition of the universe from a dark, empty place to one filled with stars, galaxies, and the cosmic web. In investigating the "cosmic Dark Ages," we focus on three specific topics: the transition between metal-free and metal-enriched star formation, which marks a fundamental milestone in the early epochs of galaxy formation; the evolution of the populations of high-redshift galaxies that will form Milky Way-like objects by the present day; and the reionization of the universe by these populations of galaxies. All of these problems require simulations with extremely high dynamic range in space and time, complex physics (including radiation transport and non-equilibrium gas chemistry), and large simulation volumes. We use the Enzo code (enzo-project.org), which has been modified to scale to large core counts on Blue Waters, the only machine available that can satisfy these simulations' heavy data and communication needs.

View presentation video

Simulating vesicle fusion on Blue Waters

Project PI: Vijay Pande, Stanford University
Presented by: Morgan Lawrenz, Stanford University
Abstract: We describe completed and ongoing research projects that utilize the extensive computing infrastructure available on Blue Waters to study G-protein coupled receptors (GPCRs), key signaling proteins that are the targets of ~40% of commercially available drugs. Our workflow involves massively parallel biomolecular simulations that are aggregated by a statistical model mapping the connectivity and stability of receptor states. We have completed work on the β2 adrenergic receptor and used our approach to provide the first atomistic description of this receptor's ligand-modulated activation pathways (Kohlhoff et al., Nature Chem. 2014). We targeted intermediate states along these pathways with extensive small-molecule virtual screens and demonstrated that these intermediates select unique ligand types that would remain undiscovered without knowledge of the full activation pathway. These results show that our model of biomolecular simulations contributes significantly to the understanding of both biological mechanism and drug efficacy for GPCRs.

View presentation PDF  |  View presentation video

Modeling Heliophysics and Astrophysics Phenomena with a Multi-Scale Fluid-Kinetic Simulation Suite

Project PI: Nikolai Pogorelov, University of Alabama in Huntsville
Abstract: Plasma flows in space and astrophysical environments are usually characterized by a substantial disparity of scales, which can only be addressed with numerical simulations based on adaptive mesh refinement techniques and efficient dynamic load balancing among computing cores. The Multi-Scale Fluid-Kinetic Simulation Suite (MS-FLUKSS) is a collection of numerical codes developed by our team to self-consistently solve the MHD, gas dynamics (Euler), and kinetic Boltzmann equations describing the transport of different components in partially ionized plasmas. The suite is built on the Chombo framework and allows us to perform simulations on both Cartesian and spherical grids. We have implemented a hybrid parallelization strategy and performed simulations with excellent scalability up to 160K cores. We present the results of our newest simulations of the heliopause and its stability. Numerical simulations proved useful for explaining Voyager 1's "early" penetration into the local interstellar medium and help constrain its properties.

View presentation PDF  |  View presentation video

Evolution of the small galaxy population from high redshift to the present

Project PI: Thomas Quinn, University of Washington
Abstract: Scaling to hundreds of thousands of cores revealed a couple of serial bottlenecks that had to be addressed. One example is that load balancing decisions and domain decomposition were made centrally, and hence were limited by the speed of an individual core. Moving to a hierarchical load balancer and optimizing the domain decomposition allowed us to demonstrate strong scaling from 8,192 to 131,072 cores on Blue Waters. Part of the verification of our subgrid models involved running a "pathfinder" simulation at roughly 1/10th the mass resolution of our flagship simulation. Preliminary analysis of this simulation demonstrates that we can reasonably model the star formation history of the Universe in a cosmological volume up to a redshift of 4.
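
The move from a centralized to a hierarchical balancer can be illustrated with a toy two-level greedy scheme: a central step assigns work only to groups, and each group then balances its own cores independently (and, in a real code, in parallel). This is a simplified sketch of the general idea, not the project's actual balancer; the heuristic and sizes are illustrative.

```python
import heapq, random

def greedy_balance(work, nbins):
    """Longest-processing-time-first: each item goes to the least-loaded bin."""
    bins = [(0.0, i, []) for i in range(nbins)]
    heapq.heapify(bins)
    for w in sorted(work, reverse=True):
        load, i, items = heapq.heappop(bins)
        items.append(w)
        heapq.heappush(bins, (load + w, i, items))
    return [items for _, _, items in sorted(bins, key=lambda b: b[1])]

def hierarchical_balance(work, ngroups, cores_per_group):
    """Two-level balancing: the central decision only involves ngroups
    aggregates rather than every core, removing the serial bottleneck."""
    groups = greedy_balance(work, ngroups)
    return [greedy_balance(g, cores_per_group) for g in groups]

random.seed(0)
work = [random.random() for _ in range(1000)]      # per-patch cost estimates
plan = hierarchical_balance(work, ngroups=4, cores_per_group=8)
```

The trade-off is a slightly less globally optimal assignment in exchange for a decision cost that no longer grows with the total core count.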

View presentation PDF  |  View presentation video

Collaborative Research: Petascale Design and Management of Satellite Assets to Advance Space Based Earth Science

Project PI: Patrick Reed, Cornell University
Abstract: This project is a multi-institutional collaboration between Cornell University, The Aerospace Corporation, and Princeton University advancing a Petascale planning framework that is broadly applicable across space-based Earth observation systems design. We have made substantial progress towards three transformative contributions: (1) we are the first team to formally link high-resolution astrodynamics design and coordination of space assets with their Earth science impacts within a Petascale "many-objective" global optimization framework, (2) we have successfully completed the largest Monte Carlo simulation experiment for evaluating the required satellite frequencies and coverage to maintain acceptable global forecasts of terrestrial hydrology (especially in poorer countries), and (3) we are initiating an evaluation of the limitations and vulnerabilities of the full suite of current satellite precipitation missions including the recently approved Global Precipitation Measurement (GPM) mission. This work will illustrate the tradeoffs and consequences of the GPM mission's current design and its recent budget reductions.

The Mechanism of the Sarco/Endoplasmic Reticulum ATP-Driven Calcium Pump

Project PI: Benoit Roux, University of Chicago
Presented by: Avisek Das, University of Chicago
Abstract: The sarco/endoplasmic reticulum Ca2+-ATPase (SERCA) is an integral membrane protein that uses ATP hydrolysis as a source of free energy to pump two calcium ions per ATP molecule from the calcium-poor cytoplasm of the muscle cell to the calcium-rich lumen of the sarcoplasmic reticulum, thereby maintaining a ten-thousand-fold concentration gradient. Two major outstanding issues are the pathways of the ions to and from the transmembrane binding sites and a detailed understanding of the large-scale conformational changes among various functionally relevant states. We hope to shed light on these issues by simulating conformational transition pathways between experimentally known stable states. The optimal path is determined by the string method with swarms of trajectories, which involves running thousands of all-atom molecular dynamics trajectories that communicate at regular intervals. Our recent simulations on Blue Waters have revealed unprecedented molecular details of several key steps of the pumping cycle.
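
A minimal sketch of the string method with swarms of trajectories, on a toy 2D double-well rather than an all-atom system: each image's drift is estimated from a swarm of short stochastic trajectories, and the string is then reparametrized to equal arc length. All parameters below are illustrative.

```python
import math, random
random.seed(2)

def grad(p):                  # toy double-well potential U = (x^2-1)^2 + 2 y^2
    x, y = p
    return (4.0 * x * (x * x - 1.0), 4.0 * y)

def swarm_drift(p, ntraj=20, nsteps=5, dt=0.01, kT=0.05):
    """Average displacement of a swarm of short stochastic trajectories."""
    dx = dy = 0.0
    for _ in range(ntraj):
        x, y = p
        for _ in range(nsteps):
            gx, gy = grad((x, y))
            x += -gx * dt + math.sqrt(2.0 * kT * dt) * random.gauss(0.0, 1.0)
            y += -gy * dt + math.sqrt(2.0 * kT * dt) * random.gauss(0.0, 1.0)
        dx += x - p[0]
        dy += y - p[1]
    return dx / ntraj, dy / ntraj

def reparametrize(images):
    """Redistribute the interior images at equal arc length along the string."""
    d = [0.0]
    for a, b in zip(images, images[1:]):
        d.append(d[-1] + math.dist(a, b))
    out = [images[0]]
    for k in range(1, len(images) - 1):
        t = d[-1] * k / (len(images) - 1)
        j = max(i for i in range(len(images) - 1) if d[i] <= t)
        f = 0.0 if d[j + 1] == d[j] else (t - d[j]) / (d[j + 1] - d[j])
        a, b = images[j], images[j + 1]
        out.append((a[0] + f * (b[0] - a[0]), a[1] + f * (b[1] - a[1])))
    out.append(images[-1])
    return out

n = 11                        # endpoints fixed at the two minima (-1,0), (1,0)
string = [(-1.0 + 2.0 * i / (n - 1), 0.4 * math.sin(math.pi * i / (n - 1)))
          for i in range(n)]
for _ in range(50):
    new = [string[0]]
    for p in string[1:-1]:
        d = swarm_drift(p)
        new.append((p[0] + d[0], p[1] + d[1]))
    new.append(string[-1])
    string = reparametrize(new)   # interior images relax toward the y = 0 path
```

In the production setting each "swarm" is thousands of MD trajectories, and the images only exchange positions at the regular reparametrization step, which is what makes the method so parallel.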

View presentation PDF  |  View presentation video

Quantum Electron Dynamics Simulation of Materials on High-Performance Computers

Project PI: Andre Schleife, University of Illinois at Urbana-Champaign
Abstract: While employing the adiabatic Born-Oppenheimer approximation significantly reduces computational effort, many questions in materials science require an explicit description of non-adiabatic electron-ion dynamics. Electronic stopping, i.e., the energy transfer of a fast projectile atom to the electronic system of the target, is a notorious example. We recently implemented the real-time time-dependent density functional theory method based on the plane-wave pseudopotential formalism in the Qbox/qb@ll code and demonstrated that the explicit Runge-Kutta integration scheme is well suited to highly parallelized supercomputers. Applying the new implementation to systems containing thousands of electrons, we achieved excellent performance and scalability on a large number of nodes on the BlueGene-based "Sequoia" at LLNL and the Cray XE6-based "Blue Waters" at NCSA. We discuss our work on computing electronic stopping for hydrogen projectiles in gold, showing excellent agreement with experimental data. These first-principles dynamics calculations allow us to gain insights into the fundamental physics of electronic stopping.
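
Explicit Runge-Kutta propagation can be illustrated on the smallest possible quantum system, a two-level model, where the exact Rabi solution is known. This is a toy sketch of RK4 time stepping for the time-dependent Schroedinger equation, not the Qbox/qb@ll implementation; the Hamiltonian and step size are arbitrary.

```python
import math

def rk4_step(psi, apply_h, dt):
    """One classical Runge-Kutta step for the TDSE dpsi/dt = -i H psi."""
    f = lambda p: [-1j * v for v in apply_h(p)]
    k1 = f(psi)
    k2 = f([p + 0.5 * dt * k for p, k in zip(psi, k1)])
    k3 = f([p + 0.5 * dt * k for p, k in zip(psi, k2)])
    k4 = f([p + dt * k for p, k in zip(psi, k3)])
    return [p + dt / 6.0 * (a + 2.0 * b + 2.0 * c + e)
            for p, a, b, c, e in zip(psi, k1, k2, k3, k4)]

delta = 1.0
apply_h = lambda p: [delta * p[1], delta * p[0]]   # H = delta * sigma_x

psi = [1.0 + 0j, 0.0 + 0j]
dt, nsteps = 0.001, 2000                           # propagate to t = 2
for _ in range(nsteps):
    psi = rk4_step(psi, apply_h, dt)

p0 = abs(psi[0]) ** 2                 # analytic answer: cos^2(delta * t)
norm = abs(psi[0]) ** 2 + abs(psi[1]) ** 2
```

The same structure scales up: only applications of H to the wavefunction are needed, and each of the four stages is an independent, highly parallel operation.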

View presentation PDF  |  View presentation video

The Computational Microscope

Project PI: Klaus Schulten, University of Illinois at Urbana-Champaign
Abstract: Cells are the building blocks of life, yet they are themselves constructed of proteins, nucleic acids, and other molecules, none of which are, in and of themselves, alive. How living things can arise from the "behavior" of molecules, which are simply obeying the laws of physics, is the essential question of modern biology. Molecular dynamics simulations can be used as a "computational microscope", offering the ability to explore living systems at the atomic level, and providing a necessary complement to experimental techniques such as crystallography, NMR, and cryo-electron microscopy.  Now, with the rise of petascale computing, we take a critical step from inanimate towards animate matter by computationally resolving whole cellular organelles in atomic detail.  This lecture will present recent enhancements to the enabling programs NAMD and VMD, and successful research on several large-scale biomolecular systems being studied on Blue Waters, including the HIV capsid and a photosynthetic organelle.

View presentation PDF  |  View presentation video

Active Region Emergence and Dynamo Action

Project PI: Robert Stein, Michigan State University
Abstract: We are performing simulations of solar near-surface magneto-convection. We have begun exploring two cases: one, where strong magnetic fields produced by a deeper convective dynamo are advected into the computational domain at large depths and spontaneously form magnetic loops that emerge as active regions; two, dynamo action near the surface, to see what flux concentrations it can produce.

View presentation PDF  |  View presentation video

Lattice QCD on Blue Waters

Project PI: Robert Sugar, University of California, Santa Barbara
Presented by: Steve Gottlieb, Indiana University & Balint Joo, Jefferson Lab
Abstract: We have developed highly optimized code for the study of quantum chromodynamics (QCD) on Blue Waters and used it to carry out calculations of major importance in high energy and nuclear physics. We used Blue Waters to generate gauge configurations (samples of the QCD vacuum) with both the HISQ and Wilson-Clover actions. For the first time, the up and down quarks in the HISQ calculations are as light as in Nature, leading to a major improvement in precision. With the HISQ configurations, we calculated a ratio of decay constants that enables determination of a key CKM matrix element to a precision of 0.2%, and we also obtained world-leading precision for at least half a dozen additional quantities. The Wilson-Clover project explored the isovector meson spectrum utilizing both the GPU and CPU nodes. On the CPU, a multi-grid solver gave an order of magnitude improvement over previous code.

View presentation PDF  |  View presentation video

Petascale Simulations of Complex Biological Behavior in Fluctuating Environments

Project PI: Ilias Tagkopoulos, University of California, Davis
Abstract: One of the central challenges in computational biology is the development of predictive multi-scale models that can capture the diverse layers of cellular organization. Even more scarce are models that encode biophysical phenomena together with evolutionary forces, in order to provide insight into the effect of adaptation at a systems level. Over the last five years, our lab has created a multi-scale abstract microbial evolution model that unifies various layers, from diverse molecular species and networks to organism- and population-level properties. With the help of Blue Waters and the NCSA team, we are able to scale up to hundreds of thousands of cells, an unprecedented scale of simulation that is, however, well short of the billions of cells present in a bacterial colony. Here, we present our scalability results, the methods we employed to achieve them, and our current work on a data-driven, genome-scale, population-level model for Escherichia coli.

View presentation PDF

Characterizing Structural Transitions of Membrane Transport Proteins at Atomic Detail

Project PI: Emad Tajkhorshid, University of Illinois at Urbana-Champaign
Abstract: Membrane transporters rely on large-scale protein conformational changes to control the accessibility of the substrate from the two sides of the membrane. While computational approaches provide the required temporal and spatial resolutions to describe these changes, such studies have been limited by the long timescales involved in the process. Using Blue Waters, and employing a novel computational approach that combines an extensive search along designed, system-specific reaction coordinates with non-equilibrium simulations, we have performed the most extensive examination to date of the structural changes in membrane transporters. Through these studies, we have identified the lowest-energy paths found so far for structural changes in a number of membrane transporters. Combining these reaction paths with multi-dimensional replica-exchange umbrella sampling calculations as well as additional free energy perturbation simulations, we have been able, for the first time, to provide free energy profiles for the entire transport cycle of a membrane transporter.
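
The free energy perturbation step can be illustrated with the Zwanzig estimator on a toy system where the exact answer is known: translating a harmonic well changes no free energy, so dF = 0. This sketch is unrelated to the actual transporter simulations; the sampling and parameters are illustrative.

```python
import math, random
random.seed(5)

def fep_delta_f(du_samples, kT):
    """Zwanzig free energy perturbation: dF = -kT ln <exp(-dU/kT)>_0."""
    avg = sum(math.exp(-du / kT) for du in du_samples) / len(du_samples)
    return -kT * math.log(avg)

kT, shift = 1.0, 0.5
# Boltzmann samples of state 0, a unit harmonic well: x ~ Normal(0, sqrt(kT))
xs = [random.gauss(0.0, math.sqrt(kT)) for _ in range(200000)]
# Perturbation: translate the well minimum by `shift`; the exact dF is 0
dus = [0.5 * (x - shift) ** 2 - 0.5 * x ** 2 for x in xs]
df = fep_delta_f(dus, kT)
```

In practice the estimator converges only when the two states overlap well, which is why such calculations are staged over many intermediate windows.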

View presentation video

Preliminary Evaluation of Abaqus, Fluent and in-house GPU code Performance on Blue Waters

Fluid-Flow and Stress Analysis of Steel Continuous Casting

Project PI: Brian Thomas, University of Illinois at Urbana-Champaign
Abstract: This project advances the state of the art in computationally intensive models of turbulent flow, mechanical behavior, heat transfer, and solidification in the continuous casting of steel.  These models provide practical insight into how to improve this important manufacturing process.  The performance of, and preliminary results with, the commercial codes FLUENT and ABAQUS and an in-house code are presented.  FLUENT has been tested with a three-dimensional, two-phase turbulent flow simulation and demonstrates a speedup of about 100 with 256 cores.  ABAQUS/Standard has limited speedup capabilities because of its direct solver and works best with about 64 cores; the vast amount of memory on Blue Waters has improved the simulation time of a thermo-mechanical model of the mold and waterbox by a factor of about 40.  A maximum speedup of 25 has been observed on a single Blue Waters XK7 node for the in-house GPU code for flow simulations.

View presentation PDF  |  View presentation video

Re-designing Communication and Work Distribution in Scientific Applications for Extreme-scale Heterogeneous Systems

Project PI: Karen Tomko, Ohio State University
Abstract: The fields of computer architecture, interconnection networks, and system design are undergoing rapid change, enabling very large supercomputers such as Blue Waters to be built. Advances have come in the form of increased parallelism from many-core accelerators and improved communication interfaces. To leverage these advances, applications must use new network capabilities and more sophisticated programming models. In this project we explore communication performance for modern programming models at large scale. Specifically, we evaluate the performance of point-to-point and collective communications for the Cray SHMEM and UPC PGAS models. We tune a hybrid HPL implementation to leverage both CPU and GPU resources on systems with mixed node types. We evaluate collective algorithm performance at very large scale, starting with Broadcast and extending to more complicated operations such as Reduce, All-Reduce, and All-to-All. Additionally, we evaluate the cost of I/O for checkpoint operations, to understand the impact of system-level checkpointing on applications.
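
For reference, the classic binomial-tree algorithm underlying many Broadcast implementations can be written down as a communication schedule. The sketch below is illustrative of the algorithmic pattern only, not of any vendor's implementation.

```python
def binomial_bcast_schedule(nranks):
    """Rounds of (sender, receiver) pairs for a binomial-tree broadcast
    rooted at rank 0; completes in ceil(log2(nranks)) rounds, because the
    set of ranks holding the data doubles every round."""
    have, rounds, dist = {0}, [], 1
    while len(have) < nranks:
        step = [(s, s + dist) for s in sorted(have)
                if s + dist < nranks and s + dist not in have]
        have.update(r for _, r in step)
        rounds.append(step)
        dist *= 2
    return rounds

sched = binomial_bcast_schedule(13)   # 4 rounds suffice for 13 ranks
```

At very large scale, real libraries switch among several such algorithms (binomial, scatter-allgather, hardware multicast) depending on message size and rank count, which is exactly the behavior such evaluations aim to characterize.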

View presentation PDF  |  View presentation video

Petascale Multiscale Simulations of Biomolecular Systems

Project PI: Gregory Voth, University of Chicago
Presented by: John Grime, University of Chicago
Abstract: Computer simulations offer a powerful tool for high-resolution studies of molecular systems. Increases in the potential scope of computer simulations are driven not only by theoretical developments but also by the impressive (and growing) power of modern supercomputers. Very large-scale molecular systems can nonetheless present a serious challenge to simulations at atomic resolutions. In such cases, "coarse-grained" (CG) molecular models can significantly extend the accessible length and time scales via the careful generation of simpler representations. Although CG models are in principle computationally efficient, this advantage may be difficult to realize in practice. Conventional molecular dynamics (MD) software makes certain assumptions about the target system for computational efficiency, and these assumptions may be invalid for dynamic CG models. To address these issues, we developed the UCG-MD software. This presentation will outline key algorithms of the UCG-MD code, and demonstrate their utility for representative CG models of biological systems.

View presentation PDF  |  View presentation video

Computational Exploration of Unconventional Superconductors Using Quantum Monte Carlo

Project PI: Lucas Wagner, University of Illinois at Urbana-Champaign
Abstract: A grand challenge in condensed matter physics is to describe materials with so-called strong correlation, meaning that the interaction between the electrons is important enough to make mean-field simulations qualitatively inaccurate. We have been using Blue Waters to perform explicitly many-body quantum simulations of strongly correlated materials. To date, we have investigated vanadium dioxide and the superconducting cuprates. In vanadium dioxide (VO2), there is a metal-insulator transition at approximately 340 Kelvin. This transition can be used for many technological applications, such as fast optical switching and other cases in which one wishes to easily switch a material between metal and insulator. We have elucidated the mechanism of this metal-insulator transition and made a prediction that was recently observed in neutron experiments at Oak Ridge. The preprint is available as arXiv:1310.1066. In the superconducting cuprates, we have studied the parent material, calculating the important material parameters for the first time from fundamental principles. We have found that there is a strong coupling between spin and lattice degrees of freedom, which will help in understanding experiments aimed at explaining the mechanism of superconductivity. The preprint is available as arXiv:1402.4680.
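
The flavor of such explicitly many-body calculations can be conveyed with a minimal variational Monte Carlo example on a single 1D harmonic oscillator, vastly simpler than the correlated solids studied here: a Metropolis walk samples |psi|^2 and averages the local energy. A toy sketch under those stated simplifications.

```python
import math, random
random.seed(3)

def vmc_energy(alpha, nsteps=20000, step=1.0):
    """Metropolis variational Monte Carlo for a 1D harmonic oscillator
    (hbar = m = omega = 1) with trial wavefunction psi = exp(-alpha x^2)."""
    def local_energy(x):
        # E_L = -psi''/(2 psi) + x^2/2 = alpha + x^2 (1/2 - 2 alpha^2)
        return alpha + x * x * (0.5 - 2.0 * alpha * alpha)
    x, esum = 0.0, 0.0
    for _ in range(nsteps):
        xn = x + step * (2.0 * random.random() - 1.0)
        # Metropolis acceptance using |psi|^2 = exp(-2 alpha x^2)
        if random.random() < math.exp(-2.0 * alpha * (xn * xn - x * x)):
            x = xn
        esum += local_energy(x)
    return esum / nsteps

energy = vmc_energy(0.5)   # alpha = 1/2 is the exact ground state: E = 1/2
```

When the trial wavefunction is exact, the local energy is constant and the variance vanishes; for real correlated materials the walkers live in a many-electron configuration space, which is what demands petascale resources.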

View presentation PDF  |  View presentation video

A Scalable Parallel LSQR Algorithm for Solving Large-Scale Linear System

Project PI: Liqiang Wang, University of Wyoming
Abstract: Least Squares with QR factorization (LSQR) is a widely used Krylov subspace algorithm for solving the sparse rectangular linear systems that arise in tomographic problems. Traditional parallel implementations of LSQR have the potential, depending on the non-zero structure of the matrix, to incur significant communication cost, which can dramatically limit the scalability of the algorithm at large core counts. We propose a scalable parallel LSQR algorithm that exploits the particular non-zero structure of the matrices occurring in tomographic problems. In particular, we treat separately the kernel component of the matrix, which is relatively dense with a random structure, and the damping component, which is very sparse and highly structured. The resulting algorithm has a scalable communication volume with a bounded number of communication neighbors regardless of core count. We present scaling studies on real seismic tomography datasets that illustrate good scalability up to O(10,000) cores on supercomputers including Blue Waters, Yellowstone, and Kraken.
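
The structural idea of treating the two components separately can be sketched for the two products LSQR applies each iteration. The toy below stacks a small dense kernel block over a diagonal damping block; the real algorithm distributes these blocks across cores with different communication patterns, which is not modeled here.

```python
def block_matvec(K, damping, x):
    """y = A x for the stacked system A = [K; D]: a relatively dense kernel
    block K and a diagonal damping block D, applied separately."""
    y_kernel = [sum(row[j] * x[j] for j in range(len(x))) for row in K]
    y_damping = [d * xi for d, xi in zip(damping, x)]
    return y_kernel + y_damping

def block_rmatvec(K, damping, y):
    """x = A^T y, the transpose product LSQR also needs every iteration."""
    m = len(K)
    x = [sum(K[i][j] * y[i] for i in range(m)) for j in range(len(damping))]
    return [xj + d * y[m + k] for k, (xj, d) in enumerate(zip(x, damping))]

K = [[1.0, 2.0, 0.0],
     [0.0, 1.0, 3.0]]          # stand-in for the dense kernel component
damping = [2.0, 0.5, 1.0]      # diagonal of the sparse damping component
y = block_matvec(K, damping, [1.0, 2.0, 3.0])
x = block_rmatvec(K, damping, [1.0] * 5)
```

Because the damping rows touch only one unknown each, their product requires no communication at all; only the kernel block's product needs neighbor exchanges, which is what bounds the communication volume.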

View presentation PDF  |  View presentation video

An Extreme-Scale Computational Approach to Redistricting Optimization

Project PI: Shaowen Wang, University of Illinois at Urbana-Champaign
Abstract: The primary goal of our research project is to develop and implement computational capabilities that can generate and objectively evaluate alternative large-scale political redistricting schemes. We established a scalable parallel genetic algorithm (PGA) approach to tackle the NP-hard redistricting problem. The resulting PGA library, implemented using MPI, scales to 260K processor cores on Blue Waters with an efficient asynchronous migration strategy that eliminates the communication bottleneck introduced by the MPI barrier at the end of each iteration of the evolutionary process. PAPI-based performance profiling of the code shows extreme-scale capability in leveraging Blue Waters for the combinatorial operations needed for large-scale problem solving. We have designed, and are testing, an extension to our approach in order to achieve desirable performance on Blue Waters for the most intensive redistricting problem solving, which features complex spatial genetic algorithm (GA) operators.
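
An island-model parallel GA with ring migration can be sketched as follows. This serial toy (a OneMax objective and arbitrary parameters) only illustrates the migration pattern; it does not model the MPI implementation, the asynchrony, or the spatial GA operators used for redistricting.

```python
import random
random.seed(4)

def fitness(bits):
    return sum(bits)                     # toy objective (OneMax)

def evolve(pop, ngen):
    """Tournament selection, uniform crossover, bit-flip mutation, elitism."""
    for _ in range(ngen):
        elite = max(pop, key=fitness)
        new = [list(elite)]
        while len(new) < len(pop):
            p1 = max(random.sample(pop, 2), key=fitness)
            p2 = max(random.sample(pop, 2), key=fitness)
            child = [a if random.random() < 0.5 else b for a, b in zip(p1, p2)]
            if random.random() < 0.3:
                child[random.randrange(len(child))] ^= 1
            new.append(child)
        pop[:] = new

nislands, popsize, nbits = 4, 20, 32
islands = [[[random.randint(0, 1) for _ in range(nbits)]
            for _ in range(popsize)] for _ in range(nislands)]

for _ in range(20):                      # one migration epoch per pass
    for isl in islands:
        evolve(isl, ngen=5)              # islands evolve independently
    # ring migration: each island's best is copied to its neighbor,
    # replacing a random member; no global barrier is needed for this
    bests = [max(isl, key=fitness) for isl in islands]
    for i, isl in enumerate(islands):
        isl[random.randrange(popsize)] = list(bests[(i - 1) % nislands])

best = max((max(isl, key=fitness) for isl in islands), key=fitness)
```

In an asynchronous MPI realization, each island would post migrants with non-blocking sends and absorb whatever has arrived, so no rank ever waits at an end-of-generation barrier.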

View presentation PDF  |  View presentation video

Huge-Scale Molecular Dynamics Simulation of Multi-bubble Nuclei

Project PI: Hiroshi Watanabe, The Institute for Solid State Physics, University of Tokyo
Abstract: We have developed molecular dynamics codes for a short-range interaction potential that adopt both flat-MPI and MPI/OpenMP hybrid parallelization on the basis of a full domain decomposition strategy. Benchmark simulations involving up to 331.8 billion Lennard-Jones particles were performed on the K computer, and a performance of 1.76 PFLOPS was achieved, which corresponds to 16.6% execution efficiency. Ostwald ripening of bubbles was studied using the developed code, with simulations involving up to 679 million Lennard-Jones particles. The bubble-size distribution functions were observed with high accuracy, and the self-similarity of the distribution functions predicted by the Lifshitz-Slyozov-Wagner theory was directly confirmed. The total number of bubbles decays algebraically, and the decay exponent changes as temperature increases. This change of exponent implies that the type of Ostwald ripening changes from reaction-limited to diffusion-limited.

View presentation PDF  |  View presentation video

GeMTC and Swift: Implicitly-parallel functional dataflow for productive hybrid programming on Blue Waters

Project PI: Michael Wilde, University of Chicago
Presented by: Ioan Raicu, Illinois Institute of Technology
Abstract: Important domains driving the requirements for extreme-scale systems, like materials design, biophysical dynamics, and semantic network exploration, are naturally expressible as many small data-driven tasks. Swift, an implicitly and pervasively parallel dataflow-driven programming language, can elegantly express the massive concurrency demanded by these applications while providing the productivity of a high-level language. GeMTC, a new execution model for accelerators, enables independent fine-grained tasks to be executed on GPUs and other devices. In this work, we have integrated GeMTC and Swift to provide the advantages of implicitly parallel functional dataflow programming in a productively transparent manner on extreme-scale hybrid systems. Notably, we use the same programming and execution model for both CPU and accelerator tasks. Our results to date on Blue Waters suggest that this approach scales well to the petascale level, and that accelerator devices can indeed be encompassed as yet another target engine for the execution of fine-grained run-to-completion tasks.
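
The implicitly parallel dataflow style can be approximated in plain Python with futures: independent tasks are submitted eagerly, and a consumer blocks only on the values it actually uses. This is an illustration of the programming model, not of Swift or GeMTC.

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

def combine(a, b):
    return a + b

# Dataflow style: the eight square() tasks are independent and may run
# concurrently; combine() blocks only on the two futures it consumes.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(square, i) for i in range(8)]
    first_sum = combine(futures[0].result(), futures[1].result())
    results = [f.result() for f in futures]
```

Swift makes this implicit: every expression returns a future-like value, so the programmer never writes submit/result calls, and the runtime is free to schedule tasks on CPUs or accelerators alike.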

View presentation PDF  |  View presentation video

Numerical Simulations of Supercells and Embedded Tornadoes

Project PI: Robert Wilhelmson, University of Illinois at Urbana-Champaign
Presented by: Leigh Orf, Central Michigan University
Abstract: Utilizing the CM1 model, we conducted simulations of supercell thunderstorms. Simulations ranged in resolution, physics options, forcing, and the environment in which the storm formed. Concurrently, development of I/O and visualization code was completed. Two major objectives were realized: 1. A VisIt plugin was developed that enables researchers to visualize both at full model scale and in any arbitrary subdomain. This plugin works with the new CM1 HDF5 output option that was developed and tuned on Blue Waters. 2. A supercell simulation with an embedded long-track EF5 tornado was conducted and visualized with volume rendering techniques utilizing the VisIt plugin. To the best of our knowledge, this simulation is the first of its kind, capturing the genesis, maintenance, and decay of the strongest class of tornado. The simulated tornado traveled for 65 miles and bears a striking resemblance to an observed storm that occurred in a similar environment.

View presentation PDF  |  View presentation video

Petascale Simulation of Turbulent Stellar Hydrodynamics

Project PI: Paul Woodward, University of Minnesota
Abstract: We are exploiting the scale and speed of Blue Waters to enable 3-D simulations of brief events in the evolution of stars that can have profound impacts upon their production of heavy elements.  We are focusing on hydrogen ingestion flash events, because these have been identified as potential sites for the origin of observed, strongly anomalous abundance signatures in stars that formed in the early universe.  Hydrogen that is pulled down into a 12C-rich helium-burning convection zone would produce 13C, which is, via the 13C(α,n)16O reaction, a very strong neutron source for the production of heavy elements.  These flash events, as well as many properties of how pre-supernova stars evolve, depend critically upon the process of convective boundary mixing, which demands a 3-D treatment.   In 2014, we simulated H-ingestion in a very late thermal pulse star, Sakurai's object.  For this star, detailed observational data is available to validate our simulation methodology.

View presentation PDF  |  View presentation video

Type Ia Supernovae

Project PI: Stan Woosley, University of California, Santa Cruz
Presented by: Chris Malone, University of California, Santa Cruz
Abstract: We present results of our high-resolution, full-star simulations of flame propagation in Type Ia supernovae using the compressible hydrodynamics code Castro. Using the tremendous resources available on Blue Waters, adaptive mesh refinement allows the initial burning front to be modeled with an unprecedented effective resolution of 36,864^3 zones (~136 m/zone, compared with a typical 1 km/zone). The initial rise and expansion of the deflagration front are tracked until burning reaches the star's edge (~0.8 seconds). We find that the degree to which pre-existing turbulence affects the burning front decreases with increasing ignition radius, since the buoyancy force is stronger at larger radii. Even central ignition --- in the presence of a background convective flow field --- is rapidly carried off-center as the flame is advected by the flow. Overall, we find that very little mass is burned in our models, resulting in very little expansion of the star; any subsequent detonation will therefore produce a super-luminous supernova.

View presentation PDF  |  View presentation video

Climate Change Research with Blue Waters: Recent Findings and New Directions

Project PI: Donald Wuebbles, University of Illinois at Urbana-Champaign
Abstract: This collaborative research between the University of Illinois, the National Center for Atmospheric Research, and the University of Maryland is using Blue Waters to address key science issues and uncertainties associated with numerical modeling of the Earth's climate system and the ability to accurately analyze past and projected future climate changes. Extensive effort has been put into making the Community Earth System Model (CESM) run accurately and efficiently on Blue Waters. We recently began a major scientific study using CESM at 0.25 degree resolution in the atmosphere with a 1 degree ocean. With this high resolution, the model is being run from 1850 to present and from present to 2100, eventually for several different scenarios of future emissions from human activities. These studies should greatly affect the interpretation of global climate projections. In addition, studies are beginning to examine model uncertainties using the Climate-Weather Research Forecasting model.

View presentation PDF  |  View presentation video

Petascale Computations for Complex Turbulent Flows at High Reynolds Number

Project PI: Pui-Kuen Yeung, Georgia Institute of Technology
Abstract: We study the complexities of turbulent fluid flow at high Reynolds number, where resolution of fluctuations over a wide range of scales requires (even in simplified geometries) massively parallel Petascale computation. The power of Blue Waters, combined with use of remote memory addressing and reserved partitions to minimize communication costs, has made a simulation at record resolution exceeding half a trillion grid points feasible. Early science results include new insights on fine-scale intermittency, spectral transfer, and the connection between extreme events observed in fixed and moving reference frames. Phase 1 of this project has focused on algorithm enhancement and the study of velocity field structure. Phases 2 and 3 will be focused on the mixing of passive substances and slow-diffusing chemical species, and dispersion of contaminant clouds at high Reynolds number. Collaborative work based on the new data is expected to lead to significant advancements in theory and modeling.

View presentation PDF  |  View presentation video

Breakthrough Petascale Quantum Monte Carlo Calculations

Project PI: Shiwei Zhang, College of William and Mary
Presented by: Elif Ertekin, University of Illinois at Urbana-Champaign
Abstract: The central challenge that our PRAC team aims to address is accurate, ab-initio computation of interacting many-body quantum-mechanical systems. The Blue Waters petascale computing framework has enabled highly ambitious calculations of a diverse set of many-body physics and engineering problems, spanning from studies of model systems in condensed matter (the 3D Hubbard model), to superconductivity in high-pressure hydrogen, to the simulation of real materials of current research interest for energy applications. Over the past year, we have carried out simulations addressing these three general target areas. In this presentation, we will give an overview of the team's activities and highlight the latest science results. We will focus on several of our explorations, including near-exact Hubbard model calculations, the dissociation of the chromium dimer, the adsorption of cobalt atoms on graphene, the metal-insulator transition in vanadium dioxide, and calculations of defects in semiconductors for photovoltaic and other applications.

View presentation PDF  |  View presentation video