# 2015 Blue Waters Symposium: Abstracts

Abstracts are ordered alphabetically by the last name of the project PI.

Molecular dynamics of DNA origami

Project PI: Aleksei Aksimentiev, University of Illinois at Urbana-Champaign

Abstract: The DNA origami method permits folding of long single-stranded DNA into complex three-dimensional structures with sub-nanometer precision. To ensure that such DNA origami objects behave exactly as designed, their structural, mechanical, and kinetic properties must be characterized at both macroscopic and microscopic scales. Our group recently showed that all-atom molecular dynamics simulations can characterize the structural and mechanical properties of DNA origami objects in unprecedented microscopic detail. Using Blue Waters, we have been exploring nanoscale DNA systems for applications in nanofluidic electronics, synthetic tissue engineering, and drug delivery. In this talk, I will describe the results of our recent Blue Waters simulations that predicted the transport properties of DNA origami objects embedded in lipid bilayer membranes, placed on top of a solid-state support, or assembled into nanoscale containers.

Multi-scale Computational Exploration of Two-Dimensional Materials in Nanofluidics and DNA Sequencing

Project PI: Narayana Aluru, University of Illinois at Urbana-Champaign

Presented by: Yanbin Wu, University of Illinois at Urbana-Champaign

Abstract: We use the power of Blue Waters to perform high-accuracy, challenging calculations of the electronic structure of water adsorbed on hexagonal boron nitride (h-BN). An accurate theoretical determination of the h-BN-water potential energy surface will not only establish the foundations of h-BN-water interactions, but will also pave the way for revolutionary advances in BN-based applications in energy, medicine, and water purification. Our diffusion Monte Carlo (DMC) calculations show a strong h-BN-water interaction, indicating that the h-BN surface is more hydrophilic than previously believed. Also, using Blue Waters, we find that single-layer MoS2 is an attractive material for DNA detection with a high signal-to-noise ratio (SNR). For the first time, we introduce the mechano-sensitive (MS) channel for DNA sequencing. The MS channel produces two distinguishable signals, tension and ionic current, making it an attractive pore for DNA sequencing technology.

Virtual Flu: Ongoing Efforts to Study the Electrostatics and Dynamics of the Entire Influenza Virion Coat

Project PI: Rommie E. Amaro, University of California San Diego

Presented by: Jacob D. Durrant, University of California San Diego

Abstract: Influenza infection is routinely responsible for millions of deaths annually, punctuated by catastrophic pandemics roughly every 25 years. While experimental techniques like electron microscopy and tomography have provided many structural-biology insights relevant to drug discovery, they currently lack the atomic resolution required to answer a number of pharmacologically relevant questions, including the impact of whole-virion electrostatics and dynamics on drug affinity, virulence, and the infection process. To address this concern, we have constructed an atomic-resolution model of the entire influenza virion coat. We here describe our ongoing efforts to study this large-scale (210-million-atom) system, as well as the challenges that accompany molecular dynamics simulations on so expansive a scale.

The proposed work moves us a step closer to a full understanding of the influenza infection process, allowing us to explore novel opportunities for drug and vaccine binding sites in silico at resolutions inaccessible to experiment.

Examining the Evolving Properties of Dwarf Galaxies Using the Hydrodynamical Adaptive Refinement Tree Code

Project PI: Kenza Arraki, New Mexico State University

Abstract: The ΛCDM cosmological model has been very successful at predicting the large-scale structure of the Universe. However, on small scales, dark-matter-only simulations have failed to reproduce the number and structure of satellite and isolated dwarf galaxies. The inclusion of gas and stars in simulations has been found to alleviate the small-scale issues within ΛCDM, including the core-cusp, missing satellites, and too-big-to-fail problems. To address these concerns, we have created state-of-the-art, high-resolution hydrodynamical simulations of galaxy formation using the ART code. With these simulations we are investigating the structural and kinematic properties of isolated and satellite dwarf galaxies. By understanding how the evolution of isolated and satellite galaxies differs, we can shed light on the role of baryons in the evolution of dwarf galaxies.

Investigating heliospheric structure with a multi-fluid model for pickup ions

Project PI: Matthew Bedford, University of Alabama in Huntsville

Abstract: The solar wind-local interstellar medium (SW-LISM) interaction creates at least three distinct regions of partially ionized plasma: the supersonic SW within the termination shock, the shocked solar wind in the heliosheath, and the LISM. Charge exchange between a solar wind proton and an interstellar neutral atom results in a neutral atom with region-dependent properties and a hot "pickup ion" (PUI) not in thermal equilibrium with the SW. These PUIs have a significant effect on the structure of the heliosphere. Our multi-fluid MHD heliospheric model includes three neutral fluids corresponding to charge exchange in each region, one fluid for the ion mixture, and additional equations for PUI pressure and density. We present results of several ways to partition energy between thermal and pickup ions.

Inter-job interference, network congestion and task mapping on torus networks

Project PI: Abhinav Bhatele, Lawrence Livermore National Laboratory

Abstract: Network interference both within and across jobs can significantly impact the overall performance of parallel applications. We have been studying inter-job interference and network congestion on torus and dragonfly networks using tools developed at LLNL. Using Boxfish, we can visualize the placement of different jobs on a 3D torus network and the bytes passing through different links. Boxfish can provide visual cues into how job placement can slow down certain jobs. Network congestion among the processes of a job can also impact performance. We have developed tools such as Rubik and Chizu to map MPI processes within a job to different nodes within the allocated partition to optimize communication. Finally, we have been using machine learning to identify the root causes of network congestion on Blue Gene/Q. We propose to use similar techniques to make sense of the performance monitoring data generated on Blue Waters using LDMS.
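
As a concrete illustration of why placement matters, the toy script below (our own sketch, not the actual Rubik or Chizu interface; all names are hypothetical) compares the average number of torus hops per message for a 4x4x4 application grid with 6-point-stencil communication, mapped onto a 4x4x4 torus either contiguously or as a scrambled allocation:

```python
# Toy model: average torus hop count per stencil message for two placements.
import random

DIMS = (4, 4, 4)
N = 64

def torus_hops(a, b):
    """Shortest hop count between two torus coordinates, with wraparound."""
    return sum(min(abs(x - y), d - abs(x - y)) for x, y, d in zip(a, b, DIMS))

def coord(r):
    """Row-major 3D coordinate of rank r in a 4x4x4 grid."""
    return (r // 16, (r // 4) % 4, r % 4)

def avg_stencil_hops(placement):
    """Average torus hops between application ranks that are stencil neighbors."""
    total = pairs = 0
    for r in range(N):
        ax, ay, az = coord(r)
        for dx, dy, dz in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]:
            nbr = ((ax + dx) % 4) * 16 + ((ay + dy) % 4) * 4 + (az + dz) % 4
            total += torus_hops(placement[r], placement[nbr])
            pairs += 1
    return total / pairs

contiguous = {r: coord(r) for r in range(N)}   # application grid == torus grid
random.seed(0)
perm = random.sample(range(N), N)              # a fragmented node allocation
scrambled = {r: coord(perm[r]) for r in range(N)}

good, bad = avg_stencil_hops(contiguous), avg_stencil_hops(scrambled)
print(f"contiguous: {good:.2f} hops/message, scrambled: {bad:.2f} hops/message")
```

On this toy problem, the contiguous mapping costs one hop per message while the scrambled placement averages roughly three, mirroring the kind of slowdown a fragmented allocation or poor mapping can impose.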

Advances in the Theory of Supernova Explosions

Project PI: Adam Burrows, Princeton University

Abstract: Core-collapse supernovae have challenged theorists and computational science for half a century. Such explosions are the source of many of the heavy elements in the Universe and the birthplace of neutron stars and stellar-mass black holes. However, determining the mechanism of explosion remains the key goal of theory. Though the synergistic operation of turbulence and neutrino heating seems implicated, and multi-dimensional simulations with some physical fidelity have provided insight, we have yet to reproduce the phenomenon robustly. In this talk, I will review the goals of supernova theory, the state of the field, and the computational challenges ahead.

Mining the evolutionary dynamics of protein loop structure and its role in biological functions

Project PI: Gustavo Caetano-Anollés, University of Illinois at Urbana-Champaign

Abstract: Protein loops, the "nonregular" structures connecting protein secondary structures, are the likely culprits of both the enormous diversity and specificity of molecular functions that exist in the cell. Advances in our understanding of the evolutionary origin of the structure-function relationship of loops could unlock unprecedented possibilities for synthetic biology. Here we dissect the origin of molecular functions using biophysical and evolutionary bioinformatics techniques. A set of candidate protein loops (~20 amino acid residues long) representative of domain structures along a timeline of domain evolution has been chosen as our initial dataset. One-microsecond NPT molecular dynamics simulations are performed using NAMD 2.9 with the CHARMM36 force field, the Langevin thermostat, and the Berendsen barostat. Correlations between biophysical variables and evolutionary age unravel the structure-function connection. Our findings uniquely link the long-term dynamics of evolution (billions of years) to the short-term microsecond dynamics of the mechanics of protein structure.

Effect and Propagation of Silent Data Corruption in HPC Applications

Project PI: Jon Calhoun, University of Illinois at Urbana-Champaign

Abstract: Modern HPC systems are complex due to the sheer number of components that comprise them. With this complexity comes the reality of failures. One particularly damaging and little-understood type of failure is silent data corruption (SDC). SDC occurs when program state changes without intervention by the application or the system. An understanding of how applications handle state perturbations and how these corrupted values propagate through HPC applications is key to mitigating SDC's effects. In this talk, we present our results from fault injection experiments on an algebraic multigrid (AMG) linear solver. We explore the sparse matrix-vector multiply, a fundamental component of AMG and other HPC applications. In addition, we explore the effects of SDC on other applications and HPC computation kernels. Finally, we discuss algorithm-level fault tolerance for SDC detection.
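
To make the propagation question concrete, here is a minimal fault-injection sketch (our illustration, not the authors' tooling): a single bit is flipped in one stored value of a CSR sparse matrix-vector multiply, and we record which output rows are corrupted.

```python
# Flip one bit of a double in a CSR matrix and see where the error lands.
import struct

def flip_bit(x, bit):
    """Flip one bit of a float64's IEEE-754 representation."""
    (i,) = struct.unpack("<Q", struct.pack("<d", x))
    (y,) = struct.unpack("<d", struct.pack("<Q", i ^ (1 << bit)))
    return y

def spmv(vals, cols, rowptr, x):
    """y = A @ x with A stored in CSR format."""
    return [sum(vals[k] * x[cols[k]] for k in range(rowptr[r], rowptr[r + 1]))
            for r in range(len(rowptr) - 1)]

# A 3x3 tridiagonal matrix in CSR form, applied to the all-ones vector.
vals   = [2.0, -1.0, -1.0, 2.0, -1.0, -1.0, 2.0]
cols   = [0, 1, 0, 1, 2, 1, 2]
rowptr = [0, 2, 5, 7]
x = [1.0, 1.0, 1.0]

clean = spmv(vals, cols, rowptr, x)
vals[3] = flip_bit(vals[3], 52)        # silently corrupt one entry of row 1
faulty = spmv(vals, cols, rowptr, x)
errors = [i for i, (a, b) in enumerate(zip(clean, faulty)) if a != b]
print("output rows affected by a single bit flip:", errors)
```

In a single SpMV, each matrix entry touches exactly one output row, so the corruption stays local; it is the reuse of that output inside an iterative solver such as AMG that spreads the error through the computation.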

Quantum Monte Carlo studies of Dense Hydrogen

Project PI: David Ceperley, University of Illinois at Urbana-Champaign

Abstract: Hydrogen accounts for much of the visible mass in the universe, and its properties are important for understanding the giant planets, but experiments under the relevant conditions are challenging. We have developed new QMC methods to provide definitive results for hydrogen under extreme conditions and, using them, have observed clear evidence of an "extra" liquid-liquid phase transition. We have performed a "random structure search" to determine the ground-state crystal structures of atomic metallic hydrogen and find that this solid is stable at low temperature but melts at temperatures between 200 K and 300 K at high pressures. We have computed the "Hugoniot" of dense deuterium to compare with experiment. We have carried out a careful examination of various DFT functionals in liquid and solid hydrogen and find that certain functionals are much more accurate, though they require systematic pressure corrections.

Convergence and reproducibility in molecular dynamics simulations of nucleic acids enabled by Blue Waters

Project PI: Thomas Cheatham, III, University of Utah

Abstract: Using Blue Waters petascale resources for atomistic molecular dynamics simulations of various nucleic acid systems, alone and interacting with various ligands and proteins, we have been able to effectively demonstrate "convergence" and "reproducibility" in the results. This is enabled by optimized versions of AMBER on GPU resources and ensemble-based replica-exchange molecular dynamics simulations. Although truly defining "convergence" is elusive, we demonstrate that independent sets of simulations with vastly different initial conditions, and even different enhanced sampling methods, give conformational ensembles that are essentially indistinguishable from each other. Convergence has been shown for internal DNA helices, RNA tetranucleotides, RNA tetraloops, and other systems. The effective sampling provides an excellent means to validate and assess force fields. The resulting shared data, specifically ensembles of converged RNA simulations, will ultimately be made available using the new data sharing services on Blue Waters.
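
One simple way to quantify this kind of agreement (a minimal sketch of ours, not the project's actual analysis; the observable and distributions are invented) is the histogram overlap between two independently generated ensembles reduced to a scalar observable:

```python
# Histogram overlap: 1.0 means the two sampled ensembles are indistinguishable.
import random

def histogram_overlap(a, b, bins, lo, hi):
    """Fraction of probability mass shared by two sampled distributions."""
    w = (hi - lo) / bins
    ha, hb = [0] * bins, [0] * bins
    for x in a:
        ha[min(bins - 1, max(0, int((x - lo) / w)))] += 1
    for x in b:
        hb[min(bins - 1, max(0, int((x - lo) / w)))] += 1
    return sum(min(p / len(a), q / len(b)) for p, q in zip(ha, hb))

random.seed(1)
# Two "independent runs" sampling the same underlying distribution should
# overlap almost completely; a different distribution should score lower.
run1  = [random.gauss(0.0, 1.0) for _ in range(20000)]
run2  = [random.gauss(0.0, 1.0) for _ in range(20000)]
other = [random.gauss(2.0, 1.0) for _ in range(20000)]
same_overlap = histogram_overlap(run1, run2, 50, -5.0, 7.0)
diff_overlap = histogram_overlap(run1, other, 50, -5.0, 7.0)
print(f"same distribution: {same_overlap:.2f}, different: {diff_overlap:.2f}")
```

Applied to ensembles from two simulations started from vastly different initial conditions, an overlap near one over many observables is one operational reading of "convergence."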

Ab initio symmetry-adapted no-core shell model for nuclear structure and reaction studies on the Blue Waters system

Project PI: Jerry Draayer, Louisiana State University

Abstract: A grand challenge in physics is to design a comprehensive theory for modeling nuclear structure and processes using nuclear forces consistent with the underlying theory of Quantum Chromodynamics. We developed an innovative theoretical framework for first-principles modeling of nuclear structure, dubbed the symmetry-adapted no-core shell model (SA-NCSM), and implemented this approach as a hybrid MPI/OpenMP parallel computer code that scales well to hundreds of thousands of processors. The extremely large memory and computational power of the Blue Waters system allow us to study nuclear isotopes with a precision hitherto inaccessible to theory, thereby addressing some of the most challenging problems in physics today. Utilizing nearly the entire Blue Waters system, we carried out computations of medium-mass isotopes that are important for advancing our understanding of nucleosynthesis. We also explored the properties of the $^{12}$C nucleus, including the Hoyle state, whose structure has been debated for the last several decades.

First galaxies and Quasars in the BlueTides simulation

Project PI: Tiziana Di Matteo, Carnegie Mellon University

Abstract: We will describe the BlueTides cosmological simulation run with the new version of the code MP-Gadget. The simulation aims to understand the formation of the first quasars and galaxies, from the smallest to the rarest and most luminous, and the role of these processes in the reionization of the universe. The simulation is being used to make predictions for the largest telescopes currently planned to study the "end of the Dark Ages" epoch in the early universe, when the first galaxies and quasars form and reionization of the universe takes place. BlueTides has successfully used essentially the entire set of XE6 nodes on Blue Waters. It follows the evolution of 0.7 trillion particles in a large volume of the universe (600 co-moving Mpc on a side) over the first billion years of the universe's evolution, with a dynamic range of 6 (12) orders of magnitude in space (mass). This makes BlueTides by far the largest cosmological hydrodynamic simulation ever run. BlueTides follows not only hydrodynamics but also includes models for the physics of galaxy formation, such as radiative processes, star and black hole formation, and energy release. We improved the code infrastructure in several ways, including memory and threading efficiency and the mesh gravity solver.

DNS of liquid droplets in a turbulent flow

Project PI: S. Elghobashi, University of California, Irvine

Presented by: Michele Rosso, University of California, Irvine

Abstract: The  objective of our research is to study the multi-way interactions between turbulence and vaporizing liquid droplets  via direct numerical simulation (DNS). DNS resolves, in 3D space and time, the freely-moving droplets as well as  all the relevant length- and time-scales of the turbulent motion. Our DNS solves the unsteady three-dimensional Navier-Stokes and continuity equations throughout the whole computational domain, including the interior of the liquid droplets. The droplet surface motion and deformation are captured accurately by using the Level Set method. The pressure and viscosity jump conditions across the interface are accounted for via the Continuum Surface Force approach. We will present results of the first stage of our research which considers the shape change of an initially spherical liquid water  droplet settling in quiescent air. The effects of surface tension are accounted for. We validate our results via comparison with available experimental studies.

Simulating Thermal Transport in Nanostructures from the Ballistic to the Diffusive Regime

Project PI: Elif Ertekin, University of Illinois at Urbana-Champaign

Abstract: With the help of the Blue Waters petascale supercomputer, we have been able to simulate thermal transport in nanoscale materials such as graphene and the newly synthesized carbon nanothreads. Our simulations span orders of magnitude of length scales, from the ballistic towards the diffusive transport regime. The nature of thermal transport in these low-dimensional materials has been the subject of immense research controversy for the last several years. Several analytical models predict that the thermal conductivity of these materials will diverge without bound with increasing sample size. However, it is typically very difficult to directly confirm these predictions, as the large phonon mean free paths necessitate the simulation of systems of prohibitively large size. Our molecular dynamics results on the nature of phonon transport in graphene and in carbon nanothreads (two- and one-dimensional materials, respectively) reveal a markedly different asymptotic nature of the transport for the two systems.

Sequence Similarity Networks for the Protein "Universe"

Project PI: John A. Gerlt, University of Illinois at Urbana-Champaign

Presented by: Boris Sadkhin, University of Illinois at Urbana-Champaign

Abstract: The Enzyme Function Initiative (EFI) is a Large-Scale Collaborative Project (NIGMS U54GM093342-05) that is developing tools and strategies to enable more accurate predictions of the enzymatic activities and physiological functions of unknown/uncharacterized enzymes discovered in genome sequencing projects. The goal of this project is to calculate sequence similarity networks (SSNs) for the complete library of Pfam-defined protein families (>98,000,000 sequences, January 2015). SSNs allow experimentalists to visualize the relationship between protein sequence identity and enzymatic function for a specific protein family (hundreds to thousands of proteins). To date, we have been able to generate SSNs for 97% of the Pfam (v48) library. We focused our efforts on developing, scaling, and more efficiently running our production pipeline, and have developed strategies for effectively dealing with the ever-increasing genomic dataset. Our current goal is to begin generating and disseminating complete libraries of SSNs to the biological community in 2015.

Scalable molecular dynamics and Monte Carlo simulations of millions of particles on thousands of GPUs

Project PI: Sharon C. Glotzer, University of Michigan

Presented by: Joshua A. Anderson, University of Michigan

Abstract: We expand HOOMD-blue from a single-GPU molecular dynamics code to a multi-GPU code capable of scaling to thousands of GPUs on Blue Waters. We keep all data GPU-resident, auto-tune kernel launch parameters, and reduce communication and kernel launch latency as much as possible. The HOOMD-blue v1.0 release includes all of this functionality in a production-ready code. For a future version of HOOMD-blue, we have developed a scalable parallel algorithm for Metropolis Monte Carlo simulations of particles with short-range interactions. Using the same communication routines developed for molecular dynamics combined with checkerboard trial moves, it scales to systems of millions of particles on thousands of GPUs. It supports a variety of shapes (spheres, ellipsoids, convex (sphero)polygons, simple polygons, convex (sphero)polyhedra, and general polyhedra), NVT and NPT ensembles, pressure measurement, and free energy calculations, along with all the file I/O and scripting capabilities already present in HOOMD-blue.
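
The checkerboard idea can be shown in a few lines of serial code (a conceptual sketch of ours, much simplified from the HOOMD-blue implementation): cells at least one interaction range wide are colored like a checkerboard, and trial moves in same-colored cells cannot conflict with one another, so they could be dispatched concurrently to GPU threads.

```python
# Serial illustration of checkerboard Monte Carlo for hard disks.
import random

L, CELL, SIGMA = 8.0, 2.0, 1.0   # box size, cell width, hard-disk diameter
random.seed(2)
# One disk per cell of a 4x4 cell grid, initially at the cell centers.
disks = [((i + 0.5) * CELL, (j + 0.5) * CELL) for i in range(4) for j in range(4)]

def overlaps(p, q):
    """Hard-disk overlap test with periodic boundary conditions."""
    dx = min(abs(p[0] - q[0]), L - abs(p[0] - q[0]))
    dy = min(abs(p[1] - q[1]), L - abs(p[1] - q[1]))
    return dx * dx + dy * dy < SIGMA * SIGMA

def sweep(color):
    """Attempt one trial move per disk in cells of the given color (0 or 1)."""
    accepted = 0
    for k, (x, y) in enumerate(disks):
        cx, cy = int(x // CELL), int(y // CELL)
        if (cx + cy) % 2 != color:
            continue
        nx = (x + random.uniform(-0.3, 0.3)) % L
        ny = (y + random.uniform(-0.3, 0.3)) % L
        # Moves leaving the cell are rejected, so same-color updates are
        # independent of one another and could run in parallel.
        if int(nx // CELL) != cx or int(ny // CELL) != cy:
            continue
        if all(not overlaps((nx, ny), disks[j])
               for j in range(len(disks)) if j != k):
            disks[k] = (nx, ny)
            accepted += 1
    return accepted

moves = sum(sweep(color) for _ in range(100) for color in (0, 1))
print("accepted moves:", moves)
```

In the full algorithm the cell grid is also shifted between sweeps so particles can cross cell boundaries; this toy keeps the grid fixed purely to show why same-colored cells never interfere.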

Algorithms for Extreme-Scale Systems

Project PI: William Gropp, University of Illinois at Urbana-Champaign

Abstract: The large number of nodes and cores in extreme scale systems requires rethinking all aspects of algorithms, especially for load balancing and for latency hiding.  In this project, I am looking at the use of nonblocking collective routines in Krylov methods, the use of speculation and large memory in graph algorithms, the use of locality-sensitive thread scheduling for better load balancing, and model-guided communication aggregation to reduce overall communication costs.  This talk will discuss some current results and future plans, and possibilities for collaboration in evaluating some of these approaches.

Quantum treatment of protons with the nuclear electronic orbital method

Project PI: Sharon Hammes-Schiffer, University of Illinois at Urbana-Champaign

Presented by: Kurt Brorsen, University of Illinois at Urbana-Champaign

Abstract: The Born-Oppenheimer (BO) approximation assumes that because the mass of the nuclei is much greater than the mass of the electrons, the electrons move much faster than the nuclei. The BO approximation allows the total wavefunction to be written as a product of electronic and nuclear wavefunctions and the motion of the nuclei to be treated with classical mechanics. The BO approximation is invoked by most standard methods of quantum chemistry. For certain chemical processes, such as proton-coupled electron transfer (PCET), the BO approximation breaks down and non-BO effects of the system must be included. The nuclear electronic orbital (NEO) method includes non-BO effects by treating select nuclei quantum mechanically. Recent advances in the NEO method will be discussed, with a focus on achievements enabled by the use of Blue Waters.

Policy responses to climate change in a dynamic stochastic economy

Project PI: Lars Hansen, University of Chicago

Presented by: Kenneth Judd, Hoover Institution

Abstract: We evaluate alternative policy responses to climate change in dynamic models that incorporate risks and uncertainties in the economic and climate systems. Economic risks include business cycle fluctuations. Climate risks include multiple interacting tipping points calibrated to current climate science views. We do not know the true parameters governing these processes. We examine how economic actors can use Bayesian learning of critical parameters (such as climate sensitivity) to incorporate these uncertainties in their decisions. We find that these risks produce results dramatically different from existing deterministic models. The 2015 social cost of carbon is significantly higher, the future social cost of carbon varies greatly over the next century, and the range of plausible emission paths is greater than the collection of scenarios typically used in IAM analysis. The code combines the power of Blue Waters with efficient numerical methods for quadrature, approximation, and optimization, and scales linearly to tens of thousands of cores.
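
The Bayesian-learning ingredient can be sketched with a conjugate normal-normal update (a hypothetical toy of ours, not the authors' model; all numbers are invented): each noisy observation shrinks an agent's uncertainty about a parameter such as climate sensitivity.

```python
# Conjugate normal-normal Bayesian updating of an uncertain parameter.
import random

def update(mu, var, obs, obs_var):
    """Posterior mean and variance after one noisy observation."""
    gain = var / (var + obs_var)          # how much to trust the new data
    return mu + gain * (obs - mu), (1.0 - gain) * var

mu, var = 3.0, 4.0                        # diffuse prior belief
true_value, obs_var = 2.0, 1.0            # unknown parameter, noise level
rng = random.Random(3)
for _ in range(50):
    mu, var = update(mu, var, true_value + rng.gauss(0.0, 1.0), obs_var)
print(f"posterior mean {mu:.2f}, variance {var:.3f}")
```

After fifty observations the posterior variance has collapsed by two orders of magnitude; it is this evolving uncertainty, absent from deterministic models, that reshapes optimal policy in the stochastic setting.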

Ice and Water on Blue Waters

Project PI: S. Hirata, University of Illinois at Urbana-Champaign

Abstract: Simulations of bulk solids and liquids at an accurate ab initio theoretical level, treating all electrons quantum mechanically, have long been unthinkable. We present just such simulations for a whole range of structural, dynamical, thermodynamic, and response properties of ice (phase Ih), high-pressure ice (phase VIII), dry ice (solid CO2 phase I), high-pressure dry ice (solid CO2 phase III), and liquid water. They are made possible by combining an algorithmic breakthrough (the embedded-fragmentation technique) with the massive computational power of Blue Waters. The calculated properties include structures, equations of state, bulk moduli, thermal expansion, heat capacities, pressure tuning of Fermi resonance, infrared, Raman, and inelastic neutron scattering spectra, solid-solid phase transitions, self-diffusion coefficients, Raman noncoincidence, and hydrogen-bond lifetime and reorganization in the liquid, some of which are accessible only by predictive high-performance computing.

Multiscale Modeling of Bone Fracture and Strength

Project PI(s): Iwona Jasiuk and Seid Koric, University of Illinois at Urbana-Champaign

Presented by: Martin Ostoja-Starzewski

Abstract: We use Blue Waters to run a multiscale model of bone fracture and strength. We model bone as a material with hierarchical structure spanning the nanoscale, sub-microscale, microscale, mesoscale, and macroscale levels. We first conduct nanoscale simulations using LAMMPS and then model higher scales using the finite element software ABAQUS. Results obtained at each scale provide new knowledge on bone's response to loads. This first experimentally based multiscale model of bone fracture and strength should have a high impact on the clinical assessment of bone. More specifically, such a model will serve as a novel tool for more accurate prediction of osteoporosis, a bone disease characterized by bone's susceptibility to fracture. Currently, bone quality is assessed clinically by measuring bone mineral density, while bone's complex hierarchical structure also contributes to bone properties. Blue Waters is crucial for these computations due to the complexity of bone's structure and the nonlinearities resulting from progressing damage.

High Accuracy 3D Radiative Transfer in Cloudy Atmospheres

Project PI: Alexandra Jones, University of Illinois at Urbana-Champaign

Abstract: One of the most important roles clouds play in the atmosphere is redistributing radiative energy, both incoming from the sun and emitted by the Earth and atmosphere. However, radiative transfer in the atmospheric sciences is generally modeled crudely because of the perceived computational expense. A 3D Monte Carlo radiative transfer model has been developed for massively parallel use on Blue Waters to quantify the relative importance of radiative heating and cooling in various atmospheric scenarios at higher accuracy than ever before. Such a highly accurate model can provide the community with high-spatial-resolution radiative quantities, such as volumetric heating rates, boundary fluxes, and top-of-atmosphere radiances, that can act as standards of comparison for the less accurate but less expensive radiative transfer models more commonly employed. Challenges in development have included finding the right balance between dynamic load management and communication latency to achieve good scaling.
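
The core Monte Carlo mechanic is easy to state (this is a one-dimensional, absorption-only toy of ours, far simpler than the 3D model described): photon free paths are sampled in optical-depth units, and the fraction crossing a slab recovers the Beer-Lambert law.

```python
# Monte Carlo photon transport through a purely absorbing 1D slab.
import math
import random

def transmittance(tau, n_photons, rng):
    """Fraction of photons crossing a slab of optical depth tau."""
    passed = 0
    for _ in range(n_photons):
        # Free path in optical-depth units: s = -ln(1 - u), u uniform in [0, 1).
        if -math.log(1.0 - rng.random()) > tau:
            passed += 1
    return passed / n_photons

rng = random.Random(0)
tau = 1.5
mc = transmittance(tau, 200_000, rng)
exact = math.exp(-tau)          # Beer-Lambert law for pure absorption
print(f"Monte Carlo: {mc:.3f}, analytic: {exact:.3f}")
```

The full 3D problem adds scattering direction sampling, heterogeneous cloud fields, and domain decomposition, which is where the load-management and communication challenges mentioned above arise.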

Petascale Research in Earthquake System Science on Blue Waters

Project PI: Thomas Jordan, University of Southern California

Presented by: Philip Maechling

Abstract: SCEC's multi-disciplinary research team is using NCSA Blue Waters to develop physics-based computational models of earthquake processes. During the past year, we integrated more realistic physics into our computational software to model frequency-dependent attenuation, small-scale heterogeneities, free-surface topography, and non-linear yielding effects, all of which become increasingly important when simulating high-frequency ground motions. We then evaluated these changes by using Blue Waters to perform deterministic ground motion simulations at frequencies of 2 Hz and above. We also studied the impact of different crustal velocity models and different attenuation relationships by simulating multiple well-observed earthquakes up to 1 Hz. We validated our simulations against observed ground motions, and then used our improved computational models to produce the first site-specific 1 Hz physics-based probabilistic seismic hazard estimates. These results prepare our SCEC team to calculate the first 1 Hz physics-based Los Angeles-region seismic hazard model later in 2015.

Exploiting topology-awareness and load balancing on Blue Waters

Project PI: Sanjay Kale, University of Illinois at Urbana-Champaign

Abstract: Blue Waters is a multi-petaflop production system with a large number of cores executing a large set of distinct jobs at any given time. The scalability of a job is impacted by the dynamics of the jobs executing concurrently and by the characteristics of the individual job. The impact of multiple jobs can be minimized by identifying the communication requirements of a job and placing the job appropriately in the system. At the same time, task mapping within an application execution can be used to improve scalability. Within a job, load balancing among CPUs, and between CPUs and GPUs, is a must to scale a large set of applications. This talk provides a brief overview of our ongoing and planned work that will help application teams make use of the aforementioned techniques. I will also talk about the related software that will be made available for use by MPI and Charm++ programmers.

Large scale kinetic plasma simulations: bridging the gap between "global" and "local"

Project PI: Homayoun Karimabadi, University of California San Diego

Presented by: Vadim Roytershteyn, University of California San Diego

Abstract: We discuss applications of petascale kinetic plasma simulations conducted on Blue Waters to several problems in space plasma physics. Global hybrid three-dimensional simulations of the interaction between the solar wind and a magnetosphere reveal intricate coupling between shock physics, magnetic reconnection, and ion-scale microturbulence, as well as the influence of these processes on the global dynamics of the magnetosphere. The second problem considered is the dynamics at short-wavelength (sub-proton) scales in collisionless plasma turbulence. Here electron kinetic physics is expected to play an important role. Consequently, the employed fully kinetic simulations are capable of adequately describing such small-scale processes. The results shed light on the mechanisms responsible for turbulence dissipation in collisionless plasmas and have important implications for the dynamics of such systems as the solar wind and solar corona.

Protein-lipid interactions in influenza viral entry

Project PI: Peter Kasson, University of Virginia

Abstract: Influenza virus depends critically on specific protein-lipid interactions to infect cells.  These interactions and the broader nanoscale mechanisms of viral entry cannot be directly observed with current experimental techniques, so we use molecular dynamics simulations to study them.  However, they pose a severe challenge for molecular simulation, as biological outcomes are sensitive to fine details of the proteins involved, even reduced model systems for the virus are large by atomic-scale standards, and fusion occurs over sub-millisecond to second timescales.  We approach this problem by using ensemble simulations on best-in-class resources.  Using Blue Waters, we have simulated both the membrane bending by fusion assemblies and the association/dissociation dynamics of protein assemblies believed critical for viral fusion.  Ongoing work will predict and experimentally verify perturbations to this system, helping build an understanding of the nanoscale dynamics of influenza infection.

NEMO5 on Blue Waters - A Flexible Package for Nanoelectronics Modeling Problems

Project PI: Gerhard Klimeck, Purdue University

Presented by: Jim Fonseca, Purdue University

Abstract: Semiconductor device sizes have already reached a countable number of atoms in critical dimensions, and materials and designs are becoming more dependent on atomic details. NEMO5 is a nanoelectronics modeling package designed for comprehending quantum effects such as tunneling, state quantization, and atomic disorder, which begin to dominate the characteristics of these nano-scale devices. Recent research includes: (1) International Technology Roadmap for Semiconductors (ITRS): NEMO5 simulations found important deviations in the characteristics of scaled devices and raise questions about future designs; (2) disorder in semiconductor device leads: alloy disorder in non-ideal device leads can be tuned to optimize heat flow in ITRS devices, avoiding Joule heating and recycling heat into usable electric power; (3) quantum dot donor configuration interaction: designs of possible two-qubit gates for quantum computing have been enhanced by determination of the charging energies and configurations of the two-electron ground state of a single negative donor.

GPU Accelerated Quantum Chemistry: A New Method to Determine Absorption Spectra

Project PI: Sara Kokkila, Stanford University

Abstract: The ability to accurately and efficiently study the absorption spectra of large chemical systems necessitates the development of new algorithms and the use of different architectures. We have developed a highly parallelizable algorithm to study excited-state properties with ab initio electronic structure theory. This approach has recently been implemented to take advantage of graphical processing units to further improve efficiency. This method will be applied to characterize the absorption of the photoactive yellow protein, one of the primary models for studying photoreceptor proteins.

Industrial Computational Breakthroughs at Blue Waters (Applications, Scalability and Challenges)

Project PI: Seid Koric, University of Illinois at Urbana-Champaign

Abstract: The Private Sector Program (PSP) team at NCSA provides domain science and engineering expertise to our 20-25 private sector partners, primarily from the Fortune 500. For years, these companies have turned to PSP to help them advance the performance, scalability, realism, insight and, ultimately, value of their most important high-performance computing applications. The common belief that engineering simulation codes do not scale efficiently on large supercomputers was proved wrong in 2014 on NCSA's Blue Waters, when the PSP team, led by Dr. Koric and working with Rolls-Royce, Procter and Gamble, LSTC, Ansys Inc., Cray, the Barcelona Supercomputing Center (BSC), and IBM-Watson, scaled both commercial and in-house engineering simulation codes and solver libraries to unprecedented levels. These results prove that Blue Waters can tackle a very wide range of challenging tasks, not only in science but also in engineering, and demonstrate the feasibility of efficiently solving extreme-size, real-world multi-physics problems at the petascale, and potentially the exascale, level, adding tremendous value to future engineering simulation and research.

Numerical simulation of multi-material mixing in an inclined interface Richtmyer-Meshkov instability

Project PI(s): Sanjiva K. Lele, Stanford University

Presented by: Akshay Subramaniam, Stanford University

Abstract: In this work, high-fidelity simulations of the Richtmyer-Meshkov instability on an inclined interface between air and SF6 in a shock tube are performed for a Mach 1.5 shock. The simulations attempt to replicate an experiment conducted at the Texas A&M fluid mixing shock tube facility. Simulations of this problem at multiple spatial resolutions have shown that, due to early non-linearities in the current configuration, even low-order flow statistics are hard to capture at resolutions where classical RM cases yield good results. The tight coupling between numerics and flow physics and the large range of spatial scales make this a challenging problem to simulate numerically. The simulations shown are conducted with an extended version of the MIRANDA solver developed by Cook et al. (2007), which combines high-order compact finite differences with localized non-linear artificial properties for shock and interface capturing. The MIRANDA code has been shown to scale well up to 65,000 processors by Cook et al. (2005). Some scaling results on Blue Waters will also be shown.

Core-collapse Supernovae through Cosmic Time

Project PI: Eric Lentz, University of Tennessee Knoxville

Abstract: In a core-collapse supernova, the stellar envelope is ejected by tapping the gravitational energy of the collapsed core through neutrino transport and hydrodynamic instabilities. We have successfully computed a 3D model with an explosion for a mid-sized massive star of 15 solar masses and solar composition using our 3D neutrino radiation hydrodynamics code Chimera. Over the course of three years we plan to compute full explosion models, including ejecta properties, for low-, mid-, and high-mass massive stars at each of three compositions (solar, low, and zero metallicity) that represent the range of massive stars contributing to the evolutionary history of our Galaxy and others. I will present our first 3D explosion model and recent efforts to understand the impact of spatial resolution on the understanding of supernovae from simulation.

Particle Approaches for Modeling Nonequilibrium Flows using Petascale Computing

Project PI: Deborah Levin, University of Illinois Urbana-Champaign

Abstract: When steep gradients occur in the strong shocks of hypersonic flows, the use of continuum formulations such as the Navier-Stokes equations is questionable. Instead, the Direct Simulation Monte Carlo (DSMC) method provides a numerical solution to the Boltzmann transport equation, particularly for modeling thermochemical nonequilibrium shock-dominated flows. In the DSMC method, the flow is modeled by a series of collisions among simulated particles, each of which represents a large number of real molecules. The approach is computationally tractable and strongly suited to parallelization because particle collisions and movement are decoupled. Recently we have implemented an approach that uses octree grids to reduce the cell size only in those regions where the mean free path is small, thus reducing the required number of computational particles. The presentation will summarize the computational challenges in light of our recent scalability results using the new octree grid approach.
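
The refinement criterion described above can be sketched in a few lines. This is an illustrative stand-in, not the authors' code: the `Cell` class, the hard-sphere cross-section, and the density callback are assumptions made for the example. A cell is subdivided whenever its edge length exceeds the local mean free path, so cells shrink only where collisions are frequent.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Cell:
    center: tuple               # (x, y, z) of the cell center
    size: float                 # edge length
    children: list = field(default_factory=list)

def mean_free_path(density, sigma=1e-19):
    """Hard-sphere estimate lambda = 1 / (sqrt(2) * n * sigma);
    sigma is an illustrative collision cross-section in m^2."""
    return 1.0 / (math.sqrt(2.0) * density * sigma)

def refine(cell, density_at, max_depth=6, depth=0):
    """Subdivide a cell while its edge exceeds the local mean free path."""
    if depth >= max_depth or cell.size <= mean_free_path(density_at(cell.center)):
        return
    h = cell.size / 2.0
    x, y, z = cell.center
    for dx in (-h / 2, h / 2):
        for dy in (-h / 2, h / 2):
            for dz in (-h / 2, h / 2):
                child = Cell((x + dx, y + dy, z + dz), h)
                refine(child, density_at, max_depth, depth + 1)
                cell.children.append(child)

def leaves(cell):
    """Collect leaf cells, i.e. the cells DSMC particles would be sorted into."""
    if not cell.children:
        return [cell]
    return [leaf for c in cell.children for leaf in leaves(c)]
```

In a shock-dominated flow the density callback would return large values behind the shock, so the tree deepens there and stays coarse in the freestream.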

Simulations of Ribosome Biogenesis on the Whole Cell Level

Project PI: Z. Luthey-Schulten, University of Illinois at Urbana-Champaign

Abstract: Recent experiments are revealing more details of fundamental cellular processes involved in protein synthesis and metabolism. However, a dynamical description of these processes at the whole cell level is still missing. With Blue Waters we have been able to develop a kinetic model of ribosome assembly that reproduces in vitro experimental observations. Stochastic simulations with our GPU-accelerated Lattice Microbe software have extended the model to the entire bacterial cell by computationally linking the transcription and translation events with ribosome assembly on biologically relevant time scales.  We will report on the comparisons to in vivo single molecule experiments and outline how Blue Waters will provide us a window into cellular processes from single cells to colonies.

Instrumenting Human Variant Calling Workflow on Blue Waters

Project PI: Liudmila Sergeevna Mainzer, University of Illinois at Urbana-Champaign

Abstract: If whole-genome sequencing and analysis become part of the standard of care in many hospitals within the next few years, then human genetic variant calling will need to be performed on hundreds of incoming patients on any given day. At this scale, the standard workflow widely accepted in the research and medical communities will use thousands of nodes at a time and have I/O bottlenecks that could affect performance even on a major cluster like Blue Waters. In this presentation, we will discuss the kinds of computational bottlenecks that can be expected, as well as the tools and methods to overcome them. Specifically, we will cover the bottlenecks associated with the large number of small files created by the workflow, saturated I/O bandwidth for parts of the workflow, and unbalanced data load on the file system.

Protons and Path Integrals: Landmark Simulations of Condensed Phase Charge Transfer

Project PI: Nancy Makri, University of Illinois at Urbana-Champaign

Presented by: Thomas Allen, University of Illinois at Urbana-Champaign

Abstract: Proton transfer reactions represent a major category of chemical processes occurring in condensed-phase environments and have important applications to problems in energy storage, materials science, and biological systems. These reactions typically require mixed quantum-classical simulation methods to cope with their high dimensionality and intrinsically quantum mechanical nature, but until recently no computational methods of this type existed that maintained a rigorous connection to the underlying quantum mechanical equations of motion. Now, using Feynman's path integral as a starting point, we have developed the quantum-classical path integral (QCPI) technique, the first mixed quantum-classical method to retain a rigorous connection to the exact equations of quantum dynamics while remaining computationally tractable for atomistic calculations. Using Blue Waters resources, we have applied the QCPI approach to simulations of the Azzouz-Borgis phenol-amine system, an important model of proton transfer reactions in solution.

High-Fidelity Computation of Aero Optics

Project PI: Edwin Mathews, University of Illinois at Urbana-Champaign

Abstract: Using wall-modeled large eddy simulation, the flow of air over a turret is computed to examine the effect of optical distortions caused by compressible flow, or aero-optics. This simulation method utilizes LES to solve the spatially filtered fluid dynamics equations in the turbulent wake and outer region of the boundary layer and couples it with a wall-layer model to alleviate the severe grid resolution requirement imposed by the small eddies in the near-wall region. This approach allows the simulation to be performed at the actual experimental Reynolds number of 2,300,000. The computation is one of the largest and most comprehensive aero-optical simulations to date, solving the flow on over 200 million cells. With Blue Waters, hundreds of optical wavefronts covering the entire turret field-of-view can be collected together with the flow data to be used in the design of flow-control and adaptive optic strategies to mitigate aero-optical aberrations.

Distributed State Estimation for Electric Power Systems

Project PI: Dr. Ariana Minot, Harvard University

Abstract: Accurate state estimation is critical for robust and secure operation of the electric power grid under a high penetration of renewable energy. Recently there has been much interest in how to enhance state estimation through advanced sensor technology. Processing the new measurements generated by these sensors introduces additional computational demands, motivating the need for state estimation algorithms that execute in a distributed manner. We explore the use of high-performance computing for improving distributed state estimation algorithms in power systems. Blue Waters allows us to test our algorithms on large-scale, realistic systems. We present a new parallel state estimation algorithm using matrix-splitting techniques that exploits inherent sparse structure in power systems. The algorithm is written in terms of information exchanged between nodes in a graph and implemented in MPI. With Blue Waters, we can distribute our algorithm on thousands of processors, where each processor simulates a control area of the grid.
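
The matrix-splitting idea behind such an estimator can be illustrated with a small serial sketch. This is not the authors' algorithm: the Jacobi splitting, the toy diagonally dominant "gain matrix," and the right-hand side are assumptions made for the example. With A = M - N and M = diag(A), each unknown is updated using only its graph neighbors (the nonzeros in its row), which is exactly the node-to-node exchange pattern that maps onto MPI ranks.

```python
import numpy as np

def splitting_solve(A, b, tol=1e-10, max_iter=500):
    """Solve A x = b by iterating x <- M^{-1} (N x + b) with M = diag(A).

    Each component update reads only the entries of x where A[i, j] != 0,
    i.e. information exchanged between neighboring nodes in the graph."""
    Minv = np.diag(1.0 / np.diag(A))
    N = np.diag(np.diag(A)) - A
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_new = Minv @ (N @ x + b)
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

# A small diagonally dominant matrix (guarantees the iteration converges):
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([1.0, 2.0, 3.0])
x = splitting_solve(A, b)
```

In the distributed setting each processor would own the rows of one control area and exchange only the boundary entries of x each sweep.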

Petascale Particle-in-Cell Simulation of Kinetic Effects in High Energy Density Plasmas

Project PI:  Warren B. Mori, UCLA

Presented by: Frank S. Tsung, UCLA

Abstract: The UCLA Simulation Group has been on the frontier of high-performance kinetic simulation of plasmas since its early days with its founder, John Dawson. Over the subsequent 40+ years, the group has developed a full suite of codes (with various models for fields and particles) to study kinetic effects in plasmas for a variety of applications, including plasma-based accelerators, laser fusion, and basic plasma physics. In November 2014, simulations performed on Blue Waters were featured on the cover of the journal Nature. The results from that article will be presented, and we will also highlight recently published results featuring simulations performed on Blue Waters.

Exploring the earliest galaxies with Blue Waters and the James Webb Space Telescope

Project PI: Brian O'Shea

Abstract: Galaxies are complicated beasts - many physical processes operate simultaneously, and over a huge range of scales in space and time.  As a result, accurately modeling the formation and evolution of galaxies over the lifetime of the universe presents tremendous technical challenges.  In this talk I will describe some of the important unanswered questions regarding galaxy formation, discuss in general terms how we simulate the formation of galaxies on a computer, and present simulations (and accompanying published results) that the Enzo collaboration has recently done on the Blue Waters supercomputer.  I will also discuss our plans to make this data and related data products available via the Blue Waters Data Sharing service and the National Data Service.

Mechanics of Random and Fractal Media

Project PI: Martin Ostoja-Starzewski, University of Illinois at Urbana-Champaign

Abstract: The objective of this research is to introduce spatial material disorder/randomness into realistic large-scale models of elastic-plastic-brittle materials, and to study the growth of multiscale material systems in the presence of evolving and interacting defects/cracks. In general, we have to be able to handle very large domains in two dimensions (2d) and then to extend the simulations to 3d. Given the major computational challenges, this research has been enabled by Blue Waters so as to advance (i) the development of microstructure-informed stochastic finite element methods, (ii) the simulation of morphogenesis of fractal patterns in fracture and damage of elastic-inelastic materials, and (iii) MRI-based modelling of traumatic brain injury. In the case of (i)-(ii), at a subsequent stage, our large-scale simulations will be extended to other scenarios: anisotropic yield criteria, 2d versus 3d, coupled (e.g. thermo-mechanical, magneto-mechanical) fields, etc. The research employs mostly our own computer codes.

Turbulence in Neutrino-Driven Core-Collapse Supernovae

Project PI: Christian Ott, TAPIR, Caltech

Abstract: Core-collapse supernovae from massive stars are among the most energetic events in the universe. They liberate a mass-energy equivalent of ~15% of a solar mass in the collapse of their progenitor star's core. The majority (~99%) of this energy is carried away by neutrinos, while (~1%) is transferred to the kinetic energy of the explosive outflow. In core collapse, a proto-neutron star is formed and a shock wave is launched that soon stalls, turns into an accretion shock and must be revived for a successful supernova to occur. In the neutrino mechanism, the deposition of a small fraction of the energy leaving the star in the form of neutrinos increases the thermal pressure behind the shock, which pushes the shock out and eventually leads to a runaway explosion. I discuss new simulations facilitated by Blue Waters that show that the anisotropic (buoyant) neutrino-driven high-Reynolds-number turbulent convection behind the shock plays an important, previously underappreciated, role in shock revival. I discuss the nature of neutrino-driven turbulence and argue that a sub-grid model is needed, since global 3D core-collapse supernova simulations, even with Blue Waters, severely underresolve neutrino-driven turbulence.

MHD-driven Supernovae in Three Dimensions

Project PI: Christian Ott, TAPIR, Caltech

Presented by: Philipp Moesta, TAPIR, Caltech

Abstract: Core collapse in rapidly rotating, strongly magnetized progenitors may power some of the most energetic supernovae observed and possibly set the stage for a subsequent long gamma-ray burst. I will present results from two sets of new three-dimensional (3D) general-relativistic magnetohydrodynamic (MHD) simulations that we performed on BlueWaters. First, I will discuss how an m=1 kink instability of the ultra-strong toroidal magnetic field affects jet stability and the dynamics, geometry, and nucleosynthetic signature of MHD-driven explosions. In the second part, I will discuss new 3D simulations of MHD turbulence in rapidly rotating protoneutron stars.

Modeling Solar Wind Flow with the Multi-Scale Fluid-Kinetic Simulation Suite

Project PI: N.V. Pogorelov, University of Alabama in Huntsville

Abstract: The solar wind flow colliding with the local interstellar medium involves neutral atoms as well as thermal and nonthermal ions. Ion-ion collisions are negligible, while ion scattering on magnetic field fluctuations justifies the application of the MHD equations. Ion-neutral collisions should be treated kinetically. We have implemented this approach in the Multi-Scale Fluid-Kinetic Simulation Suite (MS-FLUKSS). Our time allocation on Blue Waters through the NSF PRAC program and the outstanding support from the Blue Waters project team have allowed us to perform breakthrough simulations and interpret a number of spacecraft observations: instabilities and magnetic reconnection near the heliopause, the effect of the heliosphere on the TeV cosmic-ray modulation, the creation of energetic neutral atoms, and the charge-exchange modification of the interstellar flow. We present these results together with code scaling studies.

Charge-exchange Between Monte-Carlo Neutrals and MHD Plasma Applied to the Heliosphere

Project PI: N.V Pogorelov

Presented by: Jacob Heerikhuisen, University of Alabama in Huntsville

Abstract: Many astrophysical plasma environments are in a partially ionized state. The local interstellar medium (LISM) is about 25% ionized, and the presence of neutral hydrogen significantly affects its interaction with the fully ionized solar wind (SW) plasma emanating from the Sun. While the effects of ion-ion and H-H collisions can be ignored, charge-exchange collisions between protons and H atoms significantly modify the distributions of both species. Our code couples an MHD description of the plasma to a Monte-Carlo approach for solving the Boltzmann equation for neutral hydrogen. In this talk we present background material to show how we collect the source terms that represent the energy and momentum exchange due to charge-transfer collisions. We also present some of the latest results from our modeling of the SW-LISM interaction.
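
The bookkeeping behind such source terms can be sketched as follows. This is an illustrative stand-in, not the MS-FLUKSS implementation: unit masses and the simple per-cell event list are assumptions for the example. Each sampled charge-exchange event effectively swaps a proton's and an H atom's velocities, and the resulting momentum and energy change of the plasma is tallied into per-cell source arrays.

```python
import numpy as np

def tally_sources(events, n_cells, mass=1.0):
    """Accumulate per-cell plasma source terms from charge-exchange events.

    events: iterable of (cell_index, v_neutral, v_ion) velocity vectors.
    After charge exchange the new ion carries the old neutral's velocity,
    so the plasma gains mass * (v_neutral - v_ion) of momentum per event,
    and the corresponding kinetic-energy difference."""
    mom_src = np.zeros((n_cells, 3))
    en_src = np.zeros(n_cells)
    for cell, v_n, v_i in events:
        v_n = np.asarray(v_n, dtype=float)
        v_i = np.asarray(v_i, dtype=float)
        mom_src[cell] += mass * (v_n - v_i)
        en_src[cell] += 0.5 * mass * (v_n @ v_n - v_i @ v_i)
    return mom_src, en_src
```

By construction, whatever momentum and energy the plasma gains here the neutral population loses, so the coupled system conserves both.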

Evolution of the small galaxy population from high redshift to the present

Project PI: Thomas Quinn, University of Washington

Abstract: Creating robust models of the formation and evolution of galaxies requires the simulation of a cosmologically significant volume with sufficient resolution and subgrid physics to model individual star forming regions within galaxies.  This project is aiming to do this modelling with the specific goal of interpreting Hubble Space Telescope observations of high redshift galaxies.  To do this modelling, we are using the highly scalable N-body/Smooth Particle Hydrodynamics code, ChaNGa, based on the Charm++ runtime system on Blue Waters to perform a simulation of a 25 Mpc cubed volume of the Universe with a physically motivated star formation/supernovae feedback model.  This past year's accomplishments include using a pathfinding simulation at one tenth the needed resolution, which we will use to study overall star formation histories and luminosity functions.  This simulation is also being used to tune the feedback model to accurately model galaxy morphologies over a range of masses.

Collaborative Research: Petascale Design and Management of Satellite Assets to Advance Space Based Earth Science (OCI-1346727)

Project PI: Patrick Reed, Cornell University

Abstract: This project is a multi-institutional collaboration between Cornell University, The Aerospace Corporation, and Princeton University advancing a Petascale planning framework that is broadly applicable across space-based Earth observation systems design. We have made substantial progress towards three transformative contributions: (1) we are the first team to formally link high-resolution astrodynamics design and coordination of space assets with their Earth science impacts within a Petascale "many-objective" global optimization framework, (2) we have successfully completed the largest Monte Carlo simulation experiment for evaluating the required satellite frequencies and coverage to maintain acceptable global forecasts of terrestrial hydrology (especially in poorer countries), and (3) we have evaluated the limitations and vulnerabilities of the full suite of current satellite precipitation missions including the recently approved Global Precipitation Measurement (GPM) mission. This work illustrates the tradeoffs and consequences of a collapse in the current portfolio of rainfall missions.

Optical Absorption Spectra of In2O3 and Ga2O3 from first-principles calculations

Project PI: Andre Schleife, University of Illinois at Urbana-Champaign

Abstract: Highly transparent and conducting materials are desirable in the context of transparent electronics, photovoltaics, and optoelectronics. Indium oxide (In2O3) and gallium oxide (Ga2O3) exhibit these material properties: their electrical conductivity can be controlled over a large range while they retain remarkable transparency over the visible spectrum. While they have been adopted as transparent conducting oxide layers and as standalone semiconductors, their fundamental optical properties are still poorly understood. High-quality single crystals have only recently become available, and theoretical efforts have been hampered by the complexity of unit cells containing up to 40 atoms. Using state-of-the-art first-principles calculations based on the Bethe-Salpeter approach, we provide the most accurate theoretical optical spectra currently available for these materials. We identify a pronounced influence of excitons near the absorption edge and over a large range of photon energies. Our results advance the understanding of optical anisotropy in Ga2O3 and quantitatively describe the observed shifts with high accuracy.

Atomistic Characterization of the HIV Capsid from Molecular Dynamics Simulations

Project PI: Klaus Schulten, University of Illinois Urbana-Champaign

Presented by: Juan Perilla, University of Illinois Urbana-Champaign

Abstract: The HIV capsid is large, containing about 1,300 proteins with altogether 4 million atoms. Although the capsid proteins are all identical, they nevertheless arrange themselves into a largely asymmetric structure. The large size and lack of symmetry pose a huge challenge to studying the chemical details of the HIV capsid. Simulations of 64 million atoms for over 1 microsecond allow us to conduct a comprehensive study of the physical properties of the entire HIV capsid, including the electrostatic potential, all-atom normal modes, and the effects of the solvent (ions and water) on the capsid. The results from the simulations reveal critical details of the capsid protein with important implications for assembly, uncoating, and nuclear import.

Very high-resolution numerical modeling for climate extremes in Midwest U.S.

Project PI: Ashish Sharma, University of Notre Dame

Abstract: With climate change, the Midwest U.S. region has experienced large climatic extremes, including increased winter temperatures, decreased lake ice, a prolonged growing season, and the early arrival of the winter and spring seasons. Frequencies of extremes such as flash floods and droughts have doubled compared to the last century. In this study, we explore the sensitivity of very high-resolution mesoscale simulations to hydrometeorological climatic extremes over the Midwest. A series of climate downscaling experiments is conducted using the Weather Research and Forecasting (WRF) model at 4 km horizontal resolution for the decade 1991 to 2000. We compare simulation outputs with hybrid gridded meteorological observational datasets derived from co-op station records and PRISM model outputs. This study evaluates the strengths and weaknesses of the simulations and gridded observations for studying climate extremes.

The Secrets in their landscapes: Elucidating conformational preferences of proteins for design of allosteric modulators

Project PI: Diwakar Shukla, University of Illinois at Urbana-Champaign

Abstract: A mechanistic understanding of the large-scale conformational transformations coupled with the activation of key signaling proteins, such as G-Protein Coupled Receptors (GPCRs) and kinases, is of enormous importance not only for the development of therapeutic drugs for the treatment of diseases but also for the understanding of stress-related (due to climate change) signaling in plants. In principle, the nature of a conformational transition could be modeled in silico via atomistic molecular dynamics simulations, although this is very challenging due to the long activation timescales (milliseconds). We employ a computational paradigm that couples cloud computing, transition pathway techniques, and novel Markov State Model (MSM) based sampling algorithms to map the conformational landscape of proteins. These computations provide the thermodynamics and kinetics of activation for the first time and help identify key structural intermediates and novel allosteric sites for targeted design.
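
The MSM construction step can be sketched in a few lines. This is a minimal illustration with an assumed state discretization, not the production pipeline: transitions between discretized conformational states are counted at a chosen lag time and row-normalized into a transition probability matrix, whose spectrum encodes the slow kinetics.

```python
import numpy as np

def estimate_msm(dtraj, n_states, lag=1):
    """Row-stochastic transition matrix from a discrete state trajectory.

    dtraj: sequence of integer state labels sampled at a fixed interval;
    lag: lag time in trajectory frames."""
    counts = np.zeros((n_states, n_states))
    for t in range(len(dtraj) - lag):
        counts[dtraj[t], dtraj[t + lag]] += 1.0
    rows = counts.sum(axis=1, keepdims=True)
    rows[rows == 0.0] = 1.0        # keep rows with no counts well-defined
    return counts / rows

# Example: a two-state trajectory that mostly stays in its current state.
T = estimate_msm([0, 0, 0, 1, 1, 0, 0, 1], n_states=2)
```

In practice the counting would run over many parallel trajectories, which is what makes the approach a natural fit for ensemble sampling on large machines.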

Petascale Computing: Calculating the Impact of a Geomagnetic Storm on Electric Power Grids

Project PI: Jamesina J. Simpson, University of Utah

Abstract: The historical record indicates the possibility of future extreme geomagnetic storms on Earth resulting from coronal mass ejections from the sun.  These geomagnetic storms can create intense electromagnetic fields at the Earth's surface that can disrupt electric power grid operations and even cause blackouts.  Recent measurements indicate that the storm-time ionospheric currents leading to the disturbed surface electromagnetic fields are not simply amplified versions of quiet-time conditions.  Further, electric power grids in varying locations are impacted differently by geomagnetic storms (e.g. "ocean effects," etc.).  As such, the highly complex 3-D ionospheric currents as well as the Earth's topography and lithosphere content must be accounted for in any study and realistic calculation of storm-time surface electromagnetic fields. Using Blue Waters, we are for the first time running highly detailed, global simulations of the Earth-ionosphere waveguide under the effect of a geomagnetic storm.  Disturbed ionospheric currents are modeled in a three-dimensional Maxwell's equations finite-difference time-domain (FDTD) model extending from -400 km to an altitude of 400 km.  Electromagnetic wave propagation and diffusion are calculated on a global scale, and the surface electromagnetic fields are studied in different locations and at different time scales of geomagnetic storms.
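
The FDTD update at the heart of such a model is easiest to see in one dimension. The sketch below is purely illustrative (a uniform vacuum Yee grid with a soft Gaussian source and reflecting ends, nothing like the full 3-D Earth-ionosphere geometry); the grid size and step counts are arbitrary example values.

```python
import numpy as np

def fdtd_1d(n_cells=200, n_steps=250):
    """Leapfrog Yee-grid update of Ez and Hy in 1-D vacuum."""
    c = 299792458.0                 # speed of light [m/s]
    dx = 1.0                        # cell size [m]
    dt = dx / (2.0 * c)             # Courant number 0.5: stable in 1-D
    eps0 = 8.8541878128e-12
    mu0 = 4.0e-7 * np.pi
    Ez = np.zeros(n_cells)          # E nodes; Ez[0], Ez[-1] act as PEC walls
    Hy = np.zeros(n_cells - 1)      # H nodes sit between E nodes
    for n in range(n_steps):
        Hy += dt / (mu0 * dx) * (Ez[1:] - Ez[:-1])          # H half-step
        Ez[1:-1] += dt / (eps0 * dx) * (Hy[1:] - Hy[:-1])   # E half-step
        Ez[n_cells // 2] += np.exp(-((n - 30) / 10.0) ** 2)  # soft source
    return Ez
```

A global storm-time model replaces the vacuum with conductive Earth and ionosphere layers and the line with a spherical shell, but the leapfrog structure is the same.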

Massively Parallel Graph Analytics

Project PI: George M. Slota, Penn State University

Abstract: Real-world graphs, such as those arising from biological systems, social networks, and the topology of the observable web, require special treatment when analyzed using parallel and distributed algorithms. The irregularity, small-world nature, and overall massive scale common to such networks pose a challenge for algorithm designers and analysts who work with them. To ensure that analytic codes run performantly, special care must be taken to optimize at all levels of parallelization: the processor level, the node level, and the system level. This work first identifies algorithmic structures common to a large number of parallel graph analytics. Using these structures, a set of optimizations is developed for each of these levels. Accomplishments of this work include a general approach for analyzing multi-billion-vertex graphs on a large-scale system, as well as demonstrated improvements over the current state of the art using the implemented optimizations.
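
Many of these analytics share a frontier-based, level-synchronous traversal structure. The sketch below shows its serial skeleton only, as an illustration; the optimized codes partition the frontier across threads and MPI ranks, which this example does not attempt.

```python
def bfs_levels(adj, source):
    """Level-synchronous BFS: return {vertex: level} from the source.

    adj maps each vertex to an iterable of its neighbors."""
    level = {source: 0}
    frontier = [source]
    depth = 0
    while frontier:
        depth += 1
        next_frontier = []
        for u in frontier:
            for v in adj.get(u, ()):
                if v not in level:          # first visit claims the vertex
                    level[v] = depth
                    next_frontier.append(v)
        frontier = next_frontier            # advance to the next level
    return level
```

The small-world property mentioned above means the frontier explodes to a large fraction of the graph within a few levels, which is exactly where per-level load balance and communication optimizations pay off.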

Ab Initio Models of Solar Activity

Project PI: Robert Stein, Michigan State University

Abstract: Our goal is to understand and model the formation of solar active regions.  Magnetic flux emergence in active regions is the driver of flares and coronal mass ejections that produce the dangerous storms in Earth's space weather.  Our task is to use results of global solar dynamo simulations to determine spatially and temporally evolving bottom boundary conditions for a magneto-convection simulation of the top 15% of the solar convection zone (three fourths of its density scale heights).  The large number of processors available on Blue Waters is essential to complete this task in a reasonable time because of the large dimensions (both physical and in the computational domain) needed to encompass the wide range of significant scales of solar convection and the long time evolution needed to follow the emergence of magnetic flux from large depths.  This year has focused on improving the scalability of our code by implementing decomposition in the vertical in addition to the horizontal directions, correcting bugs, and relaxing, both thermally and dynamically, an initial state for the flux emergence calculations.

Lattice QCD on Blue Waters

Project PI: Robert L Sugar, University of California Santa Barbara

Presented by: Steven Gottlieb, Indiana University

Abstract: We have developed highly optimized code for the study of quantum chromodynamics (QCD) on Blue Waters and used it to carry out calculations of major importance in high energy and nuclear physics. We used Blue Waters to generate gauge configurations (samples of the QCD vacuum) with both the HISQ and Wilson-Clover actions. For the first time, the up and down quarks in the HISQ calculations are as light as in Nature, leading to a major improvement in precision. For the HISQ action, we generated 67 gauge configurations on a 144^3 x 288 grid with a lattice spacing of 0.045 fm, as well as smaller ones. These are being used to study kaon and charm meson decays with unprecedented precision. The Wilson-Clover project explored the isovector meson spectrum utilizing both the GPU and CPU nodes. For the first time, finite-volume techniques have been used to determine hadron resonance parameters in the scattering of pion, kaon, and eta mesons.

Characterizing Structural Transitions of Membrane Transport Proteins at Atomic Details

Abstract: Membrane transporters actively and selectively pump molecules in or out of the cell. The central role of these proteins in diverse physiological processes such as metabolism and neurotransmission has rendered them among key drug targets. The transport mechanism often involves large-scale conformational changes of the protein coupled to the translocation of transported species such as ions, nutrients, and drugs. However, mechanistic details of transport process at a molecular level remain elusive due to the limitations associated with both experimental and computational techniques. We have developed a novel computational approach to reconstruct the transport cycle using ensemble-based molecular dynamics simulations within an iterative framework involving high-dimensional path-finding algorithms and free energy calculations. Employing Blue Waters resources and using our novel nonequilibrium approach, the complete transport cycle of a membrane transporter is reconstructed at an atomic level and characterized both thermodynamically and kinetically, providing an unprecedented level of detail on transport mechanism.

Overview of XSEDE and Introduction to XSEDE2.0 and Beyond

Project PI: John Towns, University of Illinois at Urbana-Champaign

Abstract: This presentation will briefly review XSEDE, its past mission and accomplishments, and give insight into the direction and vision for the second round of XSEDE.

Applicability of geometric optics method for calculations of single-scattering properties of atmospheric ice crystals

Project PI: Junshik Um and Greg M. McFarquhar, University of Illinois at Urbana-Champaign

Abstract: Ice clouds exhibit a cooling impact on Earth's climate by reflecting solar radiation and a warming impact through absorption of infrared radiation and thermal emission towards Earth's surface. Ice clouds consist of nonspherical ice crystals with various shapes and sizes. To determine the influence of ice clouds on Earth's radiation budget and hence climate, knowledge of their single-scattering properties is required. A geometric optics method (GOM) has been widely used to calculate the single-scattering properties of ice crystals. GOM is an asymptotic solution that is appropriate when a particle is much larger than the wavelength of incident light. However, the range of applicability of GOM has not been well established and should be determined through comparison against numerically exact methods. In this study, single-scattering properties of ice crystals are calculated using both GOM and a numerically exact method (i.e., discrete dipole approximation) to determine the range of applicability of GOM.
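
A common rule of thumb for the validity of geometric optics is a large size parameter x = pi D / lambda, where D is the particle dimension and lambda the wavelength. The quick calculation below (with illustrative, not measured, numbers) shows why the applicability range matters across crystal sizes and wavelengths.

```python
import math

def size_parameter(diameter_um, wavelength_um):
    """Dimensionless size parameter x = pi * D / lambda (same length units)."""
    return math.pi * diameter_um / wavelength_um

# A 100-micron ice crystal in visible light (0.55 micron): x is in the
# hundreds, where geometric optics is expected to be a good approximation.
x_visible = size_parameter(100.0, 0.55)

# A 5-micron crystal in the thermal infrared (11 microns): x is of order
# one, where exact methods such as the discrete dipole approximation are
# needed and GOM breaks down.
x_ir = size_parameter(5.0, 11.0)
```

Comparisons like the one in this study pin down where, between these two regimes, the crossover actually occurs.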

Frequency Parallelization of the Calculation of the Inverse Dielectric Matrix

Project PI: Derek Vigil-Fowler, University of California Berkeley

Abstract: The frequency-dependent inverse dielectric matrix (FDIDM) in the random phase approximation (RPA) is an important quantity because it gives information about electronic energy losses, optical absorption in systems without bound excitons, and electronic band structures (when used in the GW approximation). It is of increasing interest in the physics and materials science community because of the wealth of information it provides and because growing computational power has made its calculation feasible for many systems. I will present a scheme I have implemented in the BerkeleyGW code for parallelizing the computation of the FDIDM over multiple frequencies, in addition to the existing parallelization over bands and reciprocal space. I will discuss the issues with matrix inversion that make this parallelization necessary (e.g., parallel matrix inversion does not scale past hundreds of processors), as well as the scalability of the scheme. Excellent scaling has been obtained, especially for systems with large unit cells.
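The strategy can be illustrated schematically: since parallel inversion of a single matrix scales poorly, each rank instead serially inverts the matrices for its own subset of frequencies. The sketch below distributes frequencies round-robin over simulated ranks; the toy matrix standing in for the RPA dielectric matrix and all names are assumptions for illustration, not BerkeleyGW's implementation:

```python
import numpy as np

def dielectric_matrix(freq, n=4, eta=0.1):
    """Toy stand-in for a dielectric matrix at one frequency (a hypothetical
    model chosen to be invertible, not the actual RPA expression)."""
    g = np.arange(1, n + 1, dtype=float)
    chi = np.outer(g, g) / (freq**2 + eta**2)  # placeholder response term
    return np.eye(n) + chi

def invert_frequencies(freqs, rank, nranks):
    """Each 'rank' serially inverts only the frequencies assigned to it
    round-robin, avoiding poorly scaling parallel matrix inversion."""
    return {i: np.linalg.inv(dielectric_matrix(f))
            for i, f in enumerate(freqs) if i % nranks == rank}

freqs = np.linspace(0.5, 4.0, 8)
nranks = 4
results = {}
for rank in range(nranks):  # stand-in for independent MPI ranks
    results.update(invert_frequencies(freqs, rank, nranks))
# The per-rank pieces together cover every frequency exactly once.
print(sorted(results))  # → [0, 1, 2, 3, 4, 5, 6, 7]
```

Because the per-frequency inversions are independent, this layer of parallelism composes cleanly with the existing distribution over bands and reciprocal-space vectors.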

Coarse-grained (CG) biomolecular simulations on Blue Waters

Project PI: Gregory Voth, University of Chicago

Presented by: John Grime, University of Chicago

Abstract: Coarse-grained (CG) computer simulations allow the study of large-scale (bio)chemical processes that are otherwise prohibitively expensive. CG models are simplified representations that capture the fundamental behaviors of the target molecules, and their use can access phenomena on time and length scales that are otherwise out of reach. However, CG models can introduce significant difficulties for efficient parallel simulation. We describe the algorithmic and software approaches we use to ameliorate these problems, and the application of these techniques to CG simulations of several target systems of biological interest.

Scalable CyberGIS Analytics for Solving a Complex Spatial Optimization Problem

Project PI: Shaowen Wang, University of Illinois at Urbana-Champaign

Abstract: CyberGIS—geographic information science and systems (GIS) based on advanced cyberinfrastructure (CI)—has emerged as a new-generation GIS with broad and significant societal impacts. It emphasizes the integration of advanced CI, GIS, and spatial analysis and modeling capabilities to enable compute- and data-intensive research and education across a broad range of fields. Scaling cyberGIS analytics to efficiently and effectively harness high-end computing resources is critical for cyberGIS to integrate geospatial big data from numerous sources and to support computationally intensive spatial analysis and modeling for solving many challenging and important environmental, geospatial, and social scientific problems. This presentation discusses recent progress on using Blue Waters and cyberGIS analytics to solve a redistricting problem based on an NP-hard spatial optimization formulation with a configuration of objectives and constraints that captures complex requirements of the geographical and political sciences.

Acknowledgements

This material is based on work supported in part by NSF under grant numbers 0846655, 1047916, and 1429699. Computational experiments were performed on the Blue Waters system.

Sensitivity of Available Potential Energy to Changes in Buoyancy Forcing in the Southern Ocean

Project PI: Brian White, University of North Carolina at Chapel Hill

Presented by: Varvara Zemskova, University of North Carolina at Chapel Hill

Abstract: The amount of available potential energy in the ocean is an important metric of the mixing that is necessary to sustain vertical exchanges of heat and nutrients. In the Southern Ocean, buoyancy forcing due to atmospheric cooling of the ocean surface interacts with mechanical wind forcing. A 3D horizontal convection model of the Southern Ocean is investigated through direct numerical simulation in order to assess the sensitivity of the available potential energy to changes in buoyancy forcing that can be anticipated under a changing climate. Horizontal convection models of the ocean are large, require fine resolution to capture turbulent features, and take a long time to reach a steady state, making the resources available on the Blue Waters system crucial.
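Available potential energy is commonly diagnosed as the total potential energy minus that of the statically stable reference state obtained by adiabatically resorting the density field. The one-dimensional sketch below illustrates that construction on a toy profile, assuming a uniform grid; the function name and profiles are illustrative, not the authors' 3-D DNS diagnostic:

```python
import numpy as np

def available_potential_energy(rho, z, g=9.81):
    """APE per unit area for a 1-D density profile rho(z) on a uniform grid:
    total PE minus the PE of the sorted (statically stable) reference state."""
    dz = z[1] - z[0]
    rho_ref = np.sort(rho)[::-1]      # heaviest fluid moved to the bottom, z[0]
    pe = g * np.sum(rho * z) * dz
    bpe = g * np.sum(rho_ref * z) * dz
    return pe - bpe

z = np.linspace(0.0, 1.0, 100)
rho_stable = 1000.0 - 10.0 * z        # stably stratified: no energy available
rho_unstable = 990.0 + 10.0 * z       # heavy fluid on top: positive APE
print(available_potential_energy(rho_stable, z))    # 0.0
print(available_potential_energy(rho_unstable, z))  # > 0
```

Sensitivity experiments like those in the abstract amount to tracking how this quantity responds as the surface buoyancy forcing in the simulation is varied.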

Breakthrough Simulation and Visualization of an Extremely Violent Tornado-Producing Thunderstorm

Project PI: Bob Wilhelmson, University of Illinois at Urbana-Champaign

Presented by: Leigh Orf, Central Michigan University

Abstract: In this presentation, results of a very high resolution thunderstorm simulation are presented. The simulated supercell thunderstorm produces an EF5 (the strongest) tornado that is on the ground for almost two hours before dissipating. Data are saved every two model seconds, and the tornado is visualized utilizing the VisIt and Vapor visualization software. Animations of the storm during the genesis, maintenance, and decay stages of the tornado are presented in very high fidelity. A feature we call the streamwise vorticity current (SVC) is examined. The SVC, which forms along the leading edge of the storm's cold pool to the northwest of the tornado, serves as an important source of vorticity for the storm's mesocyclone (rotating updraft) and may play a key role in maintaining the unusually strong tornado. In addition to presenting scientific results, we also discuss how we overcame the technical challenges of simulating and visualizing at this scale.

Preparing the PPMstar code to run on Blue Waters' GPU Nodes

Project PI: Paul Woodward, University of Minnesota

Abstract: An implementation of the multifluid advection scheme from our PPMstar code on the Kepler GPU achieves: (1) 1.2 Tflop/s in the absence of off-chip memory accesses, (2) an off-chip memory bandwidth of 104 GB/sec using 16 threads of control ("warps") per core ("SMX"), and (3) sustained performance of 128 Gflop/s per GPU for 6-D advection in 32-bit precision. We will describe the methods we used to fit the entire PPMstar algorithm, which is 6.9 times more computationally intensive than this advection scheme, within the GPU's 32 KB of on-chip working data while carefully overlapping data fetches with computation. These techniques should also be applicable to other codes. Initial GPU performance results for the full code, not yet available, are expected by the time of the Blue Waters Symposium and will be reported in this talk.

White Dwarf Mergers as Supernova Progenitors

Project PI: Stan Woosley, UC Santa Cruz

Presented by: Maximilian Katz, Stony Brook University

Abstract: The accelerating expansion of the universe was discovered by observing Type Ia supernovae, brilliant bursts of light in the cosmos that briefly rival their host galaxies in intensity. These events are caused by the explosion of a white dwarf star, yet despite the value they hold to the community, uncertainty remains regarding the cause of this explosion. I am investigating the possibility that it is caused by the merger of two white dwarf stars, using the AMR hydrodynamics code CASTRO. This software implements the Euler equations of hydrodynamics, self-gravity and nuclear reactions, and runs at some of the highest spatial resolutions yet achieved. On Blue Waters, I have recently concluded a verification study of the software using several test problems, and have begun investigating the nature of the early phase of this merger process. I will show some of these results, and discuss our performance characteristics on Blue Waters.

Using Petascale Computing Capabilities to Address Climate Change Uncertainties

Project PI: Donald J. Wuebbles, University of Illinois at Urbana-Champaign

Abstract: This collaborative research between the University of Illinois, the National Center for Atmospheric Research, and the University of Maryland is aimed at using the Blue Waters petascale resources to address key uncertainties associated with the numerical modeling of the Earth's climate system and the ability to accurately analyze past and projected future changes in climate. During 2014, several present-day (1979-2010) Atmospheric Model Intercomparison Project (AMIP) experiments were conducted using the Community Atmosphere Model, version 5 (CAM5) at 0.25° resolution. These simulations revealed that the atmosphere/ocean coupling algorithm could be significantly improved by calculating air/sea fluxes on the higher-resolution atmosphere grid. Preliminary results of CAM5 simulations suggest that the global number of tropical storms and hurricanes per year will decrease in a warming climate. However, there are noticeable increases in the intensity of the most intense storms and therefore in the frequency of major hurricanes (categories 4 and 5).

Petascale simulation of high Reynolds number turbulence

Project PI: P.K. Yeung, Georgia Institute of Technology

Abstract: We study the complexities of turbulent fluid flow at high Reynolds number, where resolution of fluctuations over a wide range of scales requires sustained petascale computation. Using 262,144 cores of Blue Waters in a favorable node-topology configuration, we have performed the first 8192³ production simulation of isotropic turbulence. Both the dissipation rate and enstrophy, representing deformation and rotation of local fluid elements, take values as large as O(10⁵) times the mean, at nearly the same locations in space. Regions of most extreme vorticity are not worm-like as commonly thought. These results have important implications for phenomena such as the stretching of flame surfaces and the preferential concentration of particles of finite inertia. An active algorithmic effort is under way to improve our ability to study the dispersion of tens of millions of fluid particles and other entities in a Lagrangian framework. Turbulent mixing at high Reynolds number will also be addressed.