Petascale Computing Institute
Agenda
Monday


Welcome
Welcome by Scott Lathrop, Blue Waters, NCSA
Welcome by Bill Kramer, Blue Waters Director


Keynote by Gordon Bell
Presenter: Gordon Bell
Title: Man v. Machine: The Challenge of Engineering Programs for HPC
Slides
Abstract:
The first era of scientific computing was defined by Seymour Cray computers hosting single-memory FORTRAN programs. In 1993, the first multicomputer with a thousand independent, interconnected computers outperformed mono-memory supercomputers at 60 gigaflops. In 2019, supercomputers have millions of processing elements and operate at hundreds of petaflops. This performance gain of more than a factor of two million changes not only the nature and scale of the science being simulated or analyzed, but also the algorithms, design, and maintenance of the programs.


Computing Paradigms
Presenter: Niall Gaffney, TACC
Slides
Abstract:
This will be a high-level overview of the various computing paradigms covered during the institute, including CPUs, GPUs, and the programming models MPI, OpenMP, hybrid MPI+OpenMP, OpenACC, and CUDA. The presenters will provide context for the benefits and differences of the various approaches to conducting computational analysis. For example, there will be an overview of GPUs covering:
  • Basic GPU hardware introduction
  • Heterogeneous computing concepts
  • GPU hardware trends and futures
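
To make the directive-based GPU model concrete, here is a minimal OpenACC sketch in C (an illustrative example, not taken from the session materials): a vector addition offloaded to an accelerator. Compilers such as NVIDIA's nvc accept this style of code with an -acc flag.

    #include <stdio.h>
    #include <stdlib.h>

    #define N 1000000

    int main(void) {
        float *a = malloc(N * sizeof(float));
        float *b = malloc(N * sizeof(float));
        float *c = malloc(N * sizeof(float));
        for (int i = 0; i < N; i++) { a[i] = 1.0f; b[i] = 2.0f; }

        /* Copy a and b to the device, bring c back, and let the
           compiler map the loop iterations onto GPU threads. */
        #pragma acc parallel loop copyin(a[0:N], b[0:N]) copyout(c[0:N])
        for (int i = 0; i < N; i++)
            c[i] = a[i] + b[i];

        printf("c[0] = %f\n", c[0]); /* expect 3.0 */
        free(a); free(b); free(c);
        return 0;
    }

Without an accelerator the same pragma degrades gracefully: the compiler can target multicore CPUs or ignore the directive entirely, which is the portability argument behind the directive-based approach.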


Resources at ANL
Presenter: David Martin, ANL
Slides
Abstract:
The Argonne Leadership Computing Facility (ALCF) is a national scientific user facility that provides supercomputing resources and expertise to the scientific and engineering community to accelerate the pace of discovery and innovation in a broad range of disciplines. As a key player in the nation's efforts to deliver future exascale computing capabilities, the ALCF is also helping to advance scientific computing through a convergence of simulation, data science, and machine learning methods. ALCF computing resources are significantly more powerful than systems typically used for scientific research and are available to researchers from universities, industry, and government agencies. Through substantial awards of supercomputing time and user support services, the ALCF enables large-scale computing projects aimed at solving some of the world's largest and most complex problems in science and engineering.


OpenMP
Part 1/2
Part 2/2
Presenters: Yun (Helen) He, NERSC and Oscar Hernandez, ORNL
Slides
Abstract:
OpenMP is the de facto directive-based parallel programming API for programming applications on shared-memory and accelerator architectures. It can be used to express many types of parallelism, including loop worksharing, tasking, offloading, fine-grain SIMD, and SPMD. In this session, we will introduce the basic OpenMP program structure, its major components, and how threads are created and used to share work in parallel regions. We will cover the basics of the OpenMP data environment and memory model, and concepts in optimizing data locality and thread affinity.
We will also discuss the following topics in more detail: parallelizing regular loops with worksharing directives, one of the most important usage patterns in OpenMP; OpenMP tasks for parallelizing irregular computations; the SIMD directive, which aids compiler vectorization of loops; and the target construct, used for offloading compute-intensive kernels to accelerators.
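
As a minimal sketch of the worksharing and SIMD constructs described above (a generic example, not from the session slides), the following C program distributes a loop across a team of threads, vectorizes each thread's chunk, and combines partial sums with a reduction:

    #include <stdio.h>
    #include <omp.h>

    #define N 1000000

    int main(void) {
        static double x[N], y[N];
        double sum = 0.0;

        /* Worksharing: iterations are divided among the threads of the
           parallel region; simd asks the compiler to vectorize each
           thread's chunk; reduction gives each thread a private sum
           that is combined at the end of the loop. */
        #pragma omp parallel for simd reduction(+:sum)
        for (int i = 0; i < N; i++) {
            x[i] = 0.5 * i;
            y[i] = 2.0 * x[i];
            sum += y[i];
        }

        printf("max threads: %d, sum = %e\n", omp_get_max_threads(), sum);
        return 0;
    }

Compile with, e.g., gcc -fopenmp; the OMP_NUM_THREADS environment variable controls the size of the thread team.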


MPI
Part 1/5
Part 2/5
Part 3/5
Presenters: Giuseppe Congiu, Yanfei Guo, and Kenneth Raffenetti, ANL
Slides and Materials
Abstract:
The Message Passing Interface (MPI) has been the de facto standard for parallel programming for nearly two decades, and knowledge of MPI is considered a prerequisite for most people aiming for a career in parallel programming. This beginner-level tutorial introduces parallel programming with MPI. It will provide an overview of MPI, the features it offers, its current implementations, and its suitability for parallel computing environments. The tutorial will also discuss good programming practices and issues to watch out for in MPI programming. Finally, several application case studies, including examples from nuclear physics, combustion, and quantum chemistry, will show how real applications use MPI.

Tutorial Goals: MPI is widely recognized as the de facto standard for parallel programming. Even though knowledge of MPI is increasingly becoming a prerequisite for researchers and developers involved in parallel programming at universities, research labs, and in industry, very few institutions offer formal training in MPI. The goal of this tutorial is to educate users with basic programming knowledge in MPI and equip them with the capability to get started with MPI programming. Based on these emerging trends and the associated challenges, the goals of this tutorial are:

  • Making the attendees familiar with MPI programming and its associated benefits
  • Providing an overview of available MPI implementations and the status of their capabilities with respect to the MPI standard
  • Illustrating MPI usage models from various example application domains, including nuclear physics, computational chemistry, and combustion
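
To ground the beginner-level material, here is a minimal point-to-point example in C (a generic sketch, not the tutorial's own code), in which rank 0 sends one integer to rank 1:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {
            int token = 42;
            /* Rank 0 sends one int to rank 1 (needs at least 2 ranks). */
            MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            printf("rank 0 of %d sent %d\n", size, token);
        } else if (rank == 1) {
            int token;
            MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d\n", token);
        }

        MPI_Finalize();
        return 0;
    }

Build and run with an MPI implementation's wrappers, e.g. mpicc hello.c && mpiexec -n 2 ./a.out.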

Advanced topics may also be covered depending on the interests of the audience. The vast majority of production parallel scientific applications today use MPI and run successfully on the largest systems in the world. At the same time, the MPI standard itself is evolving to address the needs and challenges of future extreme-scale platforms as well as applications. This tutorial may also cover several advanced features of MPI, including new MPI-3 features, that can help users program modern systems effectively. Using code examples based on scenarios found in real applications, we will cover several topics including efficient ways of doing 2D and 3D stencil computation, derived datatypes, one-sided communication, hybrid (MPI + shared memory) programming, topologies and topology mapping, and neighborhood and nonblocking collectives. Attendees will leave the tutorial with an understanding of how to use these advanced features of MPI and guidelines on how they might perform on different platforms and architectures.
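
As a flavor of those advanced patterns, here is a minimal nonblocking 1D halo exchange in C (an illustrative sketch under generic assumptions, not code from the tutorial); it is the communication skeleton behind the stencil computations mentioned above:

    #include <stdio.h>
    #include <mpi.h>

    #define NLOCAL 1024  /* interior points owned by each rank */

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* u has one ghost cell at each end: u[0] and u[NLOCAL+1]. */
        double u[NLOCAL + 2];
        for (int i = 0; i <= NLOCAL + 1; i++) u[i] = rank;

        /* MPI_PROC_NULL turns boundary sends/recvs into no-ops. */
        int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
        int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

        MPI_Request reqs[4];
        /* Post nonblocking receives into the ghost cells... */
        MPI_Irecv(&u[0],          1, MPI_DOUBLE, left,  0,
                  MPI_COMM_WORLD, &reqs[0]);
        MPI_Irecv(&u[NLOCAL + 1], 1, MPI_DOUBLE, right, 1,
                  MPI_COMM_WORLD, &reqs[1]);
        /* ...and nonblocking sends of the boundary interior points. */
        MPI_Isend(&u[1],          1, MPI_DOUBLE, left,  1,
                  MPI_COMM_WORLD, &reqs[2]);
        MPI_Isend(&u[NLOCAL],     1, MPI_DOUBLE, right, 0,
                  MPI_COMM_WORLD, &reqs[3]);

        /* Interior stencil work could run here, overlapping the exchange. */
        MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);

        printf("rank %d ghosts: left=%.0f right=%.0f\n",
               rank, u[0], u[NLOCAL + 1]);
        MPI_Finalize();
        return 0;
    }

Posting the receives first and deferring MPI_Waitall leaves a window in which interior computation can overlap communication, one of the main motivations for the nonblocking API.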