Petascale Computing Institute
Thursday


Python in HPC
Presenter: Ramses van Zon, SciNet, University of Toronto
Slides and Materials
Abstract:
Python has gained tremendous popularity as a scripting language to glue together computationally heavy parts of a workflow or to perform pre- and postprocessing. For better or for worse, it is also increasingly used as a computational environment on its own, using one or more of the many diverse open-source Python packages available. This Python in HPC session is a fair, evidence-based walk-through of techniques to make your Python scripts faster. Using a diffusion code as a working example, it will be shown how to effectively harness multiple threads and processes with Python packages such as numpy, numexpr, multiprocessing, and mpi4py.
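
As a taste of the techniques covered, here is a minimal sketch (an illustrative toy, not the presenter's actual diffusion code) contrasting a pure-Python update loop for one step of 1D diffusion with the same update written as a whole-array numpy expression, plus a numexpr variant that evaluates the expression with multiple threads:

import numpy as np
import numexpr as ne

def diffuse_loop(u, d):
    # Pure Python: u_new[i] = u[i] + d*(u[i-1] - 2*u[i] + u[i+1]); the loop runs in the interpreter.
    u_new = u.copy()
    for i in range(1, len(u) - 1):
        u_new[i] = u[i] + d * (u[i - 1] - 2.0 * u[i] + u[i + 1])
    return u_new

def diffuse_numpy(u, d):
    # Same update as whole-array slicing; the loop now runs in compiled numpy code.
    u_new = u.copy()
    u_new[1:-1] = u[1:-1] + d * (u[:-2] - 2.0 * u[1:-1] + u[2:])
    return u_new

def diffuse_numexpr(u, d):
    # numexpr compiles the expression and evaluates it using multiple threads.
    u_new = u.copy()
    left, mid, right = u[:-2], u[1:-1], u[2:]
    u_new[1:-1] = ne.evaluate("mid + d * (left - 2.0 * mid + right)")
    return u_new

if __name__ == "__main__":
    u = np.zeros(1_000_000)
    u[len(u) // 2] = 1.0              # point source in the middle of the domain
    for _ in range(100):
        u = diffuse_numpy(u, 0.1)

All three versions compute the same result; on large arrays the vectorized forms are typically orders of magnitude faster than the interpreted loop, which is the kind of evidence the session walks through.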


Resources at ORNL
Presenter: Bill Renaud, ORNL


Debugging, Profiling, and Optimization
Presenter: Virginia Trueheart, TACC
Abstract:
To truly take advantage of all the benefits of an HPC system, it is important to understand how to properly optimize your code. Proper optimization requires an understanding of how compilers, debuggers, and profilers work in conjunction to improve your code's performance, which in turn requires a bit of computer science and some software engineering. Computer science defines program optimization (or software optimization) as the process of modifying a software system to make some aspect of it work more efficiently or use fewer resources. Profiling is a software engineering technique used to analyze and measure various aspects of a running program: memory use, processor utilization, function calls, etc. Profiling leads to better-optimized code: it allows a programmer to reorder procedures to improve instruction cache utilization and reduce the paging required by the program, thereby improving performance without affecting the semantic behavior of the program. The performance of code produced when applying profile data depends on the optimizing compiler correctly identifying the most important portions of the program during typical use.

Therefore, it is important to gather profile data while performing the tasks that end users will perform, using input data similar to what is expected in the environment where the program will run. Scientific code can become very complex very quickly, and any programming error might lead to drastically erroneous results. Debuggers are tools designed to help find and eliminate these errors so that the program output can be trusted. They allow a programmer to interactively monitor a program's behavior during execution, setting breakpoints and inspecting the current values of variables, so that the program state can be examined at any point in the run. Debugging tools allow mature and clean code to be deployed with confidence in the end results. Compilers are a microcosm of computer science and are at the heart of running any code. One aspect of programming and developing software stacks is removing the "black box" and understanding the fundamentals of what happens underneath. Unraveling the compiler gives a better understanding of the low level, which in turn helps at the high level: understanding the compiler means understanding the formalisms behind parsing, and shows why the theory of computer science is so important to its application. By combining these three tools and understanding how they work together, we can achieve an optimized software stack to solve our science problems.
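
To make the profiling step concrete, here is a minimal example using Python's standard-library cProfile and pstats modules (an illustrative toy, not material from the talk) to locate the hot spots in a deliberately naive function:

import cProfile
import pstats

def slow_sum(n):
    # Deliberately naive: rebuilding a temporary list on every iteration dominates the runtime.
    total = 0.0
    for _ in range(n):
        total += sum([j * j for j in range(100)])
    return total

if __name__ == "__main__":
    profiler = cProfile.Profile()
    profiler.enable()
    slow_sum(10_000)
    profiler.disable()
    # Report the ten most expensive calls by cumulative time.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)

The same idea applies to the HPC-scale tools covered in the session: gather the profile under realistic input, then let the report tell you which routines are worth optimizing.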


Parallel I/O Best Practices
Presenter: Robert Latham, Argonne National Laboratory
Abstract:
High-performance computing turns compute challenges into I/O challenges. In this talk we will survey the I/O libraries and tools available to computational scientists to effectively meet their data challenges.
MPI-IO provides a foundation for higher-level libraries, so we'll spend a bit of time discussing the optimizations available. Then we will cover Parallel-NetCDF and HDF5, two of the more popular application-oriented libraries. Finally, we will discuss the Darshan characterization tool to help make sense of what is going on behind the scenes with your application's I/O requests.
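
As a rough illustration of what MPI-IO looks like from user code, the sketch below (an assumed example in Python via mpi4py, not taken from the talk) has every rank write its own contiguous block of a shared file with a single collective call:

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank owns a contiguous chunk of doubles.
local = np.full(1024, rank, dtype=np.float64)

# Open one shared file across all ranks (MPI-IO under the hood).
fh = MPI.File.Open(comm, "output.dat", MPI.MODE_WRONLY | MPI.MODE_CREATE)
offset = rank * local.nbytes        # byte offset of this rank's block
fh.Write_at_all(offset, local)      # collective write: every rank participates
fh.Close()

Run with, for example, mpiexec -n 4 python write_demo.py (the script name is just an example); the collective Write_at_all call is where the MPI-IO layer can apply optimizations such as aggregation, and a tool like Darshan can then characterize what the application's I/O actually did.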