Scaling to Petascale Institute Resources

The following resources are available to you.

You may optionally earn an "S2PI Digital Badge"

Go to http://moodle.hpc-training.org and create a login.

Within the Scaling to Petascale Institute section of this Moodle site, you can earn separate digital badges for MPI, OpenMP, GPU, and KNL.

There is also an XSEDE OpenACC badge you may earn.

YouTube Recording Links

Unedited recordings are available.

When the edited recordings are available, links will be posted here.

Submitting Questions

The Slack team is used for submitting written questions to presenters, support staff, and other HPC professionals.

  • Team: scalingtopetascale
  • Invitation: https://bit.ly/s2pi-slack
  • There are separate channels for each major topic so you can post questions related to that topic:
    • general
      • welcome messages
      • Keynote
    • mpi
      • Introduction to MPI
      • Advanced MPI

Presenter Materials

Box link to the presentation materials provided by the presenters: http://bit.ly/s2pi17-slides

Presenter Exercises

GitLab link to the exercises provided by the presenters: https://gitlab.com/s2pi/s2pi2017
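If you want to work with the exercises on your laptop or on one of the training systems, one way to fetch them is to clone the repository with Git. This is a minimal sketch; the layout inside the repository is whatever the presenters provided, and appending ".git" to the URL is simply the usual GitLab convention:

  # Clone the presenters' exercise repository and list its contents
  git clone https://gitlab.com/s2pi/s2pi2017.git
  cd s2pi2017
  ls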

Computing Systems

Blue Waters Computing System

  • Your trainee account is active Monday, June 26, 2017 through Saturday, July 15, 2017.
  • Access your account by pointing an SSH client to traxxx@bwbay.ncsa.illinois.edu (see the example after this list).
  • On first access you will be asked to answer a few questions about our Terms of Use Policy. On subsequent logins you will be routed through bwbay to one of three login nodes.
  • This welcome package includes a login sheet; please wait for guidance from the instructors on how to use it.
  • These credentials are to be used only for activities associated with the Scaling to Petascale Institute 2017. Use of the Blue Waters system is subject to the terms of use found at: https://bluewaters.ncsa.illinois.edu/terms-of-use
  • You can open a help ticket if you have issues by emailing: help+bw@ncsa.illinois.edu
  • Data transfer for Education and Training Allocations: https://bluewaters.ncsa.illinois.edu/education-training-allocation-data-transfer
  • The Blue Waters portal provides a range of information: https://bluewaters.ncsa.illinois.edu/blue-waters
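As a sketch of the login step described above, where traxxx stands for the training account username printed on your login sheet:

  # Log in to Blue Waters through the bwbay gateway node;
  # replace traxxx with the username from your login sheet
  ssh traxxx@bwbay.ncsa.illinois.edu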

NERSC Computing Systems

  • At NERSC, we have two supercomputers that you’ll be using this week: Edison and Cori.
  • Edison, a Cray XC30, consists of 5586 nodes, each with two 12-core Intel Ivy Bridge processors and 64 GB of DDR3 memory. For more information on using Edison, please see: http://www.nersc.gov/users/computational-systems/edison/
  • Cori, a Cray XC40, has two types of nodes:
    • There are 2388 “Haswell” nodes, each with two 16-core Intel Haswell processors and 128 GB of DDR4 memory, and
    • 9688 “KNL” nodes, each with one 68-core Intel Knights Landing (KNL) processor and 16 GB of MCDRAM plus 96 GB of DDR memory.
  • For more info on using Cori, please see: http://www.nersc.gov/users/computational-systems/cori/
  • Logging In
    • Simply use ssh from a command prompt to log in to Edison or Cori:
    • ssh train401@edison.nersc.gov
    • ssh train401@cori.nersc.gov
  • Running Jobs
    • We use the SLURM batch system. You can use our job script generator https://my.nersc.gov/script_generator.php to create a script that will run on our systems.
    • If your instructors have set up a reservation for the lesson, you may access it with --reservation=nameofreservation, added either to your batch script or to your job submission line (a sample script appears after this list):
    • #SBATCH --reservation=nameofreservation (batch script)
    • or
    • sbatch --reservation=nameofreservation ./mybatchscript
  • Account Life
    • Your account is active from June 30 until Monday, July 3 at noon Pacific time, when we will reset the passwords and wipe the account directories.
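To illustrate the reservation flag described above, here is a minimal Slurm batch script sketch. The job name, node count, walltime, and executable name (./my_exercise) are placeholders, nameofreservation must be replaced with the reservation your instructors announce, and depending on the system you may also need a partition or node-type constraint; the job script generator linked above will produce a complete script for Edison or Cori.

  #!/bin/bash
  #SBATCH --job-name=s2pi-exercise          # placeholder job name
  #SBATCH --nodes=1                         # one node is enough for most exercises
  #SBATCH --time=00:30:00                   # 30-minute walltime
  #SBATCH --reservation=nameofreservation   # reservation announced by the instructors

  # srun is Slurm's parallel launcher; ./my_exercise is a placeholder executable
  srun ./my_exercise

Submit the script with sbatch ./mybatchscript, or pass --reservation=nameofreservation on the sbatch command line as shown above.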

TACC Computing System

Information about the Stampede2 system: https://portal.tacc.utexas.edu/user-guides/stampede2

We have reserved 300 nodes on Stampede2 for exclusive use by institute attendees. The Slurm reservation ID is "S2PI".
 
Students, instructors, and support staff will be able to use the reserved nodes by adding
 
--reservation=S2PI
 
to the sbatch command and submitting to the normal queue.
 
Additionally, a single node with interactive access can be requested by using the "idev" command. This provides a default 30-minute interactive session on a single node where exercises can be done. Since we were able to secure one node for each training account, this may be the easiest way for students to do hands-on work on Stampede2.
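For example, a job can be submitted against the reservation, or an interactive session requested, along these lines. Here myjob.slurm is a placeholder for your own batch script, and -p normal selects the normal queue mentioned above:

  # Submit a batch job to the normal queue using the institute reservation
  sbatch --reservation=S2PI -p normal myjob.slurm

  # Or request a default interactive session (about 30 minutes on a single node)
  idev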
 

Qwiklabs on Amazon

Everyone who registered at one of the sites is pre-registered. When you log in to nvlabs.qwiklab.com (starting Wednesday) you will find the class you registered for (there are actually five different ones by location, but you will only see one). If you do not already have a Qwiklabs account, you will have to create one using the email address you used to register for the institute.

If you are a walk-in, please have your local host site POC send your email address and site name to Lathrop@illinois.edu.