Invited Speakers

Arden L. Bement, Jr.

Podcast: Keynote Arden Bement sits down with NCSA Public Affairs to discuss Blue Waters and the future of HPC

Arden L. Bement, Jr. is an American engineer and scientist who has served in executive positions in government, industry, and academia. He is a former Professor of Nuclear Materials at MIT, Director of the DARPA Office of Materials Science, Deputy Under Secretary of Defense for Research and Advanced Technology, Chief Technical Officer of TRW, member of the National Science Board, Director of the National Institute of Standards and Technology, and Director of the National Science Foundation. In 2010 he returned to Purdue University from government leave to become founding director of the Global Policy Research Institute and Chief Global Affairs Officer. He is currently David A. Ross Distinguished Professor Emeritus and Adjunct Professor of the College of Technology at Purdue University. He has been awarded honorary doctorates from seven American and foreign universities; is a member of the National Academy of Engineering and the American Academy of Arts and Sciences; and has been awarded the Order of the Rising Sun with Gold and Silver Star from Japan and the Legion of Merit with rank of Chevalier from France.

View presentation PDF | View presentation video

From Megaflop to Petaflop and Beyond

Abstract: This paper is in two parts: (a) a reprise of the three-decade history of NSF's investments in academic supercomputing and communications, and (b) a brief look ahead at new developments in the fields of computational S&E and smart control using supercomputers, smart sensors, and high-speed communications. The history starts with the Peter Lax Report, the initial NSF investment in supercomputing centers in 1985, and the provisioning of NSFNET to build alliance networks between the supercomputing centers and the university science and engineering communities at large. Over three decades, supercomputer throughput rates have increased dramatically, from hundreds of MFLOPS in 1985 to tens of PFLOPS today, making U.S. universities pre-eminent in the world in computational S&E, a tribute to the dedication and leadership of both NSF and the centers. Looking ahead to the impacts of synaptic computing, high-performance computing coupled with data analysis, and cloud computing coupled with the Internet of Things (IoT), the challenges and opportunities in computational S&E in all sectors of U.S. society are projected to advance dramatically.
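As a rough gauge of that growth (an illustrative back-of-the-envelope calculation, taking 100 MFLOPS and 10 PFLOPS as representative endpoints of the three decades):

\[ \frac{10 \times 10^{15}\ \text{FLOPS}}{100 \times 10^{6}\ \text{FLOPS}} = 10^{8}, \qquad 10^{8/30} \approx 1.85 , \]

that is, about eight orders of magnitude overall, or sustained throughput growing by roughly a factor of 1.85 per year, equivalent to doubling about every 13 to 14 months.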

William T.C. Kramer

William T.C. Kramer is director and principal investigator of the Blue Waters project, the director of the UIUC/NCSA @Scale Program Office, and a full research professor in the Computer Science Department at the University of Illinois at Urbana-Champaign. Bill is responsible for leading all aspects of the Blue Waters project, a National Science Foundation-funded project at NCSA. Blue Waters is the first sustained-petascale computing system. It is the most powerful general-purpose computational and data analytics system available to open science and by far the largest system Cray has ever built. It is one of the most powerful resources available to the nation's researchers and is the only public Top-5-class system in the world that chose not to be listed on the TOP500 list. Every year, Blue Waters delivers over 6 billion core-hour equivalents of computational time to the nation's leading science and engineering projects.

Previously, Bill was the general manager of NERSC at Lawrence Berkeley National Laboratory (LBNL) and, before that, was responsible for all aspects of operations and customer service for NASA's Numerical Aerodynamic Simulation (NAS) supercomputer center. Blue Waters is the 20th supercomputer Kramer has deployed and/or managed; he has also deployed and managed large clusters of workstations, five extremely large data repositories, and some of the world's most intense networks. He has also been involved with the design, creation, and commissioning of six "best of class" HPC facilities. He holds a BS and MS in computer science from Purdue University, an ME in electrical engineering from the University of Delaware, and a PhD in computer science from UC Berkeley. Kramer's research interests include large-scale system performance evaluation, systems and resource management, fault detection and resiliency, and cyber protection. Kramer has received awards from NASA, Berkeley Lab, and the Association for Computing Machinery (ACM), was named one of HPCwire's "People to Watch" in 2005 and 2013, and was insideHPC's first "Rock Star of HPC."

The Need for Sustained High-Performance Computing and Data Analysis in the Extreme-Scale Era

Abstract: Computation and data analysis are intrinsically intertwined in every aspect of science, engineering, and research, from theory to experiment. Recently, two community BrainstormHPCD workshops were held to identify the ongoing requirements among open science and engineering communities for future high-performance computational and data analysis (HPCD) resources and services. These workshops identified a persistent, broad, and deep need for high-performance resources and services to enable leading-edge science and engineering investigations. The workshops provided insight into the changing requirements of pioneering investigations and explored alternative methods of providing these resources to frontier research communities. This talk will briefly highlight some of the science successes from the Blue Waters system and then present a summary of the results and conclusions of the two BrainstormHPCD workshops.

View presentation video

Satoshi Matsuoka

Podcast interview

Satoshi Matsuoka received his PhD from the University of Tokyo in 1993. He became a full professor at the Global Scientific Information and Computing Center (GSIC) of the Tokyo Institute of Technology (Tokyo Tech / Titech) in April 2001, leading the Research Infrastructure Division Solving Environment Group of the Titech campus. He pioneered grid computing research in Japan in the mid-90s along with his collaborators, and currently serves as sub-leader of the Japanese National Research Grid Initiative (NAREGI) project, which aims to create middleware for the next-generation CyberScience Infrastructure. He was also the technical leader in the construction of the TSUBAME supercomputer, which became the fastest supercomputer in the Asia-Pacific region in June 2006 at 85 teraflops peak (111 teraflops as of March 2009) and 38.18 teraflops Linpack (7th on the June 2006 list), and which also serves as the core grid resource in the Titech Campus Grid.

He has served as (co-)program chair and general chair of several international conferences, including ACM OOPSLA 2002, IEEE CCGrid 2003, HPCAsia 2004, Grid 2006, and CCGrid 2006/2007/2008, and has held countless program committee positions, in particular numerous ACM/IEEE Supercomputing Conference (SC) technical papers committee duties, including serving as network area chair for SC2004 and SC2008, technical papers chair for SC2009, and Communities Program chair for SC2011. He served as a Steering Group member and an Area Director of the Global Grid Forum during 1999-2005, and recently became a steering group member of the Supercomputing Conference.

He has won several awards, including the Sakai Award for research excellence from the Information Processing Society of Japan in 1999, and in 2006 received the JSPS Prize from the Japan Society for the Promotion of Science, presented by His Royal Highness Prince Akishinomiya.

Toward Inevitable Convergence of HPC and Big Data

Abstract: Rapid growth in the use cases and demands for extreme computing and huge data processing is leading to convergence of the two infrastructures. Tokyo Tech's TSUBAME3.0, the 2016 successor to the highly successful TSUBAME2/2.5, aims to deploy a series of innovative technologies, including ultra-efficient liquid cooling and power control, petabytes of non-volatile memory, and a low-cost Petabit-class interconnect. In particular, our Extreme Big Data (EBD) project is looking at co-design development of a convergent system stack driven by future data and computing workloads. The resulting TSUBAME3 and the machines beyond it will be an integral part of our national supercomputing/Big Data infrastructure, HPCI (the High Performance Computing Infrastructure of Japan), which is similar in scale to XSEDE, with about 40 petaflops of aggregate computing capability circa 2015, and which is expected to embody half an exaflop by 2022. The trend towards convergence is not only strategic but inevitable: as Moore's law ends, sustained growth in data capabilities, not compute, will drive overall capacity, accelerating research and, ultimately, industry.

View presentation video

Ed Seidel

Ed Seidel is the director of the National Center for Supercomputing Applications, a distinguished researcher in high-performance computing and in relativity and astrophysics, and a Founder Professor in the University of Illinois Department of Physics and a professor in the Department of Astronomy. His previous leadership roles include serving as the senior vice president for research and innovation at the Skolkovo Institute of Science and Technology in Moscow, directing the Office of Cyberinfrastructure and serving as assistant director for Mathematical and Physical Sciences at the U.S. National Science Foundation, and leading the Center for Computation & Technology at Louisiana State University. His research has been recognized by a number of awards, including the 2006 IEEE Sidney Fernbach Award, the 2001 Gordon Bell Prize, and the 1998 Heinz Billing Award.

Supercomputing in an Era of Big Data and Big Collaboration

Abstract: Supercomputing has reached a level of maturity and capability where many areas of science and engineering are not only advancing rapidly because of computing power but also cannot progress without it. Detailed simulations of complex astrophysical phenomena, HIV, earthquake events, and industrial engineering processes are being carried out, leading to major scientific breakthroughs or new products that cannot be achieved any other way. These simulations typically require larger and larger teams, more and more complex software environments to support them, and real-world data. But as experiments and observation systems now generate unprecedented amounts of data, which must also be analyzed via large-scale computation and compared with simulation, a new type of highly integrated environment is needed, in which computing, experiment, and data services are developed together. I will illustrate with examples from NCSA's Blue Waters supercomputer and from major data-intensive projects, including the Large Synoptic Survey Telescope, and give thoughts on what will be needed going forward.

View presentation video

Steven L. Scott

Podcast interview with Steven Scott

Steve Scott serves as Senior Vice President and Chief Technology Officer, responsible for guiding the long-term technical direction of Cray's supercomputing, storage and analytics products. Dr. Scott rejoined Cray in 2014 after serving as principal engineer in the platforms group at Google and before that as the senior vice president and chief technology officer for NVIDIA's Tesla business unit. Dr. Scott first joined Cray in 1992, after earning his Ph.D. in computer architecture and BSEE in computer engineering from the University of Wisconsin-Madison. He was the chief architect of several Cray supercomputers and interconnects. Dr. Scott is a noted expert in high performance computer architecture and interconnection networks. He holds 35 U.S. patents in the areas of interconnection networks, cache coherence, synchronization mechanisms and scalable parallel architectures. He received the 2005 ACM Maurice Wilkes Award and the 2005 IEEE Seymour Cray Computer Engineering Award, and is a Fellow of IEEE and ACM. Dr. Scott was named to HPCwire's "People to Watch in High Performance Computing" in 2012 and 2005.

Programming for the Next Decade (Perspectives from a Systems Architect)

Abstract: Our computing systems continue to evolve, presenting significant challenges to the programming teams managing large, long-lived projects. Issues include rapidly increasing on-node parallelism, varying forms of heterogeneity, deepening memory hierarchies, growing concerns around resiliency and silent data corruption, and worsening storage bottlenecks. "Big Data" and cloud computing also show signs of impacting the traditional HPC market. In this talk, I'll illustrate the role that technology plays in system design, explore the emerging architectural landscape, and discuss some implications and challenges for programmers targeting future architectures.
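As a small illustration of two of the issues named in the abstract (a minimal sketch of my own, not drawn from the talk), the C++ example below spreads a loop across the cores of a node with standard threads and records a cheap per-chunk checksum that a later verification pass could compare against to flag possible silent data corruption. The chunking scheme, the problem size, and the checksum itself are arbitrary choices made for the example.

// Illustrative only: on-node parallelism via std::thread, plus a simple
// per-chunk checksum that a later pass can recompute and compare to flag
// possible silent data corruption. All sizes here are example values.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    const std::size_t n = 1 << 24;                       // example problem size
    const unsigned chunks =
        std::max(1u, std::thread::hardware_concurrency()); // one chunk per core

    std::vector<double> data(n, 1.0);
    std::vector<std::uint64_t> checksum(chunks, 0);

    auto work = [&](unsigned c) {
        const std::size_t lo = n * c / chunks;
        const std::size_t hi = n * (c + 1) / chunks;
        std::uint64_t sum = 0;
        for (std::size_t i = lo; i < hi; ++i) {
            data[i] = data[i] * 2.0 + 1.0;               // stand-in for real computation
            sum += static_cast<std::uint64_t>(data[i]);  // cheap integrity tag
        }
        checksum[c] = sum;
    };

    std::vector<std::thread> pool;
    for (unsigned c = 0; c < chunks; ++c) pool.emplace_back(work, c);
    for (auto& t : pool) t.join();

    // A verification pass (or a redundant recomputation) could recompute the
    // per-chunk sums and compare them against these stored values.
    for (unsigned c = 0; c < chunks; ++c)
        std::printf("chunk %u checksum %llu\n", c,
                    static_cast<unsigned long long>(checksum[c]));
    return 0;
}

In practice the checksum would be replaced by whatever end-to-end integrity mechanism the application or system provides; the point is only that resiliency concerns increasingly reach up into application code alongside the parallelism itself.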

View presentation PDF