Luke Olson

University of Illinois at Urbana-Champaign

Computer and Computation Research

2020

Einar Horn, Dakota Fulp, Jon Calhoun, and Luke Olson (2020): FaultSight: A Fault Analysis Tool for HPC Researchers, Institute of Electrical & Electronics Engineers, 2019 IEEE/ACM 9th Workshop on Fault Tolerance for HPC at eXtreme Scale (FTXS), pp21-30, Denver, Colorado, U.S.A. (held in conjunction with SC '19)

2019

Amanda Bienz, William D. Gropp, and Luke N. Olson (2019): Node Aware Sparse Matrix-Vector Multiplication, Journal of Parallel and Distributed Computing, Academic Press, Vol 130, pp166-178

2018

Amanda Bienz, William D. Gropp, and Luke N. Olson (2018): Improving Performance Models for Irregular Point-to-Point Communication, ACM Press, Proceedings of the 25th European MPI Users' Group Meeting (EuroMPI '18), pp7:1-7:8, Barcelona, Spain
Andrew Reisner, Luke N. Olson, and J. David Moulton (2018): Scaling Structured Multigrid to 500K+ Cores Through Coarse-Grid Redistribution, SIAM Journal on Scientific Computing, Society for Industrial & Applied Mathematics, Vol 40, Num 4, ppC581-C604
Jon Calhoun, Franck Cappello, Luke N. Olson, Marc Snir, and William D. Gropp (2018): Exploring the Feasibility of Lossy Compression for PDE Simulations, International Journal of High Performance Computing Applications, SAGE Publications

2017

Jon Calhoun, Marc Snir, Luke N. Olson, and William D. Gropp (2017): Towards a More Complete Understanding of SDC Propagation, ACM Press, Proceedings of the 26th International Symposium on High-Performance Parallel and Distributed Computing (HPDC '17), pp131, Washington, D.C., U.S.A.

2016

William Gropp, Luke N. Olson, and Philipp Samfass (2016): Modeling MPI Communication Performance on SMP Nodes: Is it Time to Retire the Ping Pong Test?, ACM Press, Proceedings of the 23rd European MPI Users' Group Meeting (EuroMPI 2016), pp41-50, Edinburgh, Scotland, U.K.
Amanda Bienz, Robert D. Falgout, William Gropp, Luke N. Olson, and Jacob B. Schroder (2016): Reducing Parallel Communication in Algebraic Multigrid Through Sparsification, SIAM Journal on Scientific Computing, Society for Industrial & Applied Mathematics, Vol 38, Num 5, ppS332-S357
D. Guo, W. Gropp, and L. N. Olson (2016): A Hybrid Format for Better Performance of Sparse Matrix-Vector Multiplication on a GPU, International Journal of High Performance Computing Applications, SAGE Publications, Vol 30, Num 1, pp103-120

2015

Jon Calhoun, Marc Snir, Luke Olson, and Maria Garzaran (2015): Understanding the Propagation of Error Due to a Silent Data Corruption in a Sparse Matrix Vector Multiply, Institute of Electrical & Electronics Engineers, 2015 IEEE International Conference on Cluster Computing, pp541-542, Chicago, Illinois, U.S.A.
J. Calhoun, L. Olson, M. Snir, and W. D. Gropp (2015): Towards a More Fault Resilient Multigrid Solver, Society for Computer Simulation International, HPC '15: Proceedings of the Symposium on High Performance Computing, pp1-8, Alexandria, Virginia, U.S.A.

2014

Jon Calhoun, Luke Olson, and Marc Snir (2014): FlipIt: An LLVM Based Fault Injector for HPC, Springer Science + Business Media, Lecture Notes in Computer Science: Euro-Par 2014, Parallel Processing Workshops, pp547-558, Porto, Portugal
Kris Beckwith, Seth Veitzer, Stephen F. McCormick, John W. Ruge, Luke N. Olson, and Jon C. Calhoun (2014): Fully-Implicit Ultrascale Physics Solvers and Application to Ion Source Modelling, Institute of Electrical & Electronics Engineers (IEEE), 2014 IEEE 41st International Conference on Plasma Sciences (ICOPS) held with 2014 IEEE International Conference on High-Power Particle Beams (BEAMS), pp1-8, Washington, D.C., U.S.A.

Blue Waters Reports

2019

Luke Olson, Amanda Bienz, and Andrew Reisner (2019): Scalable Line and Plane Solvers, 2019 Blue Waters Annual Report, pp226-227
Luke Olson, Amanda Bienz, and William Gropp (2019): Improved Scalability through Node-Aware Communicators, 2019 Blue Waters Annual Report, pp224-225

2018

Luke Olson, Amanda Bienz, Andrew Reisner, and Lukas Spies (2018): Scaling Elliptic Solvers via Data Redistribution, Blue Waters annual-book summary slide
William Gropp, Luke Olson, Amanda Bienz, Paul Eller, and Ed Karrels (2018): Algorithms for Extreme Scale Systems, 2018 Blue Waters Annual Report, pp184-185
Luke Olson, Amanda Bienz, Andrew Reisner, and Lukas Spies (2018): Scaling Elliptic Solvers via Data Redistribution, 2018 Blue Waters Annual Report, pp196-197

2017

Luke Olson (2017): Localizing Communication in Sparse Matrix Operations, Blue Waters annual-book summary slide
Luke Olson (2017): Localizing Communication in Sparse Matrix Operations, 2017 Blue Waters Annual Report, pp180-181

2016

Luke Olson (2016): The Next Generation of Large-Scale Sparse-Matrix Computations, 2016 Blue Waters Annual Report, pp166-168

HPCwire taps work of two Blue Waters researchers


Dec 18, 2019

The trade journal HPCwire tapped Jon Calhoun and Luke Olson in its December 2019 report on notable new research in the high-performance computing community and its related domains. In their paper, "FaultSight: A Fault Analysis Tool for HPC Researchers", the authors present a fault injection analysis tool that they claim can efficiently assist in analyzing HPC application reliability and the effectiveness of resilience schemes. Calhoun, a former Blue Waters graduate fellow who is now an assistant professor at Clemson, and Olson, a professor at the University of Illinois at Urbana-Champaign, wrote the paper with Einar Horn and Dakota Fulp. "FaultSight" was one of six papers presented at the 2019 Workshop on Fault Tolerance for HPC at eXtreme Scale, which took place within the Supercomputing 2019 annual conference, better known as SC '19.



Blue Waters Illinois allocations awarded to 26 research teams


Mar 7, 2017

Twenty-six research teams at the University of Illinois at Urbana-Champaign have been allocated computation time on the National Center for Supercomputing Applications' (NCSA) sustained-petascale Blue Waters supercomputer after applying in Fall 2016. These allocations range from 25,000 to 600,000 node-hours of compute time over a span of either six months or one year. The research pursuits of these teams are remarkably diverse, ranging from physics to political science.

