Evaluation of Massively Parallel Sparse Linear Solvers in Science and Engineering
Seid Koric, University of Illinois at Urbana-Champaign
Usage Details
Seid Koric, Erman Guleryuz, Seong-Mook Cho, Weihao Ge, Shirui Luo, Francois-Henry Rouet, Amir Kazemi, Carlo Janna, Giovanni Isotton, Fuyao Wang, Tao Zang, Ahmed Sameer Khan Mohammed, Qinan Zhou

This project addresses the research goal of increasing the parallel scaling of multifrontal (direct) sparse matrix factorization solvers. Such codes are the default solvers in many scientific and engineering applications, so improving their parallel scaling is a critical requirement for expanding the use of High-Performance Computing (HPC). The design of such codes is based on heuristics, since important subproblems can be NP-complete. Through relationships built over many years of collaboration, NCSA has access to three commercial multifrontal codes: WSMP from IBM-Watson, mf2 from LSTC, and MUMPS, now the product of MUMPS Technologies, an open-source start-up in France. Our team is uniquely positioned to analyze the scaling of these multifrontal solvers and to evaluate the efficacy of the different design choices made by their authors.
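The workflow shared by these solvers can be sketched in a few lines. The example below uses SciPy's `splu`, which wraps SuperLU (a supernodal solver, not one of the multifrontal codes studied here); it is an illustrative sketch of the common pattern only: assemble a sparse matrix, choose a fill-reducing ordering, factor once, then solve.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Assemble a small sparse matrix (1-D Poisson stencil) in CSC format,
# which the direct solver requires.
n = 100
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Factor with a fill-reducing column ordering. The ordering choice
# (COLAMD here) strongly affects fill-in, and hence memory and run time;
# comparing solvers on the same matrices requires fixing this choice.
lu = spla.splu(A, permc_spec="COLAMD")

# A single factorization can be reused for many right-hand sides.
x = lu.solve(b)
print(np.allclose(A @ x, b))
```

The expensive step is the factorization; the subsequent triangular solves are comparatively cheap, which is why direct solvers pay off in implicit simulations that solve with many right-hand sides or load steps.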
This work builds on earlier studies of WSMP by NCSA. Both WSMP and mf2 have been shown to scale to 65,536 cores (8,192 MPI ranks, 8 threads each). Recent work to reduce run-time decision making in MUMPS should make it competitive as well. However, to date there has been no comparison of all three codes solving the same sparse matrices with the same orderings, and therefore no way to assess the relative merits of the design choices that led to the current state of these important solvers.
Given the popularity of implicit finite element methods, and of other numerical methods in science and engineering that depend on multifrontal solvers, the results of this effort will further open the door to high-fidelity multiphysics modeling of complex structures and assemblies, at real scale, in both industry and academia.