Benchmarking and tuning parallel matrix and tensor factorization algorithms

Edgar Solomonik, University of Illinois at Urbana-Champaign

Usage Details

Edgar Solomonik, Edward Hutter

We propose to use Blue Waters to evaluate the performance of parallel matrix and tensor algorithms and the supporting library infrastructure. Our existing Blue Waters allocation will continue for another half year, so we are requesting an additional half year of resources. Most of the existing allocation has been used to evaluate a new communication-avoiding algorithm for QR factorization. In the next allocation cycle, we plan to use Blue Waters to evaluate the efficacy of this algorithm on the GPU nodes of Blue Waters and to evaluate a related algorithm for computing the eigenvalue decomposition of symmetric matrices. Additionally, we hope to use Blue Waters to study the performance of algorithms for tensor decomposition and completion using the Cyclops library developed by our group. Cyclops provides high-level support for distributed-memory tensor algebra operations, with additional support for threading and GPU acceleration. This project will enable debugging and performance improvements for the library, as well as evaluation of numerical algorithms for tensor completion and optimization developed in student-led efforts using Cyclops. We are currently studying two major families of algorithms for these problems:

  • Pairwise perturbation for alternating least squares algorithms: We have developed a perturbative approximation to the standard alternating least squares (ALS) approach for CP and Tucker tensor decompositions, and we are pursuing extensions that handle more general tensor networks. These algorithms promise an asymptotic reduction in the per-iteration cost of ALS. Initial versions of CP and Tucker decomposition with pairwise perturbation have been implemented. We plan to use Blue Waters to evaluate their efficiency on sparse tensor problems and on tensors of varying structure and conditioning.
  • Gradient-based optimization methods for tensor completion: Tensor completion (constructing a tensor decomposition that models a tensor from an observed subset of its elements) is an increasingly popular approach in machine learning and data mining. We have implemented a suite of approaches for this problem, including alternating least squares with conjugate gradient optimization and batch subgradient optimization using subsampling of the tensor. We plan to use Blue Waters to evaluate these approaches and other variants on real-world tensor datasets and on motif tensors induced from graphs.
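Communication-avoiding QR methods such as the one evaluated in the current allocation typically build on a tall-skinny QR (TSQR) reduction: each processor factors its local row block, and the stacked triangular factors are factored again up a reduction tree. The following is a minimal single-node numpy sketch of one reduction level, not the proposal's actual algorithm; the function name and block count are illustrative.

```python
import numpy as np

def tsqr(A, blocks=4):
    """One-level TSQR sketch for a tall-skinny matrix A (rows >> columns).
    Each row block is factored locally; the stacked R factors are then
    factored once more, which is the communication-avoiding reduction step."""
    parts = np.array_split(A, blocks, axis=0)
    local = [np.linalg.qr(p) for p in parts]              # (Q_i, R_i) per block
    Q2, R = np.linalg.qr(np.vstack([R_i for _, R_i in local]))
    # Apply the block-diagonal local Q factors to the second-level Q.
    n = A.shape[1]
    Q = np.vstack([Q_i @ Q2[b * n:(b + 1) * n] for b, (Q_i, _) in enumerate(local)])
    return Q, R
```

The sketch assumes each row block has at least as many rows as A has columns, so every local R_i is square; a distributed implementation would apply the reduction recursively over a tree rather than in one step.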
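The baseline that pairwise perturbation approximates is standard CP-ALS, in which each factor matrix is updated in turn by an MTTKRP (matricized tensor times Khatri-Rao product) followed by a small linear solve. A minimal dense third-order sketch in numpy, assuming the usual row-major matricization conventions (this is not the group's Cyclops implementation, and the function names are illustrative):

```python
import numpy as np

def khatri_rao(X, Y):
    """Column-wise Kronecker product; row (i, j) holds X[i] * Y[j]."""
    return np.einsum('ir,jr->ijr', X, Y).reshape(-1, X.shape[1])

def cp_als(T, rank, iters=50, seed=0):
    """Rank-`rank` CP decomposition of a dense third-order tensor by ALS."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A, B, C = (rng.standard_normal((n, rank)) for n in T.shape)
    for _ in range(iters):
        # Each update: MTTKRP against the matricized tensor, then solve the
        # rank x rank normal equations formed from Hadamard products of Grams.
        A = T.reshape(I, -1) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.moveaxis(T, 1, 0).reshape(J, -1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.moveaxis(T, 2, 0).reshape(K, -1) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C
```

The MTTKRP dominates each iteration; pairwise perturbation's asymptotic savings come from approximating these products across consecutive iterations rather than recomputing them exactly.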
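For the completion problem, the squared-error objective over the observed entries can be minimized by stochastic gradient steps on a CP model. The sketch below uses a simple per-entry update for a third-order tensor; the batch subgradient method described above would instead subsample larger batches, and the function name, initialization scale, and step size are illustrative assumptions.

```python
import numpy as np

def sgd_cp_completion(coords, vals, shape, rank, lr=0.01, epochs=100, seed=0):
    """Fit a CP model to the observed entries (coords, vals) of a third-order
    tensor by per-entry stochastic gradient steps on the squared error."""
    rng = np.random.default_rng(seed)
    A, B, C = (0.3 * rng.standard_normal((n, rank)) for n in shape)
    for _ in range(epochs):
        for t in rng.permutation(len(vals)):   # visit observed entries in random order
            i, j, k = coords[0][t], coords[1][t], coords[2][t]
            ai, bj, ck = A[i].copy(), B[j].copy(), C[k].copy()
            r = ai @ (bj * ck) - vals[t]       # residual at entry (i, j, k)
            A[i] -= lr * r * (bj * ck)         # gradient of 0.5 * r**2 per factor row
            B[j] -= lr * r * (ai * ck)
            C[k] -= lr * r * (ai * bj)
    return A, B, C
```

Only the rows of the factor matrices touched by an observed entry are updated, which is what makes subsampled gradient methods attractive for the sparse observation patterns of real-world and graph-induced motif tensors.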