# Intel® MKL Support for Largest/Smallest Eigenvalues and the Sparse SVD Problem

## Introduction

Intel MKL Extended Eigensolver functionality [1], based on the accelerated subspace iteration FEAST algorithm, is a high-performance solution for obtaining all the eigenvalues, and optionally all the associated eigenvectors, within a user-specified interval.
To extend this functionality, we propose new routines, available in the Intel MKL 2019 Beta release, for finding the k largest or smallest eigenvalues or singular values of a sparse matrix.

With the help of the new routines, users of the Extended Eigensolver can obtain a portion of the extremal eigenvalues of a standard or generalized eigenproblem, or find a truncated SVD decomposition of a large sparse matrix. To achieve the best convergence on any workload, we have implemented two alternative approaches: a subspace projection algorithm based on FEAST [2] and a classical Krylov subspace iteration method [3]. Both approaches belong to the class of projection methods, which construct bases for particular subspaces and then obtain the corresponding Ritz values and vectors. In the Krylov subspace technique, the dimension of the subspace grows as the iteration proceeds, and restart techniques are used to obtain a good approximation while keeping the projected dense eigenproblem small. In contrast, the FEAST-based method projects onto subspaces of a fixed dimension that depends only on the number of eigenvalues sought. Each method is beneficial on certain workloads, so for simplicity we implemented heuristics that decide which algorithm to use. However, users can also specify a preferred approach.

## FEAST-based approach

The underlying subspace projection method can be divided into two steps. In the first step, we produce an interval [a, b] that contains the largest/smallest eigenvalues in question, using classical and recent methods that estimate eigenvalue counts [4]. In the second step, we approximate the spectral projector onto the invariant eigenspace corresponding to [a, b] using the FEAST algorithm.
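
The counting idea behind the first step can be sketched numerically. The number of eigenvalues in [a, b] equals the trace of the spectral projector P onto the corresponding invariant subspace, and that trace can be estimated stochastically (a Hutchinson-style estimator) from a modest number of random probe vectors. The sketch below is purely illustrative and not the MKL implementation: for clarity it builds P from an explicit eigendecomposition, whereas a practical estimator applies P*v through a rational or polynomial filter without ever computing eigenvectors.

```python
import numpy as np

# Illustrative check: #eigenvalues in [a, b] = trace of the spectral
# projector P, estimated as the average of v^T P v over random sign
# vectors v (Hutchinson estimator).

rng = np.random.default_rng(5)
n = 200
B = rng.standard_normal((n, n))
A = (B + B.T) / 2.0                           # symmetric test matrix

evals, evecs = np.linalg.eigh(A)
a, b = 0.0, 5.0
inside = (evals >= a) & (evals <= b)
P = evecs[:, inside] @ evecs[:, inside].T     # projector onto eigenspace of [a, b]

m = 2000                                      # number of probe vectors
V = rng.choice([-1.0, 1.0], size=(n, m))      # Rademacher probes
estimate = np.mean(np.einsum("ij,ij->j", V, P @ V))

print(int(inside.sum()), round(estimate, 2))  # exact count vs. estimate
```

The estimate concentrates around the exact count as the number of probes grows, which is enough to bracket the wanted part of the spectrum before running the projection iteration.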

Filtering techniques are implemented using a rational expansion for the eigenvalue problem and Chebyshev polynomials for the SVD. In the first method, the projector is constructed by integrating the resolvent of the eigenproblem along a contour in the complex plane enclosing the interval [a, b]. In this case the projector is approximated by a rational function of the matrix, and a sparse linear system must be solved at each iteration of the algorithm. In the second method, the projector is expanded as a polynomial function of the matrix, so the hotspot of the algorithm is the multiplication of a sparse matrix by a dense matrix.
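
The polynomial variant can be illustrated with a minimal NumPy/SciPy sketch (not the MKL implementation). The unwanted part of the spectrum [lo, cut] is mapped onto [-1, 1], where Chebyshev polynomials stay bounded; above `cut` they grow rapidly, so each filter application amplifies the wanted eigencomponents. Note that the only large operation is the sparse-matrix times dense-matrix product `A @ V`.

```python
import numpy as np
import scipy.sparse as sp

n, k, degree = 100, 5, 100

# 1-D Laplacian: sparse, symmetric, with an analytically known spectrum.
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
evals = 2.0 - 2.0 * np.cos(np.arange(1, n + 1) * np.pi / (n + 1))

lo, cut = evals[0], evals[n - k - 1]   # assumed known; estimated in practice

def cheb_filter(A, V, lo, cut, degree):
    """Apply the Chebyshev polynomial T_degree of the mapped matrix to V."""
    c, h = (cut + lo) / 2.0, (cut - lo) / 2.0      # map [lo, cut] -> [-1, 1]
    T_prev, T_curr = V, (A @ V - c * V) / h
    for _ in range(2, degree + 1):                 # three-term recurrence
        T_prev, T_curr = T_curr, 2.0 * (A @ T_curr - c * T_curr) / h - T_prev
    return T_curr

rng = np.random.default_rng(0)
V = np.linalg.qr(rng.standard_normal((n, k)))[0]
for _ in range(4):                                 # filtered subspace iteration
    V = np.linalg.qr(cheb_filter(A, V, lo, cut, degree))[0]

# Rayleigh-Ritz on the filtered subspace yields the k largest eigenvalues.
ritz = np.linalg.eigvalsh(V.T @ (A @ V))
print(np.max(np.abs(ritz - evals[-k:])))           # tiny (roundoff level)
```

Because the filter only touches the matrix through matrix products, no factorization is needed, in contrast to the rational (contour-integration) variant, which must solve shifted sparse linear systems.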

The subspace projection method can be beneficial compared to classical Krylov methods when a large portion of the spectrum is required, because its convergence rate does not depend on the number of eigenvalues in the subspace.

## Krylov subspace iteration approach

To ensure fast convergence when only a few eigenpairs (or singular values) are required, we implemented the Krylov-Schur algorithm, an effective and robust restarting scheme. The Krylov-Schur method [3] can be seen as an improvement of the implicitly restarted Arnoldi algorithm; it employs reordered Schur decompositions to perform restarts and deflations in a numerically reliable manner.
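
The basic building block underneath such methods can be sketched as an unrestarted Lanczos iteration (the symmetric case of Arnoldi): the Krylov subspace grows by one vector per step, and the Ritz values of the small tridiagonal projection approximate the extremal eigenvalues first. This is an illustrative sketch only; Krylov-Schur adds the numerically reliable restarting and deflation, which is omitted here, and the test matrix is a hypothetical diagonal example with a clear spectral gap.

```python
import numpy as np
import scipy.sparse as sp

def lanczos(A, m, rng):
    """m steps of Lanczos with full reorthogonalization; returns Ritz values."""
    n = A.shape[0]
    Q = np.zeros((n, m + 1))
    alpha, beta = np.zeros(m), np.zeros(m)
    q = rng.standard_normal(n)
    Q[:, 0] = q / np.linalg.norm(q)
    for j in range(m):
        w = A @ Q[:, j]                       # one sparse matvec per step
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        w -= Q[:, : j + 1] @ (Q[:, : j + 1].T @ w)   # reorthogonalize
        beta[j] = np.linalg.norm(w)
        Q[:, j + 1] = w / beta[j]
    # Ritz values: eigenvalues of the m x m tridiagonal projection.
    T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
    return np.linalg.eigvalsh(T)

rng = np.random.default_rng(1)
n = 300
# Diagonal test matrix with three well-separated largest eigenvalues.
d = np.concatenate([np.linspace(0.0, 1.0, n - 3), [1.5, 2.0, 2.5]])
A = sp.diags(d, format="csr")

ritz = lanczos(A, 40, rng)
print(abs(ritz[-1] - 2.5))   # largest Ritz value converges first
```

After only 40 steps the largest Ritz value already matches the largest eigenvalue to roundoff, which is why Krylov methods excel when just a few extremal eigenpairs are wanted.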

For the best possible performance on Intel architectures, all linear algebra operations are performed using the highly optimized Intel MKL Inspector-executor Sparse BLAS, Intel MKL PARDISO, and Intel MKL LAPACK functionality.

## Functionality overview

The new functionality complements the existing Extended Eigensolver functionality and enables users to solve the following problems:

Eigenvalue problems:

Find all or a subset of the eigenvalues Lambda and the corresponding eigenvectors X such that:

A*X = Lambda*X,

A = A^T

(standard eigenvalue problem)

or

A*X = Lambda*B*X,

A = A^T,  B = B^T > 0

(generalized eigenvalue problem)

Singular value problem:

Find all or a subset of the singular values SIGMA and the corresponding singular vectors X such that:

A*A^T*X = SIGMA^2*X or A^T*A*X = SIGMA^2*X
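
The singular value formulation can be checked numerically: the singular values of a sparse matrix A are the square roots of the eigenvalues of A^T A (equivalently A A^T), and the corresponding eigenvectors are the right (left) singular vectors. The sketch below uses SciPy's sparse solvers as illustrative stand-ins for the MKL routines.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(2)
A = sp.random(80, 60, density=0.1, random_state=3, format="csr")

k = 4
# k largest singular values via a sparse SVD solver ...
sigma = spla.svds(A, k=k, return_singular_vectors=False)
# ... and via the k largest eigenvalues of the Gram matrix A^T A.
lam = spla.eigsh(A.T @ A, k=k, which="LA", return_eigenvectors=False)

print(np.allclose(np.sort(sigma), np.sort(np.sqrt(lam))))  # True
```

This equivalence is what lets an extremal-eigenvalue solver double as a truncated SVD solver without ever forming the dense Gram matrix.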

Notes: Currently, only the following features are supported:

- problems with a real spectrum
- input matrices in CSR and BSR formats
- single- and double-precision real data
- C and Fortran 90 interfaces
- Intel(R) TBB and OpenMP parallelism

For examples of usage see:

path-to-mkl/examples/solvers_eec/source/dexample_extremal_ev_c.c – C example for finding the largest eigenvalues of a standard eigenvalue problem

path-to-mkl/examples/solvers_eec/source/dexample_extremal_gv_c.c – C example for finding the largest eigenvalues of a generalized eigenvalue problem

path-to-mkl/examples/solvers_eec/source/dexample_extremal_svd_c.c – C example for finding the largest singular values of a sparse matrix

path-to-mkl/examples/solvers_eef/source/dexample_largest_eig_f.f90 – Fortran 90 example covering all of the above functionality

## Performance

The chart below compares SLEPc run times with those of the new Extended Eigensolver mkl_sparse_d_ev routine on matrices from the SuiteSparse Matrix Collection [5]. Each column represents the ratio (SLEPc time) / (MKL Extended Eigensolver time). Three runs were performed for each matrix: finding the 1, 200, and 500 largest eigenvalues. Since SLEPc supports only MPI parallelism, SLEPc runs on 40 MPI processes were compared to OpenMP-parallel runs of the Extended Eigensolver with 40 threads.

The second chart below shows the benefit of switching from dense functionality to the new sparse Extended Eigensolver solution when finding the largest eigenvalues. The performance improvement depends on the matrix density and the number of eigenvalues to find, but it can be significant. For the sparse solution, the mkl_sparse_d_ev routine was used; for the dense solution, a combination of the LAPACK routines dsytrd, dorgtr, and dsteqr was used.
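
The dense-vs-sparse contrast can be illustrated with SciPy stand-ins (this is not a benchmark of the MKL routines): the dense LAPACK path, here `scipy.linalg.eigh` standing in for the dsytrd/dorgtr/dsteqr combination, computes the entire spectrum of a full matrix, while a sparse iterative solver such as `scipy.sparse.linalg.eigsh` touches only the nonzeros and computes just the k eigenvalues asked for.

```python
import numpy as np
import scipy.linalg as la
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, k = 500, 5
M = sp.random(n, n, density=0.01, random_state=4)
A = ((M + M.T) * 0.5).tocsr()                 # sparse symmetric test matrix

dense_all = la.eigh(A.toarray(), eigvals_only=True)   # full spectrum, O(n^3)
sparse_top = spla.eigsh(A, k=k, which="LA",           # only the k largest
                        return_eigenvectors=False)

print(np.allclose(np.sort(sparse_top), dense_all[-k:]))  # True
```

Both paths agree on the k largest eigenvalues; the sparse path simply avoids the dense tridiagonalization work, which is where the reported speedups come from.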

## References

[1] Intel Math Kernel Library Extended Eigensolver

[2] P. T. P. Tang, E. Polizzi, FEAST as a Subspace Iteration EigenSolver Accelerated by Approximate Spectral Projection, SIAM Journal on Matrix Analysis and Applications (SIMAX), 35, 354-390 (2014)

[3] G. W. Stewart, A Krylov–Schur Algorithm for Large Eigenproblems, SIAM Journal on Matrix Analysis and Applications, 23(3), 601–614 (2001)

[4] E. Di Napoli, E. Polizzi, Y. Saad, Efficient Estimation of Eigenvalue Counts in an Interval, Numerical Linear Algebra with Applications, 23(4), 674–692 (2016)

[5] Suite Sparse Matrix Collection https://sparse.tamu.edu/
