Parallelism in Extended Eigensolver Routines

How you achieve parallelism in Extended Eigensolver routines depends on which interface you use. Parallelism (via shared memory programming) is not explicitly implemented in Extended Eigensolver routines within one node: the inner linear systems are currently solved one after another.

  • Using the Extended Eigensolver RCI interfaces, you can achieve parallelism by providing a threaded inner system solver and a threaded matrix-matrix multiplication routine. When using the RCI interfaces, you are responsible for activating the threaded capabilities of your BLAS and LAPACK libraries, most likely by setting the shell variable OMP_NUM_THREADS (see the first sketch after this list).

  • Using the predefined Extended Eigensolver interfaces, parallelism is obtained implicitly through the shared memory versions of BLAS, LAPACK, and Intel MKL PARDISO. The shell variable MKL_NUM_THREADS can be used to set the number of threads (cores) for BLAS, LAPACK, and Intel MKL PARDISO automatically (see the second sketch after this list).
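
The following is a minimal sketch of the RCI case: the thread count is taken from OMP_NUM_THREADS (or a default) and applied to OpenMP and to the BLAS/LAPACK layer, and the user-supplied matrix-matrix multiplication is simply a threaded dgemm call. The matrix sizes, the default of 4 threads, and the use of cblas_dgemm as the multiplication kernel are illustrative assumptions, not part of the Extended Eigensolver API itself.

    /* Sketch: activating threaded BLAS for the user-supplied matrix-matrix
     * multiplication step of an Extended Eigensolver RCI loop.
     * Sizes and kernel choice are illustrative assumptions. */
    #include <stdlib.h>
    #include <omp.h>
    #include "mkl.h"

    int main(void)
    {
        /* Respect OMP_NUM_THREADS if the user set it; otherwise pick a default. */
        const char *env = getenv("OMP_NUM_THREADS");
        int nthreads = env ? atoi(env) : 4;
        omp_set_num_threads(nthreads);   /* threads for OpenMP regions         */
        mkl_set_num_threads(nthreads);   /* threads for BLAS/LAPACK inside MKL */

        /* Illustrative dense workspace: A (n x n) is the operator, Q (n x m0)
         * holds the subspace vectors, and W receives A*Q when the RCI loop
         * requests a matrix-matrix product. */
        const int n = 1000, m0 = 16;
        double *A = calloc((size_t)n * n,  sizeof(double));
        double *Q = calloc((size_t)n * m0, sizeof(double));
        double *W = calloc((size_t)n * m0, sizeof(double));
        for (int i = 0; i < n; ++i) { A[i * n + i] = 1.0; Q[i * m0 + i % m0] = 1.0; }

        /* The user-supplied multiplication routine can be a threaded dgemm;
         * MKL runs it on the thread count chosen above. */
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, m0, n, 1.0, A, n, Q, m0, 0.0, W, m0);

        free(A); free(Q); free(W);
        return 0;
    }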

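For the predefined interfaces, the sketch below shows where the thread count is set relative to the solver call, using the dfeast_scsrev driver for a real symmetric matrix in CSR format. The 4x4 diagonal test matrix, the search interval, and the choice of 4 threads are illustrative assumptions; argument order should be checked against the reference description of dfeast_scsrev.

    /* Sketch: a predefined Extended Eigensolver driver (dfeast_scsrev) picking
     * up its parallelism from threaded BLAS/LAPACK and Intel MKL PARDISO.
     * The test matrix and search interval are illustrative assumptions. */
    #include <stdio.h>
    #include "mkl.h"

    int main(void)
    {
        /* Equivalent to exporting MKL_NUM_THREADS before running the program. */
        mkl_set_num_threads(4);

        /* 4x4 diagonal matrix diag(1,2,3,4) in 3-array CSR, 1-based indices. */
        double  a[]  = { 1.0, 2.0, 3.0, 4.0 };
        MKL_INT ia[] = { 1, 2, 3, 4, 5 };
        MKL_INT ja[] = { 1, 2, 3, 4 };

        MKL_INT fpm[128];
        feastinit(fpm);                 /* default Extended Eigensolver parameters */

        const char uplo = 'F';          /* full matrix is stored                   */
        MKL_INT n = 4, m0 = 4;          /* problem size and subspace estimate      */
        double  emin = 0.5, emax = 2.5; /* search interval: expect eigenvalues 1,2 */
        double  epsout = 0.0, e[4], x[16], res[4];
        MKL_INT loop = 0, m = 0, info = 0;

        dfeast_scsrev(&uplo, &n, a, ia, ja, fpm, &epsout, &loop,
                      &emin, &emax, &m0, e, x, &m, res, &info);

        if (info == 0)
            printf("found %lld eigenvalue(s) in [%.1f, %.1f]\n",
                   (long long)m, emin, emax);
        else
            printf("dfeast_scsrev returned info = %lld\n", (long long)info);
        return 0;
    }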