Parallelism in Extended Eigensolver Routines

How you achieve parallelism in the Extended Eigensolver routines depends on which interface you use. Within a single node, parallelism (via shared-memory programming) is not explicitly implemented in the Extended Eigensolver routines themselves: the inner linear systems are currently solved one after another.
  • Using the Extended Eigensolver RCI interfaces, you can achieve parallelism by providing a threaded inner-system solver and matrix-matrix multiplication routine. With the RCI interfaces, you are responsible for activating the threaded capabilities of your BLAS and LAPACK libraries, most likely through the OMP_NUM_THREADS environment variable.
  • Using the predefined Extended Eigensolver interfaces, parallelism is obtained implicitly through the shared-memory (threaded) versions of BLAS, LAPACK, or Intel® oneAPI Math Kernel Library PARDISO. The MKL_NUM_THREADS environment variable can be used to set the number of OpenMP threads (cores) for BLAS, LAPACK, and Intel® oneAPI Math Kernel Library PARDISO automatically (a short sketch follows this list).
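
As an illustration only (not part of the original text), the minimal C sketch below shows one way to control these thread counts programmatically rather than through the shell; it assumes a program compiled with OpenMP and linked against the threaded oneMKL layer, and the thread count of 8 is an arbitrary example.

    #include <stdio.h>
    #include <omp.h>    /* omp_set_num_threads */
    #include <mkl.h>    /* mkl_set_num_threads, mkl_get_max_threads */

    int main(void)
    {
        /* Predefined interfaces: equivalent to exporting MKL_NUM_THREADS before
           launching the program; the threaded BLAS, LAPACK, and PARDISO calls made
           internally by the eigensolver may then use up to 8 OpenMP threads. */
        mkl_set_num_threads(8);

        /* RCI interfaces: threading of the user-supplied inner-system solver and
           matrix-matrix multiply is under your control, for example via
           OMP_NUM_THREADS or an explicit call such as: */
        omp_set_num_threads(8);

        printf("oneMKL may use up to %d threads\n", mkl_get_max_threads());
        /* ... set up the matrices and call the Extended Eigensolver here ... */
        return 0;
    }

A shell-only alternative is to export OMP_NUM_THREADS (for the RCI interfaces) or MKL_NUM_THREADS (for the predefined interfaces) before running the application.
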
Product and Performance Information
Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.
Notice revision #20201201
