Developer Guide

  • 2021.1
  • 12/04/2020
  • Public Content

What You Need to Know Before You Begin Using the Intel® Math Kernel Library

Target platform
Identify the architecture of your target machine:
  • IA-32 or compatible
  • Intel® 64 or compatible
Intel® oneAPI Math Kernel Library libraries are located in directories corresponding to your particular architecture (see Architecture Support), so you should provide the proper paths on your link lines (see Linking Examples).
To configure your development environment for use with Intel® oneAPI Math Kernel Library, set your environment variables using the script corresponding to your architecture (see Scripts to Set Environment Variables for details).
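As a minimal sketch, assuming a default oneAPI installation prefix on Linux (your installation path may differ), configuring the environment by hand might look like the following; the vendor script does all of this in one step:

```shell
# Hypothetical paths for a default oneAPI installation on Linux; the
# vendor script performs this setup (and more) in one step:
#   source /opt/intel/oneapi/setvars.sh intel64
export MKLROOT=/opt/intel/oneapi/mkl/latest
export LD_LIBRARY_PATH="${MKLROOT}/lib/intel64:${LD_LIBRARY_PATH:-}"
echo "MKLROOT=$MKLROOT"
```

Sourcing the script is preferred over manual exports because it also sets compiler, include, and library paths consistently for your architecture.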
Mathematical problem
Identify all
Intel® oneAPI Math Kernel Library
function domains that you require:
  • BLAS
  • Sparse BLAS
  • Sparse Solver routines
  • Parallel Direct Sparse Solvers for Clusters
  • Vector Mathematics functions (VM)
  • Vector Statistics functions (VS)
  • Fourier Transform functions (FFT)
  • Cluster FFT
  • Trigonometric Transform routines
  • Poisson, Laplace, and Helmholtz Solver routines
  • Optimization (Trust-Region) Solver routines
  • Data Fitting Functions
  • Extended Eigensolver Functions
The function domain you intend to use narrows the search in the Intel® oneAPI Math Kernel Library Developer Reference for the specific routines you need. Additionally, if you are using the Intel® oneAPI Math Kernel Library cluster software, your link line is function-domain specific (see Working with the Intel® Math Kernel Library Cluster Software). Coding tips may also depend on the function domain (see Other Tips and Techniques to Improve Performance).
Programming language
Intel® oneAPI Math Kernel Library
provides support for both Fortran and C/C++ programming. Identify the language interfaces that your function domains support (see Appendix A: Intel® Math Kernel Library Language Interfaces Support).
Intel® oneAPI Math Kernel Library
provides language-specific include files for each function domain to simplify program development (see Language Interfaces Support by Function Domain).
For a list of language-specific interface libraries and modules and an example of how to generate them, see also Using Language-Specific Interfaces with Intel® Math Kernel Library.
Range of integer data
If your system is based on the Intel 64 architecture, identify whether your application performs calculations with large data arrays (of more than 2³¹-1 elements).
To operate on large data arrays, you need to select the ILP64 interface, where integers are 64-bit; otherwise, use the default LP64 interface, where integers are 32-bit (see Using the ILP64 Interface vs. LP64 Interface).
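As a sketch of the compile-line difference (assuming a Linux installation with $MKLROOT set by the environment script; the compiler invocation is illustrative), selecting ILP64 is done with a preprocessor macro that must be paired with the matching interface library at link time:

```shell
# Hypothetical compile lines. -DMKL_ILP64 makes MKL_INT 64-bit in mkl.h
# and requires the ILP64 interface library (e.g. libmkl_intel_ilp64)
# instead of the LP64 one at link time.
LP64_FLAGS="-I${MKLROOT:-}/include"                # default: 32-bit integers
ILP64_FLAGS="-DMKL_ILP64 -I${MKLROOT:-}/include"   # 64-bit integers
echo "ILP64 compile: icc -c $ILP64_FLAGS myprog.c"
```

Mixing an ILP64-compiled object with an LP64 interface library (or vice versa) corrupts integer arguments at run time, so the macro and the library choice must always agree.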
Threading model
Identify whether and how your application is threaded:
  • Threaded with the Intel compiler
  • Threaded with a third-party compiler
  • Not threaded
The compiler you use to thread your application determines which threading library you should link with your application. For applications threaded with a third-party compiler you may need to use
Intel® oneAPI Math Kernel Library
in the sequential mode (for more information, see Linking with Threading Libraries).
Number of threads
If your application uses an OpenMP* threading run-time library, determine the number of threads you want
Intel® oneAPI Math Kernel Library
to use.
By default, the OpenMP* run-time library sets the number of threads for
Intel® oneAPI Math Kernel Library
. If you need a different number, you have to set it yourself using one of the available mechanisms. For more information, see Improving Performance with Threading.
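One of those mechanisms is the MKL_NUM_THREADS environment variable, sketched below; the number 4 here is an arbitrary example value:

```shell
# Limit MKL to 4 threads via the environment. The mkl_set_num_threads()
# call from the C/Fortran API is another mechanism, and OMP_NUM_THREADS
# is the OpenMP-wide fallback when the MKL-specific variable is unset.
export MKL_NUM_THREADS=4
echo "MKL will use at most $MKL_NUM_THREADS threads"
```

The MKL-specific variable takes precedence over OMP_NUM_THREADS, which is useful when the rest of the application should run with a different OpenMP thread count.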
Linking model
Decide which linking model is appropriate for linking your application with
Intel® oneAPI Math Kernel Library:
  • Static
  • Dynamic
The link line syntax and libraries for static and dynamic linking are different. For the list of link libraries for static and dynamic models, linking examples, and other relevant topics, like how to save disk space by creating a custom dynamic library, see Linking Your Application with the Intel® Math Kernel Library.
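A sketch of that difference, assuming Linux, Intel 64, the LP64 interface, and the Intel threading layer (library names follow the standard MKL naming scheme; the compiler invocation is illustrative):

```shell
# Hypothetical link lines; $MKLROOT is assumed set by the environment script.
MKL_PATH="${MKLROOT:-}/lib/intel64"
# Static: explicit archives inside a linker group, because the interface,
# threading, and core layers have circular dependencies among them.
STATIC_LINK="-Wl,--start-group ${MKL_PATH}/libmkl_intel_lp64.a ${MKL_PATH}/libmkl_intel_thread.a ${MKL_PATH}/libmkl_core.a -Wl,--end-group -liomp5 -lpthread -lm -ldl"
# Dynamic: -l flags; the shared objects are resolved at run time.
DYNAMIC_LINK="-L${MKL_PATH} -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm -ldl"
echo "static : icc myapp.o $STATIC_LINK"
echo "dynamic: icc myapp.o $DYNAMIC_LINK"
```

Static linking produces a self-contained executable at the cost of size; dynamic linking keeps the executable small but requires the MKL shared libraries on the library path at run time.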
MPI used
Decide what MPI you will use with the
Intel® oneAPI Math Kernel Library
cluster software. You are strongly encouraged to use the latest available version of Intel® MPI.
To link your application with ScaLAPACK and/or Cluster FFT, the libraries corresponding to your particular MPI should be listed on the link line (see Working with the Intel® Math Kernel Library Cluster Software).
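As a sketch (assuming Intel MPI, the LP64 interface, and the Intel threading layer; the mpiicc invocation is illustrative), the MPI-specific part of a ScaLAPACK link line is the BLACS library, which must match your MPI implementation:

```shell
# Hypothetical cluster link fragment. The BLACS library encodes the MPI
# choice (here Intel MPI); a different MPI needs a different BLACS library.
CLUSTER_LIBS="-lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64"
COMPUTE_LIBS="-lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core"
echo "mpiicc myprog.c -L\${MKLROOT}/lib/intel64 $CLUSTER_LIBS $COMPUTE_LIBS -liomp5 -lpthread -lm"
```

Listing the cluster libraries before the computational ones matters to linkers that resolve symbols left to right, since ScaLAPACK depends on BLACS and the BLAS/LAPACK layers beneath it.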
Optimization Notice
Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.
Notice revision #20110804
This notice covers the following instruction sets: SSE2, SSE4.2, AVX2, AVX-512.

Product and Performance Information


Performance varies by use, configuration and other factors. Learn more at