Developer Guide

What You Need to Know Before You Begin Using the Intel® Math Kernel Library

Target platform
Identify the architecture of your target machine:
  • IA-32 or compatible
  • Intel® 64 or compatible
Reason:
Because Intel® MKL libraries are located in directories corresponding to your particular architecture (see Architecture Support), you should provide proper paths on your link lines (see Linking Examples).
To configure your development environment for use with Intel® MKL, set your environment variables using the script corresponding to your architecture (see Scripts to Set Environment Variables for details).
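For example, on a Linux* system you can source the architecture-specific script shipped with Intel® MKL. The installation path below is illustrative only; see Scripts to Set Environment Variables for the exact script names and locations on your system:

    # Intel 64 architecture (example installation path)
    source /opt/intel/mkl/bin/mklvars.sh intel64
    # IA-32 architecture
    source /opt/intel/mkl/bin/mklvars.sh ia32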
Mathematical problem
Identify all Intel® MKL function domains that you require:
  • BLAS
  • Sparse BLAS
  • LAPACK
  • PBLAS
  • ScaLAPACK
  • Sparse Solver routines
  • Parallel Direct Sparse Solvers for Clusters
  • Vector Mathematics functions (VM)
  • Vector Statistics functions (VS)
  • Fourier Transform functions (FFT)
  • Cluster FFT
  • Trigonometric Transform routines
  • Poisson, Laplace, and Helmholtz Solver routines
  • Optimization (Trust-Region) Solver routines
  • Data Fitting Functions
  • Extended Eigensolver Functions
Reason:
The function domain you intend to use narrows the search in the Intel® MKL Developer Reference for specific routines you need. Additionally, if you are using the Intel® MKL cluster software, your link line is function-domain specific (see Working with the Intel® Math Kernel Library Cluster Software). Coding tips may also depend on the function domain (see Other Tips and Techniques to Improve Performance).
Programming language
Intel® MKL provides support for both Fortran and C/C++ programming. Identify the language interfaces that your function domains support (see Appendix A: Intel® Math Kernel Library Language Interfaces Support).
Reason:
Intel® MKL provides language-specific include files for each function domain to simplify program development (see Language Interfaces Support by Function Domain).
For a list of language-specific interface libraries and modules, and an example of how to generate them, see also Using Language-Specific Interfaces with Intel® Math Kernel Library.
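For example, in C a single umbrella header gives access to the declarations for all function domains. The following minimal sketch (assuming the C interface and the mkl.h header) calls a BLAS routine through the CBLAS interface:

    #include <stdio.h>
    #include "mkl.h"   /* umbrella header; pulls in the domain-specific include files */

    int main(void) {
        double x[3] = {1.0, 2.0, 3.0};
        double y[3] = {4.0, 5.0, 6.0};
        /* BLAS dot product through the CBLAS interface: 1*4 + 2*5 + 3*6 = 32 */
        double d = cblas_ddot(3, x, 1, y, 1);
        printf("x . y = %f\n", d);
        return 0;
    }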
Range of integer data
If your system is based on the Intel 64 architecture, identify whether your application performs calculations with large data arrays (of more than 2^31-1 elements).
Reason:
To operate on large data arrays, you need to select the ILP64 interface, where integers are 64-bit; otherwise, use the default, LP64, interface, where integers are 32-bit (see Using the ILP64 Interface vs. LP64 Interface).
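A minimal sketch of how the choice is visible in C code (the MKL_INT type follows the selected interface; compiling with -DMKL_ILP64 and linking the ILP64 interface library makes it 64-bit, as described in Using the ILP64 Interface vs. LP64 Interface):

    #include <stdio.h>
    #include "mkl.h"

    int main(void) {
        /* MKL_INT is 8 bytes when built with -DMKL_ILP64 and the ILP64 interface
           library; with the default LP64 interface it is 4 bytes. Array dimensions
           above 2^31-1 therefore require ILP64. */
        printf("sizeof(MKL_INT) = %zu bytes\n", sizeof(MKL_INT));
        return 0;
    }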
Threading model
Identify whether and how your application is threaded:
  • Threaded with the Intel compiler
  • Threaded with a third-party compiler
  • Not threaded
Reason:
The compiler you use to thread your application determines which threading library you should link with your application. For applications threaded with a third-party compiler, you may need to use Intel® MKL in sequential mode (for more information, see Linking with Threading Libraries).
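As an illustration only, these are the threading-layer and run-time portions of a typical Linux* link line for each case; they are not complete link lines, and the library names are the common ones for this product, so confirm them in Linking with Threading Libraries or with the Intel® MKL Link Line Advisor:

    # Threaded with the Intel compiler: Intel OpenMP* threading layer
    -lmkl_intel_thread -lmkl_core -liomp5
    # Threaded with GNU* OpenMP (gcc/gfortran): GNU threading layer
    -lmkl_gnu_thread -lmkl_core -lgomp
    # Not threaded, or sequential Intel MKL required: sequential layer
    -lmkl_sequential -lmkl_core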
Number of threads
If your application uses an OpenMP* threading run-time library, determine the number of threads you want Intel® MKL to use.
Reason:
By default, the OpenMP* run-time library sets the number of threads for Intel® MKL. If you need a different number, you have to set it yourself using one of the available mechanisms. For more information, see Improving Performance with Threading.
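For example, the following minimal sketch requests four threads from within the program (mkl_set_num_threads is declared through mkl.h; the MKL_NUM_THREADS or OMP_NUM_THREADS environment variable achieves the same without code changes):

    #include "mkl.h"

    int main(void) {
        /* Ask Intel MKL to use at most four OpenMP* threads for subsequent calls.
           Equivalent run-time setting: export MKL_NUM_THREADS=4 (or OMP_NUM_THREADS=4). */
        mkl_set_num_threads(4);
        /* ... calls to Intel MKL routines ... */
        return 0;
    }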
Linking model
Decide which linking model is appropriate for linking your application with Intel® MKL libraries:
  • Static
  • Dynamic
Reason:
The link line syntax and libraries for static and dynamic linking are different. For the list of link libraries for static and dynamic models, linking examples, and other relevant topics, such as how to save disk space by creating a custom dynamic library, see Linking Your Application with the Intel® Math Kernel Library.
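For illustration only, a dynamic and a static link line for a C program on a Linux* Intel® 64 system might look like the following (the LP64 interface, Intel OpenMP* threading layer, and paths are assumptions for a common configuration; generate the exact line for your system with the Intel® MKL Link Line Advisor or see Linking Your Application with the Intel® Math Kernel Library):

    # Dynamic linking
    icc myprog.c -L${MKLROOT}/lib/intel64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm -ldl

    # Static linking: the same layers as archives inside a linker group
    icc myprog.c -Wl,--start-group ${MKLROOT}/lib/intel64/libmkl_intel_lp64.a ${MKLROOT}/lib/intel64/libmkl_intel_thread.a ${MKLROOT}/lib/intel64/libmkl_core.a -Wl,--end-group -liomp5 -lpthread -lm -ldl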
MPI used
Decide which MPI you will use with the Intel® MKL cluster software. You are strongly encouraged to use the latest available version of Intel® MPI.
Reason:
To link your application with ScaLAPACK and/or Cluster FFT, the libraries corresponding to your particular MPI should be listed on the link line (see Working with the Intel® Math Kernel Library Cluster Software).
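As an illustration, a ScaLAPACK link line against Intel® MPI on a Linux* Intel® 64 system typically adds the ScaLAPACK library and the MPI-specific BLACS library to the usual layers. The LP64 names below are assumptions for a common configuration; verify them in Working with the Intel® Math Kernel Library Cluster Software or with the Intel® MKL Link Line Advisor:

    mpiicc myprog.c -L${MKLROOT}/lib/intel64 -lmkl_scalapack_lp64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -lmkl_blacs_intelmpi_lp64 -liomp5 -lpthread -lm -ldl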
Optimization Notice
Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.
Notice revision #20110804
This notice covers the following instruction sets: SSE2, SSE4.2, AVX2, AVX-512.
