What You Need to Know Before You Begin Using the Intel® Math Kernel Library
Target platform
Identify the architecture of your target machine.
Reason: Because Intel® oneAPI Math Kernel Library libraries are located in directories corresponding to your particular architecture (see Architecture Support), you should provide proper paths on your link lines (see Linking Examples). To configure your development environment for use with Intel® oneAPI Math Kernel Library, set your environment variables using the script corresponding to your architecture.
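As an illustrative sketch (the install path below is an assumed default location, not taken from this document), on Linux* the environment is typically configured by sourcing the oneAPI environment script, which sets PATH, LD_LIBRARY_PATH, and MKLROOT for your architecture:

```shell
# Assumed default oneAPI install location; adjust for your system.
source /opt/intel/oneapi/setvars.sh
# After sourcing, MKLROOT points at the Intel oneAPI Math Kernel Library
# installation and can be referenced on link lines.
echo "$MKLROOT"
```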
Mathematical problem
Identify all Intel® oneAPI Math Kernel Library function domains that you require (such as BLAS, LAPACK, or FFT).
Reason: The function domain you intend to use narrows the search in the Intel® oneAPI Math Kernel Library reference documentation for the specific routines you need.
Programming language
Identify the programming language of your application: Fortran or C/C++.
Reason: Intel® oneAPI Math Kernel Library provides language-specific interface libraries and modules for its function domains. For a list of language-specific interface libraries and modules and an example of how to generate them, see also
Using Language-Specific Interfaces with Intel® Math Kernel Library.
Range of integer data
If your system is based on the Intel® 64 architecture, identify whether your application performs calculations with large data arrays (of more than 2^31 - 1 elements).
Reason: To operate on large data arrays, you need to select the ILP64 interface, where integers are 64-bit; otherwise, use the default LP64 interface, where integers are 32-bit (see
Using the ILP64 Interface vs. LP64 Interface).
Threading model
Identify whether and how your application is threaded: with the Intel® compiler, with a third-party compiler, or not threaded at all.
Reason: The compiler you use to thread your application determines which threading library you should link with your application. For applications threaded with a third-party compiler, you may need to use Intel® oneAPI Math Kernel Library in sequential mode.
Number of threads
If your application uses an OpenMP* threading run-time library, determine the number of threads you want Intel® oneAPI Math Kernel Library to use.
Reason: By default, the OpenMP* run-time library sets the number of threads that Intel® oneAPI Math Kernel Library uses. If you need a different number, you have to set it yourself using one of the available mechanisms.
Linking model
Decide which linking model is appropriate for linking your application with Intel® oneAPI Math Kernel Library: static or dynamic.
Reason: The link line syntax and libraries for static and dynamic linking are different. For the list of link libraries for static and dynamic models, linking examples, and other relevant topics, like how to save disk space by creating a custom dynamic library, see
Linking Your Application with the Intel® Math Kernel Library.
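As a hedged illustration (assuming Linux*, gcc, the GNU OpenMP* run time, the LP64 interface, and an MKLROOT environment variable set by the library's environment script; myprog.c is a placeholder source file), dynamic and static link lines differ roughly as follows:

```shell
# Dynamic linking: name the shared interface, threading, and core layers.
gcc myprog.c -L${MKLROOT}/lib/intel64 \
    -lmkl_intel_lp64 -lmkl_gnu_thread -lmkl_core \
    -lgomp -lpthread -lm -ldl

# Static linking: list the archives inside a start/end group so the
# linker resolves their mutual (circular) dependencies.
gcc myprog.c -Wl,--start-group \
    ${MKLROOT}/lib/intel64/libmkl_intel_lp64.a \
    ${MKLROOT}/lib/intel64/libmkl_gnu_thread.a \
    ${MKLROOT}/lib/intel64/libmkl_core.a \
    -Wl,--end-group -lgomp -lpthread -lm -ldl
```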
MPI used
Decide what MPI you will use with the Intel® oneAPI Math Kernel Library cluster software.
Reason: To link your application with ScaLAPACK and/or Cluster FFT, the libraries corresponding to your particular MPI should be listed on the link line (see
Working with the Intel® Math Kernel Library Cluster Software).
Optimization Notice
Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.
Notice revision #20110804