Developer Guide

Linking with Threading Libraries

The Intel® oneAPI Math Kernel Library threading layer defines how Intel® oneAPI Math Kernel Library functions utilize the multiple computing cores of the system on which the application runs. You must link your application with exactly one library from this layer, as explained below. Depending on whether that library is a threading or a sequential library, Intel® oneAPI Math Kernel Library runs in parallel or sequential mode, respectively.
In the parallel mode, Intel® oneAPI Math Kernel Library utilizes the multiple processor cores available on your system, uses the OpenMP* or Intel TBB threading technology, and requires a proper threading run-time library (RTL) to be linked with your application. Independently of its use of Intel® oneAPI Math Kernel Library, the application may also require a threading RTL of its own. Link no more than one threading RTL to your application. Threading RTLs are provided by your compiler. Intel® oneAPI Math Kernel Library provides several threading libraries, each dependent on the threading RTL of a particular compiler, and your choice of the Intel® oneAPI Math Kernel Library threading library must be consistent with the threading RTL that you use in your application.
The OpenMP RTL of the Intel® compiler is the libiomp5.dylib library, located under <parent directory>/compiler/lib. This RTL is compatible with the GNU* compilers (gcc and gfortran). You can find additional information about the Intel OpenMP RTL at https://www.openmprtl.org.
The Intel TBB RTL of the Intel® compiler is the libtbb.dylib library, located under <parent directory>/tbb/lib. You can find additional information about the Intel TBB RTL at https://www.threadingbuildingblocks.org.
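As a sketch of how the two parallel modes pair a threading layer with its RTL, the dynamic link lines can be assembled as below. The MKLROOT path, the LP64 interface choice, and the trailing system libraries are assumptions about a typical oneAPI installation, not prescriptions from this guide; Intel's setvars.sh script normally sets MKLROOT for you.

```shell
# Sketch of dynamic link lines for the two parallel modes.
# Assumptions: MKLROOT location, LP64 interface library, trailing system libs.
MKLROOT=${MKLROOT:-/opt/intel/oneapi/mkl/latest}

# OpenMP threading layer: libmkl_intel_thread requires the libiomp5 RTL.
OMP_LINK="-L${MKLROOT}/lib -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm -ldl"

# Intel TBB threading layer: libmkl_tbb_thread requires the libtbb RTL.
TBB_LINK="-L${MKLROOT}/lib -lmkl_intel_lp64 -lmkl_tbb_thread -lmkl_core -ltbb -lpthread -lm -ldl"

# The hypothetical application object file would then be linked as:
echo "clang myapp.o ${OMP_LINK}"
```

Note that each line names exactly one threading layer and the one RTL it depends on, consistent with the rule of linking no more than one threading RTL.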
In the sequential mode, Intel® oneAPI Math Kernel Library runs unthreaded code, does not require a threading RTL, and does not respond to environment variables or functions that control the number of threads. Avoid using the library in the sequential mode unless you have a particular reason for doing so, such as the following:
  • Your application needs a threading RTL that none of the Intel® oneAPI Math Kernel Library threading libraries is compatible with
  • Your application is already threaded at a top level, and using the parallel Intel® oneAPI Math Kernel Library only degrades the application performance by interfering with that threading
  • Your application is intended to be run on a single thread, like a Message Passing Interface (MPI) application
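To illustrate the difference in behavior, the parallel libraries honor thread-count environment variables that the sequential library simply ignores. MKL_NUM_THREADS and OMP_NUM_THREADS are real oneMKL/OpenMP controls; the application name below is hypothetical.

```shell
# Thread-count controls honored only by the parallel oneMKL libraries;
# the sequential library ignores both of these variables.
export MKL_NUM_THREADS=4   # oneMKL-specific; takes precedence over OMP_NUM_THREADS
export OMP_NUM_THREADS=4   # generic OpenMP setting, used if MKL_NUM_THREADS is unset
# ./myapp                  # hypothetical binary linked with libmkl_intel_thread
echo "requested threads: ${MKL_NUM_THREADS}"
```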
It is critical to link the application with the proper RTL. The table below shows which library in the Intel® oneAPI Math Kernel Library threading layer and which threading RTL to choose in each scenario:

| Application: Uses OpenMP | Application: Compiled with | Execution Mode | Intel® oneAPI Math Kernel Library Threading Layer | RTL Required |
|---|---|---|---|---|
| no | any compiler | parallel | Static linking: libmkl_intel_thread.a; Dynamic linking: libmkl_intel_thread.dylib | libiomp5.dylib |
| no | any compiler | parallel | Static linking: libmkl_tbb_thread.a; Dynamic linking: libmkl_tbb_thread.dylib | libtbb.dylib |
| no | any compiler | sequential | Static linking: libmkl_sequential.a; Dynamic linking: libmkl_sequential.dylib | none |
| yes | Intel compiler | parallel | Static linking: libmkl_intel_thread.a; Dynamic linking: libmkl_intel_thread.dylib | libiomp5.dylib |
| yes | GNU compiler | parallel | Static linking: libmkl_intel_thread.a; Dynamic linking: libmkl_intel_thread.dylib | libiomp5.dylib |
| yes | any other compiler | parallel | Not supported. Use Intel® oneAPI Math Kernel Library in the sequential mode. | none |
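For the static-linking variants, the threading-layer archives are typically passed to the linker by full path while the OpenMP RTL itself remains a dynamic library. A minimal sketch, in which the MKLROOT path and the LP64 interface library are assumptions about a typical installation:

```shell
# Static-linking sketch: oneMKL layers from .a archives, OpenMP RTL dynamic.
# Assumptions: MKLROOT location and LP64 interface library.
MKLROOT=${MKLROOT:-/opt/intel/oneapi/mkl/latest}

STATIC_LINK="${MKLROOT}/lib/libmkl_intel_lp64.a ${MKLROOT}/lib/libmkl_intel_thread.a ${MKLROOT}/lib/libmkl_core.a -liomp5 -lpthread -lm -ldl"

# The hypothetical application object file would then be linked as:
echo "clang myapp.o ${STATIC_LINK}"
```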
For the sequential mode, add the POSIX threads library (libpthread) to your link line, because the libmkl_sequential.a and libmkl_sequential.dylib libraries depend on libpthread.
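A corresponding sequential-mode link line can be sketched as follows; again, the MKLROOT path and the LP64 interface library are assumptions, but note that libpthread appears while no threading RTL does.

```shell
# Sequential-mode link sketch: no threading RTL, but libpthread is required.
# Assumptions: MKLROOT location and LP64 interface library.
MKLROOT=${MKLROOT:-/opt/intel/oneapi/mkl/latest}

SEQ_LINK="-L${MKLROOT}/lib -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm -ldl"

# The hypothetical application object file would then be linked as:
echo "clang myapp.o ${SEQ_LINK}"
```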
Product and Performance Information
Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.
Notice revision #20201201