Developer Guide

  • 2021.1
  • 12/04/2020
  • Public Content
Contents

Layered Model Concept

Intel® oneAPI Math Kernel Library is structured to support multiple compilers and interfaces, both serial and multi-threaded modes, different implementations of threading run-time libraries, and a wide range of processors. Conceptually, Intel® oneAPI Math Kernel Library can be divided into distinct parts that support different interfaces, threading models, and core computations:
  1. Interface Layer
  2. Threading Layer
  3. Computational Layer
You can combine Intel® oneAPI Math Kernel Library libraries to meet your needs by linking with one library from each layer. To support threading with different compilers, you also need an appropriate threading run-time library (RTL). These libraries are provided by the compilers and are not included in Intel® oneAPI Math Kernel Library.
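On Linux, for example, the layer-by-layer choice appears directly on the link line. The sketch below is illustrative only: the library names follow the common oneMKL naming pattern, but the exact names, paths, and compiler driver depend on your product version and platform.

```shell
# Sketch of a Linux link line choosing one library from each layer
# (names assume the usual oneMKL convention; verify against your install):
#   interface layer:     mkl_intel_lp64   (LP64 interface)
#   threading layer:     mkl_intel_thread (threaded) or mkl_sequential
#   computational layer: mkl_core
#   threading RTL:       iomp5 (provided by the compiler, not by oneMKL)
icx myapp.c -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm
```

Swapping mkl_intel_thread for mkl_sequential in this line is all it takes to switch the application from threaded to sequential mode.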
Each layer is described in more detail below.

Interface Layer
  This layer matches the compiled code of your application with the threading and/or computational parts of the library. It provides:
  • LP64 and ILP64 interfaces.
  • Compatibility with compilers that return function values differently.

Threading Layer
  This layer:
  • Provides a way to link threaded Intel® oneAPI Math Kernel Library with supported compilers.
  • Enables you to link with a threaded or sequential mode of the library.
  This layer is compiled for different environments (threaded or sequential) and compilers (from Intel, GNU*).

Computational Layer
  This layer accommodates multiple architectures by identifying architecture features at run time and choosing the appropriate binary code.
Optimization Notice
Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.
Notice revision #20110804

Product and Performance Information

Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.