Developer Reference

Contents

Performance Enhancements

The Intel® oneAPI Math Kernel Library has been optimized to exploit both processor and system features and capabilities. Special care has been given to the routines that profit most from cache-management techniques, particularly matrix-matrix operation routines such as dgemm().
In addition, code optimization techniques have been applied so that the scheduling of the processor's integer and floating-point units depends as little as possible on results that are still in flight.
The major optimization techniques used throughout the library include:
  • Loop unrolling to minimize loop management costs
  • Blocking of data to improve data reuse opportunities
  • Copying to reduce chances of data eviction from cache
  • Data prefetching to help hide memory latency
  • Multiple simultaneous operations (for example, dot products in dgemm) to eliminate stalls due to arithmetic unit pipelines
  • Use of hardware features such as the SIMD arithmetic units, where appropriate
These are the techniques from which arithmetic code benefits the most.
Product and Performance Information
Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.
Notice revision #20201201
