Python accelerated (using Intel® MKL)

By James R.

Published: 01/03/2016   Last Updated: 01/03/2016

Python can be accelerated by having its numerical libraries, NumPy and SciPy, use the Intel® Math Kernel Library (Intel® MKL).  This requires no change to your Python application and immediately improves performance on Intel processors, including Intel® Xeon® processors and Intel® Xeon Phi™ processors (code-named Knights Landing).  Note: Intel MKL is included in several Intel products and is also available free to anyone under a community license.
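
To confirm whether an existing NumPy installation is already backed by Intel MKL, you can inspect its build configuration. A minimal sketch (the exact output format varies by NumPy version):

    import numpy as np

    # Show the BLAS/LAPACK libraries this NumPy build is linked against.
    # An MKL-backed build typically lists "mkl_rt" (or similar) among the
    # library names.
    np.show_config()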

There are several ways to do this; the easiest is simply to use a distribution whose Python libraries are already built against Intel MKL.

Here is a list of freely available distributions that offer accelerated Python performance:

You can also build the libraries yourself to use Intel MKL.  Instructions for doing so, along with other performance-oriented tuning advice and tips, can be found at https://software.intel.com/runtimes/python.  For offloading to Intel Xeon Phi coprocessors, you may be interested in pyMIC; read about it at https://software.intel.com/en-us/articles/pymic-a-python-offload-module-for-the-intelr-xeon-phitm-coprocessor.
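
A quick way to see the effect of an accelerated build is to time a BLAS-heavy operation such as a dense matrix multiplication. A minimal sketch; the matrix size is illustrative only, and absolute timings depend on your hardware and NumPy build:

    import time
    import numpy as np

    # Dense matrix multiplication is dispatched to the underlying BLAS
    # library (Intel MKL's threaded GEMM routines in an MKL-backed build).
    n = 4096  # illustrative size; adjust to fit your machine's memory
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    start = time.time()
    c = np.dot(a, b)
    elapsed = time.time() - start
    print("%dx%d matrix multiply took %.2f seconds" % (n, n, elapsed))

On an MKL-backed installation this multiplication typically runs multi-threaded, so it should complete noticeably faster than with a reference BLAS.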

 
