Numpy/Scipy with Intel® MKL and Intel® Compilers

NumPy/SciPy Application Note

Please note: this application note is outdated but is kept here for reference. Instead of building NumPy/SciPy with Intel® MKL manually as described below, we strongly recommend developers use the Intel® Distribution for Python*, which ships prebuilt NumPy/SciPy based on the Intel® Math Kernel Library (Intel® MKL) and more.

Please refer to the Intel® Distribution for Python* main page.

Installing Intel® Distribution for Python* and Intel® Performance Libraries with Anaconda*:


Step 1 - Overview

This guide is intended to help current NumPy/SciPy users to take advantage of Intel® Math Kernel Library (Intel® MKL). For a prebuilt ready solution, download the Intel® Distribution for Python*.

NumPy automatically maps operations on vectors and matrices to the BLAS and LAPACK functions wherever possible. Since Intel® MKL supports these de-facto interfaces, NumPy can benefit from Intel MKL optimizations through simple modifications to the NumPy scripts.
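For example, a plain matrix product in NumPy is dispatched to the BLAS *gemm routine of whatever library NumPy was linked against, so no script changes are needed beyond the build configuration. A quick, backend-agnostic way to see this (it works with any BLAS, not just MKL) is:

```python
import numpy as np

# np.dot on 2-D double-precision arrays dispatches to the BLAS dgemm routine
# of whichever library NumPy was built against (Intel MKL after the build below).
a = np.random.random((200, 300))
b = np.random.random((300, 100))
c = np.dot(a, b)
print(c.shape)  # (200, 100)

# Show which BLAS/LAPACK libraries this NumPy build is linked with;
# an MKL-linked build mentions mkl_rt in this output.
np.__config__.show()
```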

NumPy is the fundamental package required for scientific computing with Python. It consists of:

  • a powerful N-dimensional array object
  • sophisticated (broadcasting) functions
  • tools for integrating C/C++ and Fortran code
  • useful linear algebra, Fourier transform, and random number capabilities.

Besides its obvious scientific uses, NumPy can also be used as an efficient multi-dimensional container of generic data.
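As a small illustration of the N-dimensional array object and the broadcasting functions mentioned above:

```python
import numpy as np

# An N-dimensional array: 12 integers viewed as a 3x4 grid.
grid = np.arange(12).reshape(3, 4)

# Broadcasting: the 1-D row is "stretched" across all three rows of the
# grid without being copied three times.
row = np.array([10, 20, 30, 40])
shifted = grid + row
print(shifted[0])  # [10 21 32 43]
```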

For more information on NumPy, please visit the NumPy project website.

SciPy includes modules for statistics, optimization, integration, linear algebra, Fourier transforms, signal and image processing, ODE solvers, and more. The SciPy library depends on NumPy, which provides convenient and fast N-dimensional array manipulation. SciPy is built to work with NumPy arrays and provides many user-friendly and efficient numerical routines, such as routines for numerical integration and optimization. Please refer to the SciPy documentation for more details.
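As a tiny example of the numerical integration routines mentioned above (assuming SciPy is installed):

```python
from scipy import integrate

# Numerically integrate x**2 over [0, 1]; the exact answer is 1/3.
# quad returns the integral value and an estimate of the absolute error.
value, abs_error = integrate.quad(lambda x: x**2, 0.0, 1.0)
print(value)  # approximately 0.3333333333333333
```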


Version Information

This application note was created to help NumPy/SciPy users to make use of the latest versions of Intel MKL on Linux platforms.

The procedures described in this article have been tested for both Python 2.7 and Python 3.6.  These have been verified with Intel® MKL 2018, Intel® Compilers 18.0 from Intel® Parallel Studio XE 2018, numpy 1.13.3 and scipy 1.0.0rc2.

Step 2 - Downloading NumPy and SciPy Source Code

The NumPy source code can be downloaded from the NumPy project download page.


Intel® MKL is bundled with Intel® Parallel Studio XE.  If you are compiling with the Intel C/C++ and Fortran Compilers, they are also included in any of the three Intel Parallel Studio XE editions (Composer, Professional, and Cluster).

Step 3 - Configuration

Use the following commands to extract the NumPy source from the downloaded numpy-x.x.x.tar.gz:

$gunzip numpy-x.x.x.tar.gz
$tar -xvf numpy-x.x.x.tar

(Equivalently, $tar -xzf numpy-x.x.x.tar.gz performs both steps in one command.) The above will create a directory named numpy-x.x.x

To extract SciPy, use the commands below:

$gunzip scipy-x.x.x.tar.gz
$tar -xvf scipy-x.x.x.tar

The scipy-x.x.x directory will be created with the extracted files.

You may also get the latest numpy and scipy source from their respective github repositories.

Make sure the C/C++ and Fortran compilers are installed and on your PATH. Also add the compiler (C/C++ and Fortran) and Intel MKL library directories to LD_LIBRARY_PATH.
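With a default Intel Parallel Studio XE 2018 installation, one convenient way to set all of this up is to source the compilervars script, which configures PATH, LD_LIBRARY_PATH, and the MKL variables in one step. This is a sketch of the environment configuration, assuming the default /opt/intel installation prefix; adjust the path and architecture argument to match your system:

```shell
# Assumes the default installation prefix; pass ia32 instead of intel64
# for a 32-bit target.
source /opt/intel/compilers_and_libraries_2018/linux/bin/compilervars.sh intel64

# Verify the compilers are now found on PATH:
which icc ifort
```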

Step 4 - Building and Installing NumPy

Change directory to numpy-x.x.x.
Create a site.cfg from the site.cfg.example file shipped with the source.

Edit site.cfg as follows:

If you are building on an Intel 64 platform, add the following lines to site.cfg in your top-level NumPy directory to use Intel® MKL (assuming the default installation path for Intel MKL from Intel Parallel Studio XE or Intel Composer XE):

library_dirs = /opt/intel/compilers_and_libraries_2018/linux/mkl/lib/intel64
include_dirs = /opt/intel/compilers_and_libraries_2018/linux/mkl/include
mkl_libs = mkl_rt
lapack_libs =

If you are building NumPy for 32-bit, add the following lines instead:

library_dirs = /opt/intel/compilers_and_libraries_2018/linux/mkl/lib/ia32
include_dirs = /opt/intel/compilers_and_libraries_2018/linux/mkl/include
mkl_libs = mkl_rt
lapack_libs = 


Modify the self.cc_exe line in the Intel C compiler configuration under numpy/distutils/

Modify this line depending on whether you are building 32-bit or 64-bit. For example, if you are building 64-bit, modify the line in the IntelEM64TCCompiler class (whose compiler_type is 'intelem'):

mpopt = 'openmp' if v and v < '15' else 'qopenmp'
self.cc_exe = ('icc -O3 -g -fPIC -fp-model strict -fomit-frame-pointer -xhost -{}').format(mpopt)

Here, -O3 enables optimizations for speed along with more aggressive loop transformations such as fusion, block-unroll-and-jam, and collapsing IF statements; -qopenmp (-openmp on older compilers) enables OpenMP threading; and -xhost tells the compiler to generate instructions for the highest SIMD instruction set available on the compilation host processor. If you are using the ILP64 interface, also add the -DMKL_ILP64 compiler flag.

Run icc --help for more information on processor-specific options, and refer to the Intel Compiler documentation for details on the various compiler flags.

Modify the Fortran compiler configuration in numpy-x.x.x/numpy/distutils/fcompiler/ to use the following compiler options for the Intel Fortran Compiler:

For both ia32 and Intel64:

mpopt = 'openmp' if v and v < '15' else 'qopenmp'
return ['-xhost -fp-model strict -fPIC -{}'.format(mpopt)]

If you are using the latest source, this change is already present. You may also explore other compiler optimization flags.

If you are using the ILP64 interface of Intel MKL, please add the -i8 flag above. If you are using an older version of NumPy/SciPy, refer to the corresponding file in the latest version of NumPy, which can be adapted to use the compiler options mentioned above.

Compile and install NumPy with the Intel compiler. For Intel64 platforms, run:

$python setup.py config --compiler=intelem build_clib --compiler=intelem build_ext --compiler=intelem install

and for the ia32 builds:

$python setup.py config --compiler=intel build_clib --compiler=intel build_ext --compiler=intel install

The only difference is the compiler name: "intel" for ia32 and "intelem" for Intel64.

You may pass --prefix=<install_dir> to the install command if you want to install into a directory of your choice. In this case, after a successful NumPy build, you have to export the PYTHONPATH environment variable pointing to your install folder:

$export PYTHONPATH=<install_dir>/lib64/pythonx.x/site-packages
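After installation, you can verify which BLAS/LAPACK libraries the new NumPy picked up; a successful MKL build reports mkl_rt in its configuration output. A minimal sanity check (it also runs with a non-MKL NumPy; a fixed seed is used so the result is deterministic):

```python
import numpy as np

# Print the build-time BLAS/LAPACK configuration; an MKL-linked build
# lists mkl_rt among the libraries here.
np.__config__.show()

# A small computation that exercises BLAS (matrix-vector product) and
# LAPACK (linear solve):
np.random.seed(0)
a = np.random.random((50, 50))
x = np.linalg.solve(a, np.ones(50))
print(np.allclose(a @ x, np.ones(50)))  # True
```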

Build and Install SciPy

Compile and install SciPy with the Intel compilers. For 64-bit builds:

$python setup.py config --compiler=intelem --fcompiler=intelem build_clib --compiler=intelem --fcompiler=intelem build_ext --compiler=intelem --fcompiler=intelem install

and for the ia32 builds:

$python setup.py config --compiler=intel --fcompiler=intel build_clib --compiler=intel --fcompiler=intel build_ext --compiler=intel --fcompiler=intel install

Setup Library path for Intel MKL and Intel Compilers

If you built NumPy/SciPy for Intel64 platforms:

$export LD_LIBRARY_PATH=/opt/intel/compilers_and_libraries_2018/linux/mkl/lib/intel64/:/opt/intel/compilers_and_libraries_2018/linux/lib/intel64:$LD_LIBRARY_PATH

If you built NumPy for ia32 platforms:

$export LD_LIBRARY_PATH=/opt/intel/compilers_and_libraries_2018/linux/mkl/lib/ia32/:/opt/intel/compilers_and_libraries_2018/linux/lib/ia32:$LD_LIBRARY_PATH

It is possible that LD_LIBRARY_PATH causes a problem if you have installed Intel MKL and Intel Composer XE in directories other than the standard ones. The only solution we've found that always works is to build Python, NumPy, and SciPy inside an environment where you've set the LD_RUN_PATH variable, e.g. for the ia32 platform:

$export LD_RUN_PATH=/opt/intel/compilers_and_libraries_2018/linux/mkl/lib/ia32/:/opt/intel/compilers_and_libraries_2018/linux/lib/ia32:$LD_LIBRARY_PATH


Note: We recommend using arrays in 'C' (row-major) ordering, which is the NumPy default, rather than Fortran-style (column-major) ordering. NumPy uses CBLAS, and C-ordered arrays give better performance.
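The memory layout can be chosen explicitly when an array is created, and inspected via the array's flags:

```python
import numpy as np

c_arr = np.ones((1000, 1000))        # C (row-major) order is the NumPy default
f_arr = np.asfortranarray(c_arr)     # explicit Fortran (column-major) copy

print(c_arr.flags['C_CONTIGUOUS'])   # True
print(f_arr.flags['F_CONTIGUOUS'])   # True

# Row-wise operations traverse memory contiguously on C-ordered arrays,
# which matches the layout the CBLAS interface expects.
row_sums = c_arr.sum(axis=1)
```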

Appendix A: Example

The example Python script below performs matrix multiplication with NumPy installed against Intel MKL; it is provided for illustration purposes.



import numpy as np
import time

N = 6000
M = 10000
k_list = [64, 80, 96, 104, 112, 120, 128, 144, 160, 176, 192, 200, 208, 224, 240, 256, 384]

def get_gflops(M, N, K):
    # C = A x B with A of shape (M, N) and B of shape (N, K) performs
    # M*K*(2N-1) floating-point operations.
    return M*K*(2.0*N - 1.0) / 1000**3

for K in k_list:
    a = np.array(np.random.random((M, N)), dtype=np.double, order='C', copy=False)
    b = np.array(np.random.random((N, K)), dtype=np.double, order='C', copy=False)
    np.dot(a, b)  # warm-up run, excluded from the timing
    start = time.time()
    for _ in range(5):
        c = np.dot(a, b)
    end = time.time()
    tm = (end - start) / 5.0
    print('{0:4}, {1:9.7}, {2:9.7}'.format(K, tm, get_gflops(M, N, K) / tm))

Appendix B: Performance Comparison




Examples for LU, Cholesky, and SVD are available for download.

Please note all the charts in this article were generated with the Intel MKL 11.1 update 1 version.

Appendix C: Known Issues

When the -O3 or -O2 (default) compiler optimization flags are used with ifort, one of the SciPy tests may fail. This is a known corner case; as a workaround, you can use -O1.


Building with GNU Compiler chain:

Make the modifications to the MKL section in site.cfg as described above. To build NumPy and SciPy with the GNU compilers, you must link with mkl_rt only in the site.cfg file; any other linking method will not work.

Export the compiler flags as:

$export CFLAGS="-fopenmp -m64 -mtune=native -O3 -Wl,--no-as-needed"
$export CXXFLAGS="-fopenmp -m64 -mtune=native -O3 -Wl,--no-as-needed"
$export LDFLAGS="-ldl -lm"
$export FFLAGS="-fopenmp -m64 -mtune=native -O3"

Then run the config, build, install commands for both numpy and scipy from their respective source folders.

If you want to use GNU OpenMP instead of Intel OpenMP, you should set MKL_THREADING_LAYER=GNU.

Since both NumPy and SciPy provide linear algebra functions, users can call either the NumPy BLAS or the SciPy BLAS, but not both. Using both at the same time is not supported by MKL and may lead to crashes.

When using scipy BLAS, you must set MKL_INTERFACE_LAYER=GNU.

The environment variables MKL_THREADING_LAYER and MKL_INTERFACE_LAYER above are supported only in Intel MKL 11.1 update 3 and later.
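These variables must be set before NumPy (and hence MKL) is loaded, either in the shell or at the top of the script. A minimal sketch of setting them from Python, per the recommendations above:

```python
import os

# Select the GNU OpenMP runtime and the GNU interface layer; MKL reads these
# at load time, so they must be set before the first `import numpy`.
os.environ['MKL_THREADING_LAYER'] = 'GNU'
os.environ['MKL_INTERFACE_LAYER'] = 'GNU'

import numpy as np  # an MKL-linked NumPy now initializes with the GNU layers
```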

Related Link and Trouble Shooting

Building Numpy/Scipy with Intel® MKL and Intel® Compilers on Windows

For more complete information about compiler optimizations, see our Optimization Notice.




Could you kindly post instructions for building numpy and scipy with no host-specific optimization? We have a cluster with heterogeneous CPUs (all 64bit). Could you also give an alternative to mkl_rt?




Thanks for pointing out the broken link; it's fixed now. Yes, MKL is no longer available as a standalone product and is part of the suite products.

For students, we still (as of 1/9/2015) offer free C++ tools, as mentioned on the education offerings page.


Thanks for this very useful post! Working as an academic, I was hoping to download a non-commercial version of the MKL library (or rather emerge it using Gentoo's Portage, which requires a licence file). However under the listed pre-requisites, it says:


Intel MKL can be obtained from the following options: Download a FREE evaluation version of the Intel MKL product.
Download the FREE non-commercial* version of the Intel MKL product.
All of these can be obtained at: Intel® Math Kernel Library product web page.

Unfortunately, the link is broken and it seems the MKL is no longer treated an individual Intel product. I realise it does come with Studio/Parallel XE and other products that are not free for academic use. Therefore I'm just posting to confirm whether or not there is presently a free non-commercial version of the Intel MKL product that academics can use to make the most out of NumPy?

Scipy fails many tests on Windows when using the -O2 flag for ifort (the default since numpy 1.8) instead of -O1. See <>.

Hey talcite!

   Thanks for pointing out that error; we have fixed it now.



It looks like there's a mistake in the 64bit site.cfg portion of the article.

It currently reads:
library_dirs = /opt/intel/mkl/composer_xe_2013/lib/intel64

and it should read
library_dirs = /opt/intel/composer_xe_2013/lib/intel64

Took me awhile to figure out why it wasn't linking BLAS and LAPACK.

Thanks for the article anyways!

After building both numpy and scipy using the instructions above, I get the following error when importing scipy:

ImportError: /usr/lib/atlas/ undefined symbol: _gfortran_st_write_done

One suggestion I found states that I should NOT use ifort when building numpy ... otherwise I'll get this error ... which defeats the point of using the Intel compilers, right?

Any suggestions on how to fix this problem?


I used this page as a guide to build NumPy with Intel Composer 2013 and the latest MKL. I would like to clarify the modifications that need to be made to the distutils files:

When you edit the C compiler configuration, make sure you are editing the section of the file that corresponds to the command-line flag you will use for the build. For example, for a 64-bit system, you need to edit the class called IntelEM64TCCompiler. In the __init__ function, set

self.cc_exe = 'icc -m64 -O3 -g -fPIC -fp-model strict -fomit-frame-pointer -openmp -xhost'
You should then build NumPy with the command:
python setup.py build --compiler=intelem --fcompiler=intelem

Then modify the Fortran compiler configuration file under numpy/distutils/fcompiler/. Modify the class that corresponds to the command-line flags you will use to build NumPy. For a 64-bit system, modify the class IntelEM64TFCompiler. Look for the function called get_flags_arch and add the command-line options as follows:

opt = ['-O3', '-openmp', '-fp-model strict', '-fPIC']

Thanks for posting this article! More information can be found on my blog (see link below).

If you are using the latest NumPy/SciPy, you don't need to make this change, since it has already been applied in the NumPy/SciPy source.

