Submitted by Vipin Kumar E K (Intel) on

**NumPy/SciPy Application Note**

**Step 1 - Overview**

This guide is intended to help current NumPy/SciPy users take advantage of the Intel® Math Kernel Library (Intel® MKL). **NumPy** automatically maps operations on vectors and matrices to BLAS and LAPACK functions wherever possible. Since Intel MKL supports these de facto interfaces, NumPy can benefit from Intel MKL optimizations through simple modifications to the NumPy build scripts.

NumPy is the fundamental package required for scientific computing with Python. It consists of:

- a powerful N-dimensional array object
- sophisticated (broadcasting) functions
- tools for integrating C/C++ and Fortran code
- useful linear algebra, Fourier transform, and random number capabilities.

Besides its obvious scientific uses, NumPy can also be used as an efficient multi-dimensional container of generic data.
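As a quick illustration (a sketch, not part of the original note), the following shows the kind of operation NumPy hands off to BLAS; with an MKL-linked NumPy, a 2-D double-precision matrix product like this is dispatched to MKL's `*gemm` routines:

```python
import numpy as np

# A plain matrix product: NumPy dispatches this to the BLAS *gemm routine,
# which Intel MKL provides when NumPy is linked against it.
a = np.random.rand(200, 300)
b = np.random.rand(300, 100)
c = np.dot(a, b)
print(c.shape)
```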

For more information on NumPy, please visit http://NumPy.scipy.org/

**SciPy** includes modules for statistics, optimization, integration, linear algebra, Fourier transforms, signal and image processing, ODE solvers, and more. The SciPy library depends on NumPy, which provides convenient and fast N-dimensional array manipulation. SciPy is built to work with NumPy arrays and provides many user-friendly and efficient numerical routines, such as routines for numerical integration and optimization. Please refer to http://www.scipy.org for more details on SciPy.
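For instance, a minimal SciPy sketch (illustrative only, not from the original note) using one of those numerical-integration routines:

```python
import numpy as np
from scipy import integrate

# Integrate sin(x) over [0, pi]; the exact value is 2.
val, err = integrate.quad(np.sin, 0.0, np.pi)
print(val)
```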

**Version Information**

This application note was created to help NumPy/SciPy users to make use of the latest versions of Intel MKL on Linux platforms.
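To confirm which NumPy/SciPy versions you have installed, a quick sanity check (illustrative sketch):

```python
import numpy
import scipy

# The procedures in this article were verified with numpy 1.9.2 and scipy 0.15.1.
print(numpy.__version__)
print(scipy.__version__)
```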

The procedures described in this article have been tested with both Python 2.7 and Python 3.4, and verified with Intel MKL 11.2, Intel Compilers 15.0, NumPy 1.9.2, and SciPy 0.15.1.

**Step 2 - Downloading NumPy and SciPy Source Code**

The NumPy source code can be downloaded from:

http://www.scipy.org/Download

**Prerequisites**

Intel® MKL is bundled with Intel® Parallel Studio XE. If you are compiling with the Intel C/C++ and Fortran Compilers, they are included in all three Intel Parallel Studio XE editions (Composer, Professional, and Cluster).

**Step 3 - Configuration**

Use the following commands to **extract the NumPy tar file** from the downloaded numpy-x.x.x.tar.gz:

$gunzip numpy-x.x.x.tar.gz

$tar -xvf numpy-x.x.x.tar

The above will create a directory named numpy-x.x.x

To extract SciPy, use the following commands:

$gunzip scipy-x.x.x.tar.gz

$tar -xvf scipy-x.x.x.tar

The scipy-x.x.x directory will be created with extracted files.

You may also get the latest numpy and scipy source from their respective github repositories.
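Before building, the compilers and Intel MKL must be discoverable. The following sketch assumes the default Intel Composer XE 2015 install locations (adjust the paths to your installation):

```shell
# Puts icc/ifort on PATH and sets LD_LIBRARY_PATH for the compilers.
source /opt/intel/composer_xe_2015/bin/compilervars.sh intel64
# Add the Intel MKL libraries as well.
export LD_LIBRARY_PATH=/opt/intel/composer_xe_2015/mkl/lib/intel64:$LD_LIBRARY_PATH
# Both compilers should now resolve.
which icc
which ifort
```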

Make sure that the C/C++ and Fortran compilers are installed and in your PATH. Also set LD_LIBRARY_PATH to include your compiler (C/C++ and Fortran) and Intel MKL libraries.

**Step 4 - Building and Installing NumPy**

Change directory to numpy-x.x.x

Create a site.cfg by copying the existing site.cfg.example

Edit site.cfg as follows:

Add the following lines to site.cfg in your top-level NumPy directory to use Intel® MKL. This assumes you are building on an Intel 64 platform and that Intel MKL was installed in the default path by Intel Parallel Studio XE or Intel Composer XE:

[mkl]
library_dirs = /opt/intel/composer_xe_2015/mkl/lib/intel64
include_dirs = /opt/intel/composer_xe_2015/mkl/include
mkl_libs = mkl_rt
lapack_libs =

If you are building NumPy for 32-bit, add the following instead:

[mkl]
library_dirs = /opt/intel/composer_xe_2015/mkl/lib/ia32
include_dirs = /opt/intel/composer_xe_2015/mkl/include
mkl_libs = mkl_rt
lapack_libs =

Modify the self.cc_exe line in numpy/distutils/intelccompiler.py, depending on whether you are building 32-bit or 64-bit. For example, if you are building 64-bit, modify this line in the IntelEM64TCCompiler class (whose compiler_type is 'intelem'):

self.cc_exe = 'icc -O3 -g -fPIC -fp-model strict -fomit-frame-pointer -openmp -xhost'

Here, -O3 enables optimizations for speed plus more aggressive loop transformations such as fusion, block-unroll-and-jam, and collapsing of IF statements; -openmp enables OpenMP threading; and -xhost tells the compiler to generate instructions for the highest SIMD instruction set available on the compilation host processor. If you are using the ILP64 interface, add the -DMKL_ILP64 compiler flag.

Run icc --help for more information on processor-specific options, and refer to the Intel Compiler documentation for more details on the various compiler flags.

Modify the Fortran compiler configuration in numpy-x.x.x/numpy/distutils/fcompiler/intel.py to use the following compiler options for the Intel Fortran Compiler:

For ia32 and Intel64

ifort -xhost -openmp -fp-model strict -fPIC

If you are using the latest source, this is already modified in intel.py. You may explore using other compiler optimization flags.

If you are using the ILP64 interface of Intel MKL, add the -i8 flag above. If you are using older versions of NumPy/SciPy, refer to the **intel.py** from the latest version of NumPy, which can be dropped in to pick up the compiler options mentioned above.

Compile and install NumPy with the Intel compiler. For Intel64 platforms, run:

$python setup.py config --compiler=intelem build_clib --compiler=intelem build_ext --compiler=intelem install

and for the ia32 builds:

$python setup.py config --compiler=intel build_clib --compiler=intel build_ext --compiler=intel install

The only difference is the compiler name: "intel" for ia32 and "intelem" for Intel64.

You may pass

--prefix=<install_dir>

if you want to install into a directory of your choice. In that case, after a successful NumPy build, export the PYTHONPATH environment variable to point to your install folder:

$export PYTHONPATH=<install_dir>/lib64/pythonx.x/site-packages
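After installation, you can verify that the MKL-backed NumPy is the one being imported; in a successful MKL build, show_config() should report mkl_rt in its library lists (illustrative sketch):

```python
import numpy as np

# Print where NumPy was imported from and how it was configured.
print(np.__file__)
print(np.__version__)
np.show_config()  # an MKL build lists mkl_rt here
```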

**Build and Install SciPy**

Compile and install SciPy with the Intel compilers. For 64-bit builds:

$python setup.py config --compiler=intelem --fcompiler=intelem build_clib --compiler=intelem --fcompiler=intelem build_ext --compiler=intelem --fcompiler=intelem install

and for the ia32 builds:

$python setup.py config --compiler=intel --fcompiler=intel build_clib --compiler=intel --fcompiler=intel build_ext --compiler=intel --fcompiler=intel install

**Setup Library path for Intel MKL and Intel Compilers**

If you built NumPy/SciPy for Intel64 platforms:

$export LD_LIBRARY_PATH=/opt/intel/composer_xe_2015/mkl/lib/intel64:/opt/intel/composer_xe_2015/lib/intel64:$LD_LIBRARY_PATH

If you built NumPy for ia32 platforms:

$export LD_LIBRARY_PATH=/opt/intel/composer_xe_2015/mkl/lib/ia32:/opt/intel/composer_xe_2015/lib/ia32:$LD_LIBRARY_PATH

LD_LIBRARY_PATH can cause problems if you have installed Intel MKL and Intel Composer XE in directories other than the standard ones. The only solution we have found that always works is to build Python, NumPy, and SciPy inside an environment where the LD_RUN_PATH variable is set, e.g. for the ia32 platform:

$export LD_RUN_PATH=/opt/intel/composer_xe_2015/lib/ia32:/opt/intel/composer_xe_2015/mkl/lib/ia32

**Note:** We recommend using arrays with 'C' (row-major) ordering, which is the NumPy default, rather than Fortran-style (column-major) ordering; NumPy uses CBLAS, and row-major arrays give better performance.

**Appendix A: Example**

Below is an example Python script for matrix multiplication, provided for illustration, that exercises NumPy installed with Intel MKL.

```python
import numpy as np
import time

N = 6000
M = 10000

k_list = [64, 80, 96, 104, 112, 120, 128, 144, 160, 176, 192, 200, 208, 224, 240, 256, 384]

def get_gflops(M, N, K):
    return M*N*(2.0*K-1.0) / 1000**3

np.show_config()

for K in k_list:
    a = np.array(np.random.random((M, N)), dtype=np.double, order='C', copy=False)
    b = np.array(np.random.random((N, K)), dtype=np.double, order='C', copy=False)
    A = np.matrix(a, dtype=np.double, copy=False)
    B = np.matrix(b, dtype=np.double, copy=False)
    C = A*B  # warm-up multiplication
    start = time.time()
    C = A*B
    C = A*B
    C = A*B
    C = A*B
    C = A*B
    end = time.time()
    tm = (end-start) / 5.0
    print('{0:4}, {1:9.7}, {2:9.7}'.format(K, tm, get_gflops(M, N, K) / tm))
```

**Appendix B: Performance Comparison**

Please click Examples.py to download the examples for LU, Cholesky and SVD.

Please note that all the charts in this article were generated with the Intel MKL 11.1 update 1 version.

**Appendix C: Known Issues**

When the -O3 or -O2 (default) compiler flags, which enable more aggressive compiler optimizations, are used with ifort, one of the SciPy tests may fail. This is a known corner case; as a workaround, you can use -O1.

**Building with GNU Compiler chain:**

Make the modifications to the MKL section in site.cfg as described above. To build NumPy and SciPy with GNU compilers, you must link with mkl_rt only in site.cfg; any other linking method will not work.

Export the compiler flags as:

$export CFLAGS="-fopenmp -m64 -mtune=native -O3 -Wl,--no-as-needed"

$export CXXFLAGS="-fopenmp -m64 -mtune=native -O3 -Wl,--no-as-needed"

$export LDFLAGS="-ldl -lm"

$export FFLAGS="-fopenmp -m64 -mtune=native -O3"

Then run the config, build, install commands for both numpy and scipy from their respective source folders.

If you want to use GNU OpenMP instead of Intel OpenMP, you should set MKL_THREADING_LAYER=GNU.
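For example (the script name below is a placeholder), assuming a GNU-compiler build linked against mkl_rt:

```shell
# Tell mkl_rt to use GNU OpenMP threading at run time.
export MKL_THREADING_LAYER=GNU
python my_script.py   # my_script.py is a hypothetical name for your own code
```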

Since both NumPy and SciPy provide linear algebra functions, users can call either the NumPy BLAS or the SciPy BLAS, but not both; using both at the same time is not supported by Intel MKL and may lead to crashes.
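In practice this means a script should pick one front end and use it consistently; a NumPy-only sketch (illustrative, not from the original note):

```python
import numpy as np

# Use numpy.linalg consistently rather than mixing with scipy.linalg.
a = np.array([[2.0, 1.0],
              [1.0, 2.0]])
w = np.linalg.eigvalsh(a)  # eigenvalues of this symmetric matrix: 1 and 3
print(w)
```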

When using scipy BLAS, you must set MKL_INTERFACE_LAYER=GNU.

The above environment variables, MKL_THREADING_LAYER and MKL_INTERFACE_LAYER, are supported only in Intel MKL 11.1 update 3 and above.

**Related Links and Troubleshooting**

Building Numpy/Scipy with Intel® MKL and Intel® Compilers on Windows

## Comments (15)

Gary B. said on

@Raffaella. I've documented how I installed Numpy/Scipy with openmp/icc/mkl_rt on a source-based GNU/Linux distribution but I can't get the URL past this site's spam filter. If you google `gentoo numpy multiprocessing', it's the first hit. I know of no non-suboptimal alternatives to mkl_rt.

RD said on

Hello,

Could you kindly post instructions for building numpy and scipy with no host-specific optimization? We have a cluster with heterogeneous CPUs (all 64bit). Could you also give an alternative to mkl_rt?

Thanks,

Raffaella.

Vipin Kumar E K... said on

@Gary

Thanks for pointing out the broken link; it's fixed now. Yes, Intel MKL is no longer available as a standalone product and is part of the suite products.

For students, we still (as of 1/9/2015) offer free C++ tools as mentioned on the education offerings page here:

https://software.intel.com/en-us/intel-education-offerings/

--Vipin

Gary B. said on

Thanks for this very useful post! Working as an academic, I was hoping to download a non-commercial version of the MKL library (or rather emerge it using Gentoo's Portage, which requires a licence file). However under the listed pre-requisites, it says:

`PrerequisitesIntel MKL can be obtained from the following options: Download a FREE evaluation version of the Intel MKL product.

Download the FREE non-commercial* version of the Intel MKL product.

All of these can be obtained at: Intel® Math Kernel Library product web page. '

Unfortunately, the link is broken and it seems the MKL is no longer treated as an individual Intel product. I realise it does come with Studio/Parallel XE and other products that are not free for academic use. Therefore I'm just posting to confirm whether or not there is presently a free non-commercial version of the Intel MKL product that academics can use to make the most out of NumPy?

Christoph Gohlke said on

Scipy fails many tests on Windows when using the -O2 flag for ifort (the default since numpy 1.8) instead of -O1. See <https://github.com/scipy/scipy/issues/3306>.

Vipin Kumar E K... said on

@John

There is an issue with the ifort optimization flag -O2, as reported by some users: https://github.com/scipy/scipy/issues/3340. If you use -O1, it should pass. We are working on this error and will fix it soon.

--Vipin

Vipin Kumar E K... said on

Hey talcite!

Thanks for pointing out that error; we have fixed it now.

Vipin

talcite said on

It looks like there's a mistake in the 64bit site.cfg portion of the article.

It currently reads:

library_dirs = /opt/intel/mkl/composer_xe_2013/lib/intel64

and it should read

library_dirs = /opt/intel/composer_xe_2013/lib/intel64

Took me awhile to figure out why it wasn't linking BLAS and LAPACK.

Thanks for the article anyways!

John P. said on

After building both numpy and scipy using the instructions above, I get the error when importing scipy:

ImportError: /usr/lib/atlas/libblas.so.3gf: undefined symbol: _gfortran_st_write_done

http://www.scipy.org/Installing_SciPy/BuildingGeneral states that I should NOT use ifort when building numpy ... otherwise I'll get this error ... which defeats the point of using the Intel compilers, right?

Any suggestions on how to fix this problem?

Thanks!

John

Craig Finch said on

I used this page as a guide to build NumPy with Intel Composer 2013 and the latest MKL. I would like to clarify the modifications that need to be made to the distutils files:

When you edit intelccompiler.py, make sure you are editing the section of the file that corresponds to the command-line flag that you will use when you run setup.py. For example, for a 64-bit system, you need to edit the class called IntelEM64TCCompiler and set self.cc_exe in its __init__ function.

Then modify the file numpy/distutils/fcompiler/intel.py. Again, modify the class that corresponds to the command-line flags you will use to build NumPy; for a 64-bit system, that is the class IntelEM64TFCompiler. Look for the function called get_flags_arch and add the command-line options there. You should then build NumPy with the command-line flags described above.

Thanks for posting this article! More information can be found on my blog (see link below).
