Using Intel® Math Kernel Library with MathWorks* MATLAB* on Intel® Xeon Phi™ Coprocessor System


This guide is intended to help developers use the latest version of Intel® Math Kernel Library (Intel® MKL) with MathWorks* MATLAB* on Intel® Xeon Phi™ Coprocessor System.

Intel MKL is a computational math library designed to accelerate application performance and reduce development time. It includes highly optimized and threaded dense and sparse linear algebra routines, fast Fourier transform (FFT) routines, vector math routines, statistical functions, and more for Intel processors and coprocessors.

MATLAB is an interactive software programming environment for numerical computation and visualization. Internally, MATLAB uses Intel MKL Basic Linear Algebra Subprograms (BLAS) and Linear Algebra PACKage (LAPACK) routines to perform the underlying computations when running on Intel processors.

Intel MKL now includes a new Automatic Offload (AO) feature that enables computationally intensive Intel MKL functions to offload part of their workload to attached Intel Xeon Phi coprocessors automatically and transparently.

As a result, MATLAB performance can benefit from Intel Xeon Phi coprocessors via the Intel MKL AO feature when problem sizes are large enough to amortize the cost of transferring data to the coprocessors. This article describes how to enable Intel MKL AO when Intel Xeon Phi coprocessors are present within a MATLAB computing environment.

Note: Changing the default system configuration to enable Intel MKL AO within a MATLAB computing environment may not be supported by MathWorks. Please contact MathWorks directly to confirm current or future support for Intel Xeon Phi coprocessors.



Prior to getting started, obtain access to the following software and hardware:

  1. The latest version of Intel MKL (v. 11.2 Update 3) or Intel® Composer XE, which includes the Intel® C/C++ Compiler, Intel MKL, Intel IPP, and Intel TBB libraries; a free 30-day evaluation copy is also available
  2. The latest version of MATLAB* (MATLAB R2015a)
  3. An Intel Xeon Phi coprocessor development system

The 64-bit versions of Intel MKL and MATLAB should be installed, at a minimum, on the development system. This article was created based on MATLAB R2015a and Intel MKL 11.2 Update 3 for Windows* on such a system.

Below is an outline of the steps performed. Click here to get the whole article.

Steps:

Intel MKL has supported the Intel Xeon Phi coprocessor since release 11.0 for Linux* OS and since release 11.1 for Windows* OS.

Step 1: Determine which version of Intel MKL is used within MATLAB via the MATLAB command “version -blas”

  • Intel MKL version 11.0.5 is used by MATLAB R2014a
  • Intel MKL version 11.1.1 is used by MATLAB R2015a

Example:

>> version -release
ans = 2015a

>> version -blas
ans = Intel(R) Math Kernel Library Version 11.1.1 Product Build 20131010 for Intel(R) 64 architecture applications

Step 2: Enable Intel MKL Automatic Offload (AO) in MATLAB via MKL_MIC_ENABLE 

  • Set BLAS_VERSION=mkl_rt.dll and LAPACK_VERSION=mkl_rt.dll
  • Set MKL_MIC_MAX_MEMORY=16G (**) and MKL_MIC_ENABLE=1

Step 3: Verify the Intel MKL version and ensure that AO is enabled on the Intel Xeon Phi coprocessors

  • Run the version -blas, version -lapack, and getenv('MKL_MIC_ENABLE') commands and check the output, as shown in the sketch below
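
A minimal sketch of this verification, run from the MATLAB command window after MATLAB has been started with the environment variables from Step 2 (the exact version string depends on the installed MATLAB and Intel MKL releases):

% Confirm which BLAS/LAPACK libraries MATLAB is using
version -blas
version -lapack

% Confirm that the Automatic Offload settings are visible inside MATLAB
getenv('MKL_MIC_ENABLE')      % expected: '1'
getenv('MKL_MIC_MAX_MEMORY')  % expected: '16G' (if set as in Step 2)
getenv('BLAS_VERSION')        % expected: 'mkl_rt.dll'
getenv('LAPACK_VERSION')      % expected: 'mkl_rt.dll'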

Step 4: Compare performance 

  • Accelerate the commonly used matrix multiplication A*B in MATLAB
  • Accelerate the BLAS function dgemm() in MATLAB (optional; click the Download button to get the matrixMultiplyM.c file)


Intel MKL provides the Automatic Offload (AO) feature for the Intel Xeon Phi coprocessor. With AO, certain Intel MKL functions can transfer part of the computation to the Intel Xeon Phi coprocessor automatically. When problem sizes are large enough to amortize the cost of data transfer, the performance of these functions benefits from using both the host CPU and the Intel Xeon Phi coprocessor for computation. Because offloading happens transparently with AO, third-party software that uses Intel MKL functions can benefit from this feature automatically, making it easy to run faster on systems with Intel Xeon Phi coprocessors.

This article describes how to enable Intel MKL AO for MathWorks MATLAB on an Intel Xeon Phi coprocessor system. The general steps are as follows:

1.   Source the environment for the Intel 64 architecture using the environment script that ships with Intel MKL or Intel® Composer XE, for example:

> mklvars.bat intel64    (or compilervars.bat intel64)

2.    Enable the Intel MKL AO option by setting the required environment variables, each with its own set command (the Windows command shell does not treat ";" as a command separator):

> set BLAS_VERSION=mkl_rt.dll
> set LAPACK_VERSION=mkl_rt.dll
> set MKL_MIC_ENABLE=1
> set MKL_MIC_MAX_MEMORY=16G

3.    Run matlab.exe 

>"C:\Program Files\MATLAB\R2014b\bin\matlab.exe"



A simple test shows that, on a system with two Intel Xeon Phi coprocessors, the commonly used matrix multiplication in MATLAB (C = A*B) achieves a 2.9x speedup when Intel MKL AO is enabled, compared to doing the same computation on the CPU only.
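
The timing comparison can be sketched in MATLAB roughly as follows. This is a minimal sketch, not the downloadable test code, and the matrix size n = 10000 is an assumption chosen to be large enough for AO to engage:

% Time C = A*B; with MKL_MIC_ENABLE=1 set before MATLAB starts, Intel MKL
% automatically offloads part of the multiplication to the coprocessor(s).
n = 10000;           % assumed problem size; must be large to amortize data transfer
A = rand(n);
B = rand(n);
A*B;                 % warm-up run (first call pays one-time initialization cost)
tic;
C = A*B;
t = toc;
fprintf('n = %d: %.2f seconds, %.1f GFLOPS\n', n, t, 2*n^3/t/1e9);

Running the script once with MKL_MIC_ENABLE unset (CPU only) and once with MKL_MIC_ENABLE=1 gives the two timings whose ratio is the speedup reported above.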

Click the link below to get the whole article and test code.

* MathWorks* and MATLAB* are trademarks or registered trademarks of The MathWorks, Inc.

** MKL_MIC_MAX_MEMORY specifies the maximum coprocessor memory reserved for AO computations across all of the Intel Xeon Phi coprocessors in the system. Please check how much RAM is available on your specific coprocessor model before setting this environment variable.

Downloads are available under the Intel Sample Source Code License Agreement. Download now.