Intel® Math Kernel Library

Calling Python Developers - High performance Python powered by Intel MKL is here!

We are introducing a Technical Preview of the Intel® Distribution for Python*, with packages such as NumPy* and SciPy* accelerated using Intel MKL. Python developers can now enjoy significantly improved performance of many mathematical and linear algebra functions, with up to ~100x speedups in some cases compared to vanilla Python distributions. The technical preview is available to everyone at no cost. Click here to register and download.

Intel Math Kernel Library for Free – requires registration, includes forum support, and permits royalty-free use

Intel MKL is a popular math library used by many to create fast and reliable applications in science, engineering, and finance. Did you know it is now available for free (at no cost)? The community licensing program gives anyone, whether an individual or an organization, a free license for the latest version of Intel MKL. There is no royalty for distributing the library with an application. The only restrictions are:

Intel® Math Kernel Library 11.3 update 1 is now available

Intel® Math Kernel Library (Intel® MKL) is a highly optimized, extensively threaded, and thread-safe library of mathematical functions for engineering, scientific, and financial applications that require maximum performance. Intel MKL 11.3 Update 1 packages are now ready for download. Intel MKL is available as part of Intel® Parallel Studio XE and Intel® System Studio.

Intel® MKL Cookbook Recipes

Intel MKL Users,

We would like to introduce a new feature, the Intel® MKL Cookbook, an online document with recipes for assembling Intel MKL routines to solve complex problems. Please give us your valuable feedback on these Cookbook recipes, and let us know if you would like us to add more recipes or improve the existing ones.

Thank you for evaluating,

Intel MKL Team

Forum poll: Intel MKL and threading

Intel MKL users,

We would like to hear from you about how you are using Intel MKL with threading. Do you use the parallel or the sequential MKL? How do your multithreaded applications use MKL? We would appreciate it if you could complete a short survey; it takes no more than 5 minutes. Your feedback will help us make Intel MKL a better product. Thanks!

Survey link:


Eigenvalue routine heevr goes wrong very weirdly

I am trying to use the official example code for the heevr routine in LAPACK. When I changed the range parameter from 'V' to 'A' and commented out il and iu, the first two eigenvalues incorrectly came out as zero. Below is my slightly modified example code. I am using icpc 13.1.3 and running on Linux. Thanks in advance.

#include <stdlib.h>
#include <stdio.h>
#include "mkl_lapacke.h"

Rank-1 update to LU matrices


I have a matrix Y for which I need LU factors, which I can obtain using DSS interface routines such as dss_factor_complex. However, the diagonal entries of this matrix need to be updated after each iteration of the loop in which I compute the LU factors.

Since Y is very large, the LU factorization takes a lot of computational effort. Is there a way to update the LU factors when Y changes, without redoing the full factorization on every iteration?

I am trying to do this:

iteration 1: Y (original) > reorder > factor (LU) > solve
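
For context, here is a minimal sketch of the DSS call sequence described above, under the assumption that only the numerical values of Y change between iterations (the sparsity pattern stays fixed), so the symbolic reordering can be reused while the numerical factorization is repeated. The function and array names are illustrative, not from the original post.

/* Sketch only: reorder once, then refactor and solve inside the loop when
 * the CSR values of Y (rowIndex/columns/values) are updated.
 * Error checking on the DSS return codes is omitted for brevity. */
#include "mkl_dss.h"
#include "mkl_types.h"

void refactor_loop(MKL_INT nRows, MKL_INT nNonZeros,
                   MKL_INT *rowIndex, MKL_INT *columns,
                   MKL_Complex16 *values,          /* nonzeros of Y, updated per iteration */
                   MKL_Complex16 *rhs, MKL_Complex16 *solution,
                   int nIterations)
{
    _MKL_DSS_HANDLE_t handle;
    MKL_INT opt  = MKL_DSS_DEFAULTS;
    MKL_INT sym  = MKL_DSS_NON_SYMMETRIC_COMPLEX;
    MKL_INT type = MKL_DSS_INDEFINITE;
    MKL_INT nRhs = 1;

    dss_create(handle, opt);
    dss_define_structure(handle, sym, rowIndex, nRows, nRows, columns, nNonZeros);
    dss_reorder(handle, opt, 0);                   /* pattern is fixed: reorder once */

    for (int it = 0; it < nIterations; it++) {
        /* ... update the diagonal entries stored in values[] here ... */
        dss_factor_complex(handle, type, values);  /* numerical factorization with new values */
        dss_solve_complex(handle, opt, rhs, nRhs, solution);
    }

    dss_delete(handle, opt);
}

Note that this still performs a full numerical factorization on every iteration; it only avoids repeating the reordering step.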

Using MKL 11.3 with Intel Parallel Studio 2015 XE (Windows)

I am using MKL with Visual Studio and Intel Parallel Studio 2015.

For various reasons I want to stay with the Intel Compiler 2015, but I want to use MKL 11.3. The Intel integrations with Visual Studio make using MKL, IPP, or TBB very easy: you just select them in the "Intel performance libraries" option. The problem is that, when using the Intel Compiler 2015, the MKL that is used for compiling and linking is MKL 11.2.

Is there a 'better' way to select MKL 11.3, apart from manually setting the include and link paths in my projects?
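
As a side note, one way to confirm at run time which MKL version an executable actually linked against is to query the library itself. The sketch below uses the standard MKL service functions mkl_get_version and mkl_get_version_string; the program structure is illustrative.

/* Prints the MKL version the executable is actually linked against,
 * which helps check whether 11.2 or 11.3 was picked up at link time. */
#include <stdio.h>
#include "mkl.h"

int main(void) {
    MKLVersion v;
    mkl_get_version(&v);
    printf("Intel MKL %d.%d update %d (build %s)\n",
           v.MajorVersion, v.MinorVersion, v.UpdateVersion, v.Build);

    char buf[256];
    mkl_get_version_string(buf, (int)sizeof(buf));
    printf("%s\n", buf);
    return 0;
}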
