Intel® Math Kernel Library

Announcing new open source project Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN)

Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) is now available on GitHub as an open source performance library for Deep Learning (DL) applications, intended to accelerate DL frameworks on Intel® architecture. Intel® MKL-DNN includes highly vectorized and threaded building blocks for implementing convolutional neural networks (CNN) with C and C++ interfaces.

Intel® MKL 11.3.3 patch

Two limitations of Intel® Math Kernel Library (Intel® MKL) 11.3 Update 3, listed below, were discovered recently. The official fix for these issues will be available in the next update, Intel MKL 11.3.4.

If you require an immediate Intel MKL update to address these issues, please submit a ticket at Intel® Premier Support for the Intel MKL product.

Known Limitations: 


    Intel® Math Kernel Library 11.3 Update 4 is now available

    Intel® Math Kernel Library (Intel® MKL) is a highly optimized, extensively threaded, and thread-safe library of mathematical functions for engineering, scientific, and financial applications that require maximum performance.

    Deep Neural Network extensions for Intel MKL

        Deep neural network (DNN) applications are growing in importance in various areas, including internet search engines, retail, and medical imaging. Intel recognizes the importance of these workloads and is developing software solutions to accelerate them on Intel® architecture; these solutions will become available in future versions of Intel® Math Kernel Library (Intel® MKL) and Intel® Data Analytics Acceleration Library (Intel® DAAL).

    While we work on this new functionality, we have published a series of articles demonstrating DNN optimizations using the Caffe* framework and the AlexNet topology:

    Calling Python Developers - High-performance Python powered by Intel MKL is here!

    We are introducing a Technical Preview of Intel® Distribution of Python*, with packages such as NumPy* and SciPy* accelerated using Intel MKL. Python developers can now enjoy much improved performance of many mathematical and linear algebra functions, with up to ~100x speedups in some cases compared to vanilla Python distributions. The technical preview is available to everybody at no cost. Click here to register and download.

    Suppressing error messages

    I have been attempting to replace the random number generator in my code with MKL's generator.

    In one subroutine, I would like to check whether a specific stream, for example type(vsl_stream_state) rng, has been initialized. I have not found a straightforward way of doing this with MKL functions. As an alternative, I wanted to use vslGetStreamStateBrng: if the returned value is VSL_ERROR_NULL_PTR, then rng is not initialized.

    Newbie question: is there a way to estimate the memory needed for cluster Pardiso?


    Looking at the SMP version of Pardiso, I may be able to roughly get the maximum memory from iparm[15]-[17], but these are not used for the cluster version?

    Is there a way to get the estimated memory needed for the solve phase, per process or in total, ideally before the factorization phase?



    Subscribe to Intel® Math Kernel Library