Intel® Math Kernel Library

Intel® MKL 2018 Beta Update 1 is now available

Intel® MKL 2018 Beta is now available as part of Intel® Parallel Studio XE 2018 Beta.

See the "Join the Intel® Parallel Studio XE 2018 Beta program" post to learn how to join the Beta program and provide your feedback.

What's New in Intel® MKL 2018 Beta Update 1:

BLAS:

  • Addressed an early release buffer issue in threaded *GEMV
  • Improved TBB-threaded *GEMM performance when m and n are small and k is large

DNN:

Announcing new open source project Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN)

Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) is now available on GitHub (https://github.com/01org/mkl-dnn) as an open-source performance library for deep learning (DL) applications, intended to accelerate DL frameworks on Intel® architecture. Intel® MKL-DNN includes highly vectorized and threaded building blocks for implementing convolutional neural networks (CNN) with C and C++ interfaces.

Intel® MKL 11.3.3 patch

Two limitations of Intel® Math Kernel Library (Intel® MKL) 11.3 Update 3, listed below, were discovered recently. An official fix for these issues will be available in the next update, Intel MKL 11.3.4.

If you require an immediate Intel MKL update to address these issues, please submit a ticket at Intel Premier Support (https://premier.intel.com) for the Intel MKL product.

Known Limitations: 

  • How can I interrupt/abort LAPACK and BLAS routines that do not support a callback?

    I'm computing some SVDs and other time-consuming things using the MKL C libraries.

    I've found that some routines implement a progress callback (https://software.intel.com/en-us/mkl-developer-reference-c-mkl-progress), but that does not seem to be the case for the calls I'm interested in (_gesvd, _gesdd, _gemm, _imatcopy).
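For the routines that do support it, progress reporting works by providing a user-defined mkl_progress function, which MKL resolves at link time and calls periodically; per the linked reference, a non-zero return value requests that supporting routines (such as MKL PARDISO) stop. A minimal sketch of such a callback follows, simulated here without linking against MKL; as the poster notes, _gesvd, _gesdd, _gemm, and _imatcopy do not call it, so they cannot be interrupted this way.

```c
#include <stdio.h>

/* Sketch only: when linked into a program using MKL, the library picks
   up this user-defined mkl_progress symbol and calls it from routines
   that support progress reporting (e.g. MKL PARDISO). */
static volatile int abort_requested = 0;

int mkl_progress(int *thread, int *step, char *stage, int lstage)
{
    /* stage is a character buffer of length lstage (not necessarily
       NUL-terminated), hence the %.*s precision specifier. */
    printf("thread %d, step %d, stage %.*s\n",
           *thread, *step, lstage, stage);
    return abort_requested ? 1 : 0;  /* non-zero asks MKL to stop */
}
```

Setting abort_requested from another thread (e.g. a UI handler) would then request cancellation the next time a supporting routine reports progress; routines outside the supported list never invoke the callback at all.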

    Hyper-Threading and CPU usage

    Hi everyone,

    I tried LAPACKE_dgels and changed no thread-number settings at all. I guess the default thread number (the same as the physical core count) is used. Watching the CPU usage while the code runs, it peaks at 50%. I guess that means using 50% of the CPU made the calculation run as fast as it could, and using more than 50% of the CPU via hyper-threading would only slow it down? Do I understand this right?


    .NET Memory Usage - MKL under .NET

    As every .NET developer knows, memory usage is managed by the Garbage Collector (GC). This layer determines when memory is released and how to reorganize it. It allocates space for each thread separately and avoids conflicts.

    Because of this, we programmers often don't know exactly what really happens at this level, in detail.

    In general, this is enough, because the GC was built to let developers concentrate on higher levels.
