Intel® MKL version 2018 Update 3 is now available

Intel® Math Kernel Library (Intel® MKL) is a highly optimized, extensively threaded, and thread-safe library of mathematical functions for engineering, scientific, and financial applications that require maximum performance.

Intel MKL 2018 Update 3 packages are now ready for download.

Intel MKL is available as part of Intel® Parallel Studio XE and Intel® System Studio. Please visit the Intel® Math Kernel Library product page.

For what's new in Intel MKL 2018 and in Intel MKL 2018 Update 3, see the release notes: https://software.intel.com/en-us/articles/intel-math-kernel-library-release-notes-and-new-features

 


What’s New in Intel® Math Kernel Library (Intel® MKL) version 2018 Update 3:

  • BLAS

    • Addressed ?TRMM NaN propagation issues on Intel® Advanced Vector Extensions 512 (Intel® AVX-512) for 32-bit architectures.
    • Improved performance of multithreaded {S,D}SYRK and {C,Z}HERK for small sizes on Intel® Advanced Vector Extensions 2 (Intel® AVX2) and Intel® Advanced Vector Extensions 512 (Intel® AVX-512).
  • LAPACK:

    • Added ?POTRF and ?GEQRF optimizations for the Intel® Advanced Vector Extensions 2 (Intel® AVX2) and Intel® Advanced Vector Extensions 512 (Intel® AVX-512) instruction sets.
    • Improved the performance of ?GESVD for very small square matrices (N<6).
    • Improved the performance of the inverse routines ?TRTRI, ?GETRI, and ?POTRI.
  • SparseBLAS:

    • Improved the performance of the SPARSE_OPTIMIZE, SPARSE_SV, and SPARSE_SYPR routines with Intel® TBB threading.
    • Added support for the BSR format in the SPARSE_SYPR routine.
  • Library Engineering:

    • Added the ability to write MKL_VERBOSE output to a user-specified file.
    • Enabled optimizations for Intel® Advanced Vector Extensions 512 (Intel® AVX-512) instruction set with support of Vector Neural Network Instructions via MKL_ENABLE_INSTRUCTIONS.

Known Limitations:

When the leading dimension of matrix A is not equal to the number of rows or columns, the MKL_?GEMM_COMPACT functions can return incorrect results when executed on a processor that does not support Intel® AVX2 or Intel® AVX-512 instructions.

I remember seeing a post about using MKL to multiply many small matrices by the same matrix. It showed that in this case you can approach the performance of large matrix multiplications even though the individual matrices are small.

Where can I access it?

You may try the batch mode option, cblas_?gemm_batch.
