Intel® Math Kernel Library

Announcing a new open source project: Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN)

Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) is now available on GitHub (https://github.com/01org/mkl-dnn) as an open source performance library for Deep Learning (DL) applications, intended to accelerate DL frameworks on Intel® architecture. Intel® MKL-DNN includes highly vectorized and threaded building blocks for implementing convolutional neural networks (CNN) with C and C++ interfaces.
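
For context, here is a minimal sketch of the direct convolution that such building blocks compute. This is plain illustrative C++, not the Intel MKL-DNN API; the function name and the simplified shape handling (one image, one channel, unit stride, no padding) are assumptions made for the example:

    #include <cstdio>
    #include <vector>

    // Naive direct 2D convolution -- the computation that MKL-DNN's
    // vectorized, threaded CNN primitives perform at high performance.
    std::vector<float> conv2d(const std::vector<float>& src, int H, int W,
                              const std::vector<float>& wei, int KH, int KW) {
        const int OH = H - KH + 1, OW = W - KW + 1;
        std::vector<float> dst(OH * OW, 0.0f);
        for (int oh = 0; oh < OH; ++oh)
            for (int ow = 0; ow < OW; ++ow)
                for (int kh = 0; kh < KH; ++kh)
                    for (int kw = 0; kw < KW; ++kw)
                        dst[oh * OW + ow] +=
                            src[(oh + kh) * W + (ow + kw)] * wei[kh * KW + kw];
        return dst;
    }

    int main() {
        std::vector<float> src(5 * 5, 1.0f);   // 5x5 input, all ones
        std::vector<float> wei(3 * 3, 1.0f);   // 3x3 kernel, all ones
        auto dst = conv2d(src, 5, 5, wei, 3, 3);
        std::printf("dst[0] = %g\n", dst[0]);  // 9: sum of one 3x3 window
        return 0;
    }

A production library replaces these scalar loops with blocked, SIMD-vectorized kernels and threads them across cores, which is what Intel MKL-DNN provides.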

Intel® MKL 11.3.3 patch

Two limitations of Intel® Math Kernel Library (Intel® MKL) 11.3 Update 3 were discovered recently and are listed below. The official fix for these issues will be available in the next update, Intel MKL 11.3.4.

If you require an immediate Intel MKL update to address these issues, please submit a ticket at Intel Premier Support (https://premier.intel.com) for the Intel MKL product.

Known Limitations:

  • {S,D}GEMM on Intel® Advanced Vector Extensions 2 (Intel® AVX2) code paths
  • Intel MKL PARDISO

    Intel® Math Kernel Library 11.3 Update 4 is now available

    Intel® Math Kernel Library (Intel® MKL) is a highly optimized, extensively threaded, and thread-safe library of mathematical functions for engineering, scientific, and financial applications that require maximum performance.
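
    As a minimal illustration of calling one of these functions, the standard CBLAS interface to DGEMM is shown below (the exact link line for MKL depends on your configuration):

        #include <mkl.h>     // Intel MKL header exposing the CBLAS interface
        #include <cstdio>

        int main() {
            const MKL_INT m = 2, n = 2, k = 2;
            // Row-major 2x2 matrices; computes C = alpha*A*B + beta*C.
            const double A[] = {1, 2, 3, 4};
            const double B[] = {5, 6, 7, 8};
            double C[]       = {0, 0, 0, 0};
            cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                        m, n, k, 1.0, A, k, B, n, 0.0, C, n);
            std::printf("C = [%g %g; %g %g]\n",
                        C[0], C[1], C[2], C[3]);  // [19 22; 43 50]
            return 0;
        }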

    Deep Neural Network extensions for Intel MKL

        Deep neural network (DNN) applications are growing in importance in a variety of areas, including internet search engines, retail, and medical imaging. Intel recognizes the importance of these workloads and is developing software solutions to accelerate them on Intel architecture; these will become available in future versions of Intel® Math Kernel Library (Intel® MKL) and Intel® Data Analytics Acceleration Library (Intel® DAAL).

    While we work on this new functionality, we have published a series of articles demonstrating DNN optimizations with the Caffe* framework and the AlexNet topology.

    Calling Python Developers - High performance Python powered by Intel MKL is here!

    We are introducing a Technical Preview of the Intel® Distribution for Python*, with packages such as NumPy* and SciPy* accelerated using Intel MKL. Python developers can now enjoy greatly improved performance of many mathematical and linear algebra functions, with speedups of up to ~100x in some cases compared with vanilla Python distributions. The technical preview is available to everyone at no cost. Click here to register and download.

    How to run the Intel® Optimized MP LINPACK Benchmark on a KNL platform?

    My KNL platform is based on an Intel(R) Xeon Phi(TM) CPU 7210 @ 1.30 GHz: 1 node, 64 cores, 64 GB of memory. I have some problems with the LINPACK benchmark.

    Before using the Intel® Optimized MP LINPACK Benchmark for Clusters, I tried HPL 2.2 and the Intel Optimized MP LINPACK Benchmark, and the results were poor: at best 486 Gflops with HPL 2.2 and 683.6404 Gflops with the Intel Optimized MP LINPACK Benchmark. However, the theoretical peak performance is 1 node × 64 cores × 1.3 GHz × 32 FLOPs/cycle = 2662.4 Gflops (each KNL core has two AVX-512 vector units, each retiring an 8-lane double-precision FMA per cycle, hence 2 × 8 × 2 = 32 FLOPs per cycle).
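
    For reference, a small sketch that reproduces the numbers quoted above (all values are taken from the question):

        #include <cstdio>

        int main() {
            // Xeon Phi 7210: 2 AVX-512 VPUs/core, each doing an 8-lane DP FMA
            // (2 FLOPs/lane) per cycle -> 2 * 8 * 2 = 32 FLOPs/cycle/core.
            const double nodes = 1, cores = 64, ghz = 1.3, flops_per_cycle = 32;
            const double peak = nodes * cores * ghz * flops_per_cycle;  // Gflops
            std::printf("theoretical peak : %.1f Gflops\n", peak);      // 2662.4
            std::printf("HPL 2.2          : %.1f%% of peak\n", 486.0    / peak * 100);  // ~18.3%
            std::printf("MP LINPACK       : %.1f%% of peak\n", 683.6404 / peak * 100);  // ~25.7%
            return 0;
        }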
