• 07/14/2017
  • Public Content
Math Kernel Library for Deep Neural Networks

Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) is an open source performance library for Deep Learning (DL) applications, intended to accelerate DL frameworks on Intel® architecture. Intel® MKL-DNN provides highly vectorized and threaded building blocks for implementing convolutional neural networks (CNN), with C and C++ interfaces.
For more information and documentation on Intel® MKL-DNN, see https://01.org/mkl-dnn/. The sources can be downloaded from https://github.com/01org/mkl-dnn.
Examples
Ref-OS-IoT comes with Intel® MKL-DNN pre-integrated, including pre-compiled versions of some of the examples (https://github.com/01org/mkl-dnn/tree/master/examples).
These examples can easily be compiled and run directly in Ref-OS-IoT using the following commands (replace names in angle brackets with the actual file names):
For C examples:
gcc -Wall <example>.c -lmkldnn -o <cexample>
./<cexample>
For C++ examples:
g++ -std=c++11 <example>.cpp -lmkldnn -o <cppexample>
./<cppexample>
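As a minimal illustration of a program that can be built with the C command line above, the following sketch creates and destroys a CPU engine using the MKL-DNN C API. This is not one of the shipped examples; it is a minimal program written under the assumption of the v0.x C API (mkldnn_engine_create, mkldnn_engine_destroy), and it serves only to verify that the library links and initializes correctly.

```c
/* engine_check.c - minimal MKL-DNN sanity check (hypothetical file name).
 * Build and run the same way as the shipped C examples:
 *   gcc -Wall engine_check.c -lmkldnn -o engine_check
 *   ./engine_check
 */
#include <stdio.h>
#include <mkldnn.h>

int main(void) {
    mkldnn_engine_t engine;

    /* Create CPU engine with index 0; every MKL-DNN C call
     * returns a status code that should be checked. */
    if (mkldnn_engine_create(&engine, mkldnn_cpu, 0) != mkldnn_success) {
        fprintf(stderr, "could not create MKL-DNN CPU engine\n");
        return 1;
    }

    printf("MKL-DNN CPU engine created successfully\n");

    /* Release the engine before exiting. */
    mkldnn_engine_destroy(engine);
    return 0;
}
```

If the program prints the success message, the library is installed and linkable, and the real examples from the repository should build with the same command line.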
 

Product and Performance Information

Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Notice revision #20110804