Speaker: Elmoustapha Ould-Ahmed-Vall, Intel
In this talk, we analyze the performance characteristics of Caffe* and TensorFlow* on the Intel® Xeon Phi™ processor x200, the latest processor based on the Intel® Many Integrated Core Architecture (Intel® MIC Architecture). It introduces several state-of-the-art features, including compute cores with two 512-bit vector processing units each and high-bandwidth, on-package multichannel DRAM (MCDRAM), delivering a theoretical peak of 6 teraflops of single-precision and 3 teraflops of double-precision floating-point performance.
We give an overview of the DNN framework architectures and describe the use of Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) APIs to implement different neural network layer computations. We present details on the integration and performance optimization of a few of the compute-intensive layers using Intel® MKL-DNN APIs.
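To make the idea concrete, the sketch below shows the kind of compute-intensive layer work that frameworks such as Caffe delegate to optimized Intel® MKL-DNN primitives: a naive forward pass of a 2-D convolution layer. This is a hypothetical, pure-Python illustration for exposition only, not the MKL-DNN API or the framework's actual implementation; the optimized library replaces these nested loops with vectorized, cache-blocked kernels tuned for the 512-bit vector units.

```python
# Naive 2-D convolution forward pass (valid mode, single channel).
# Illustrative only: an optimized library like Intel MKL-DNN performs
# the same computation with vectorized, cache-blocked kernels.

def conv2d_forward(inp, kernel):
    """Valid-mode 2-D convolution (cross-correlation, as in DNNs)."""
    h, w = len(inp), len(inp[0])
    kh, kw = len(kernel), len(kernel[0])
    out_h, out_w = h - kh + 1, w - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            acc = 0.0
            for u in range(kh):          # slide the kernel window
                for v in range(kw):
                    acc += inp[i + u][j + v] * kernel[u][v]
            out[i][j] = acc
    return out

# Tiny example: 3x3 input, 2x2 kernel -> 2x2 output
x = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
k = [[1, 0],
     [0, 1]]
print(conv2d_forward(x, k))  # [[6.0, 8.0], [12.0, 14.0]]
```

The inner multiply-accumulate loops are exactly where the processor's two 512-bit vector units per core and the MCDRAM bandwidth pay off, which is why these layers are the focus of the optimization work described in the talk.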