The deep learning framework Caffe* has been optimized for Intel® Xeon Phi™ processors. This article provides detailed instructions on how to compile and run Caffe* optimized for Intel® architecture to obtain the best performance on Intel® Xeon Phi™ processors.
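The build and run steps themselves are in the article; purely as a hedged sketch of how one might smoke-test a finished build from Python, the snippet below assumes the pycaffe interface was built alongside the framework, and deploy.prototxt / weights.caffemodel are placeholder model files, not files named in the article.

```python
import caffe  # pycaffe interface built alongside the Intel-optimized Caffe

caffe.set_mode_cpu()  # CPU mode; the optimized math backend is selected at build time

# Placeholder model files; any matching deploy prototxt and weights pair will do.
net = caffe.Net("deploy.prototxt", "weights.caffemodel", caffe.TEST)
print("Blobs in the loaded network:", list(net.blobs.keys()))
```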
In continued efforts to optimize Deep Learning workloads on Intel® architecture, our engineers explore various paths that lead to maximum performance.
This document is designed to help users get started writing code and running MPI applications using the Intel® MPI Library on a development platform that includes the Intel® Xeon Phi™ processor.
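The document's own examples drive the Intel® MPI Library from native code; purely as an illustrative Python-flavored sketch of the same idea, a minimal rank-and-size program using mpi4py (assumed here to be built against the Intel MPI Library) might look like this.

```python
from mpi4py import MPI  # mpi4py can be built against the Intel MPI Library

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's index within the communicator
size = comm.Get_size()   # total number of MPI ranks launched

print("Hello from rank %d of %d" % (rank, size))

# Simple collective: sum each rank's index on rank 0.
total = comm.reduce(rank, op=MPI.SUM, root=0)
if rank == 0:
    print("Sum of ranks:", total)
```

Such a script would typically be launched with something like `mpirun -n 4 python hello_mpi.py`, where hello_mpi.py is a placeholder file name.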
This paper introduces the Artificial Intelligence (AI) community to Intel® optimization for TensorFlow* on Intel® Xeon® and Intel® Xeon Phi™ processor-based CPU platforms.
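As a rough illustration of the kind of CPU-side tuning these optimizations expose, the sketch below uses the TensorFlow 1.x ConfigProto API with placeholder thread counts and OpenMP settings; the specific values are assumptions for illustration, not recommendations from the paper.

```python
import os
import tensorflow as tf

# Hypothetical settings; tune to the core count of the target Xeon or Xeon Phi CPU.
os.environ["OMP_NUM_THREADS"] = "44"   # OpenMP threads for the math libraries
os.environ["KMP_BLOCKTIME"] = "1"      # ms a thread spins after finishing work
os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"

config = tf.ConfigProto(
    intra_op_parallelism_threads=44,   # threads used inside a single op (e.g., a matmul)
    inter_op_parallelism_threads=2)    # independent ops executed concurrently

with tf.Session(config=config) as sess:
    # ... build and run the graph as usual ...
    pass
```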
Get recipes for installing the development tools and libraries for the Python library on various platforms.
Boosting Deep Learning Training & Inference Performance on Intel® Xeon® and Intel® Xeon Phi™ Processors
In this work we present how, without a single line of code change in the framework, we can further boost performance for deep learning training by up to 2X and inference by up to 2.7X on top of the current software optimizations available from open source TensorFlow* and Caffe* on Intel® Xeon® processors.
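One way to picture "no code change in the framework" is runtime tuning applied entirely from outside the training script. The sketch below is a hypothetical example, not the configuration used in this work: an unmodified script (train.py, a placeholder name) is launched with assumed OpenMP/KMP environment settings and batch size.

```python
import os
import subprocess

# Hypothetical runtime settings applied from outside the framework;
# values and script name are placeholders, not the numbers behind the 2X/2.7X results.
env = dict(os.environ,
           OMP_NUM_THREADS="66",
           KMP_BLOCKTIME="0",
           KMP_AFFINITY="granularity=fine,compact,1,0")

# Launch an unmodified TensorFlow or Caffe training script with the tuned environment.
subprocess.run(["python", "train.py", "--batch_size", "128"], env=env, check=True)
```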
This case study evaluates the ability of the TensorFlow* Object Detection API to solve a real-time problem, such as traffic light detection, on Intel® Xeon® processor-based CPU machines.
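For context, inference with the TensorFlow* Object Detection API typically follows the frozen-graph pattern sketched below; the model path, input size, and score threshold are placeholder assumptions rather than details from the case study.

```python
import numpy as np
import tensorflow as tf

# Placeholder model path; the file name follows the Object Detection API convention.
PATH_TO_FROZEN_GRAPH = "ssd_mobilenet_v1_coco/frozen_inference_graph.pb"

detection_graph = tf.Graph()
with detection_graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

with tf.Session(graph=detection_graph) as sess:
    image = np.zeros((1, 300, 300, 3), dtype=np.uint8)  # stand-in for a camera frame
    boxes, scores, classes = sess.run(
        ["detection_boxes:0", "detection_scores:0", "detection_classes:0"],
        feed_dict={"image_tensor:0": image})
    keep = scores[0] > 0.5  # keep detections above an assumed confidence threshold
    print(boxes[0][keep], classes[0][keep])
```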
On November 7, 2017, researchers from UC Berkeley, the University of Texas, and UC Davis published results training ResNet-50* to state-of-the-art accuracy in a record time (as of their publication) of 31 minutes, and AlexNet* in a record time of 11 minutes, on CPUs. These results were obtained on Intel® Xeon® Scalable processors (formerly code-named Skylake-SP).