Intel® Math Kernel Library for Deep Neural Networks: Part 1 – Overview and Installation

Learn how to install and build the library components of the Intel MKL for Deep Neural Networks.
Authored by Bryan B. (Intel) Last updated on 03/11/2019 - 13:17

Intel® Math Kernel Library for Deep Neural Networks: Part 2 – Code Build and Walkthrough

Learn how to configure the Eclipse* IDE to build the C++ code sample, along with a code walkthrough based on the AlexNet deep learning topology for AI applications.
Authored by Bryan B. (Intel) Last updated on 05/23/2018 - 11:00
Blog post

Intel and Facebook* Collaborate to Boost Caffe2 Performance on Intel® CPUs


Authored by Andres Rodriguez (Intel) Last updated on 05/08/2018 - 09:38

Intel® Math Kernel Library Improved Small Matrix Performance Using Just-in-Time (JIT) Code Generation for Matrix Multiplication (GEMM)

    The most commonly used and performance-critical Intel® Math Kernel Library (Intel® MKL) functions are the general matrix multiply (GEMM) functions.

Authored by Gennady F. (Blackbelt) Last updated on 03/21/2019 - 03:01

Maximize TensorFlow* Performance on CPU: Considerations and Recommendations for Inference Workloads

This article describes performance considerations for CPU inference using Intel® Optimization for TensorFlow*.
Authored by Nathan Greeneltch (Intel) Last updated on 07/31/2019 - 12:11

Intel® CPU Outperforms NVIDIA* GPU on ResNet-50 Deep Learning Inference

Intel® Xeon® processors outperform NVIDIA's best GPUs on ResNet-50 inference.
Authored by Haihao Shen (Intel) Last updated on 05/20/2019 - 15:58

Maximize TensorFlow* Performance on CPU: Considerations and Recommendations for Inference Workloads (article in Chinese)

This article describes performance considerations for CPU inference using Intel® Optimization for TensorFlow*.
Authored by Nathan Greeneltch (Intel) Last updated on 08/09/2019 - 02:02

Caffe* Optimized for Intel® Architecture: Applying Modern Code Techniques

This paper demonstrates a special version of Caffe* — a deep learning framework originally developed by the Berkeley Vision and Learning Center (BVLC) — that is optimized for Intel® architecture.
Last updated on 10/15/2019 - 15:30

Migrating Applications from Knights Corner to Knights Landing Self-Boot Platforms

While there are many different programming models for the Intel® Xeon Phi™ coprocessor (code-named Knights Corner, or KNC), this paper lists the more prevalent KNC programming models and discusses the changes needed to port and optimize KNC code for the Intel® Xeon Phi™ processor x200 self-boot platform.
Authored by Michael Greenfield (Intel) Last updated on 10/15/2019 - 16:40