Apache* MXNet* v1.2.0 optimized with Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN)

Introduction

The Apache* MXNet community has announced the v1.2.0 release of the Apache MXNet deep learning framework. One of the most important features in this release is the Intel-optimized CPU backend: MXNet now integrates with the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) to accelerate neural network operators such as Convolution, Deconvolution, FullyConnected, Pooling, Batch Normalization, Activation, LRN, and Softmax, as well as common operators such as sum and concat. More details are available in the release note and release blog. This article explains how to use the new backend and how much faster v1.2.0 is on CPU platforms.

Performance Improvement

Latency optimization

In deployment environments, latency is usually the critical metric, so specific optimizations were applied to reduce latency and improve real-time responsiveness, especially at batch size one.

As the following chart shows, the latency of single-image inference (batch size one) is significantly reduced.

MXNet latency comparison with and without optimizations

Figure 1. NOTE: latency can be calculated as (1000 * batch size / throughput); the unit is milliseconds (ms).
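The conversion in the note above is a one-line calculation; as a quick sanity check, the sketch below applies it to two throughput values taken from the raw-data table later in this article (ResNet-50 with the optimized backend):

```python
def latency_ms(batch_size, throughput_imgs_per_sec):
    """Latency in ms, as defined in the Figure 1 note: 1000 * batchsize / throughput."""
    return 1000.0 * batch_size / throughput_imgs_per_sec

# ResNet-50, optimized backend, from the raw-data table:
print(round(latency_ms(1, 83.7), 1))     # batch size 1  -> 11.9 ms
print(round(latency_ms(32, 199.3), 1))   # batch size 32 -> 160.6 ms for the whole batch
```

Note that at larger batch sizes this gives the latency of the whole batch, which is why batch size one is the figure of merit for real-time serving.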

Throughput improvement

For large batch sizes, such as BS=32, throughput is improved substantially with the Intel-optimized backend.

As the following chart shows, the throughput at batch size 32 is about 23.4-56.9X higher than with the original CPU backend, depending on the topology.

MXNet throughput comparison with and without optimizations

Figure 2.

Batch scalability

The new backend also shows good scalability with batch size. In the chart below, throughput with the original CPU backend stays roughly constant at about 8.5 images/second for ResNet-50 regardless of batch size.

The new implementation scales much better: throughput rises from 83.7 images/second (BS=1) to 199.3 images/second (BS=32) for ResNet-50.

Batch scalability of the optimized CPU backend

Figure 3.

Raw data

Benchmark script: https://github.com/apache/incubator-mxnet/blob/master/example/image-classification/benchmark_score.py

CMD to reproduce the results:

export KMP_AFFINITY=granularity=fine,compact,1,0
export vCPUs=`cat /proc/cpuinfo | grep processor | wc -l`
export OMP_NUM_THREADS=$((vCPUs / 2))
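The commands above pin OpenMP threads with KMP_AFFINITY and set OMP_NUM_THREADS to half the logical CPU count, which equals the number of physical cores when Hyper-Threading is on. A minimal Python equivalent of that arithmetic is sketched below; the value 112 assumes the 2-socket, 56-core test system from the HW configuration section with HT enabled:

```python
import os

def omp_threads(logical_cpus):
    """Half the logical CPUs = physical cores with Hyper-Threading enabled,
    mirroring OMP_NUM_THREADS=$((vCPUs / 2)) from the shell commands above."""
    return logical_cpus // 2

# The 2-socket Intel Xeon Platinum 8180 (56 cores total, HT on) exposes 112 logical CPUs:
os.environ["OMP_NUM_THREADS"] = str(omp_threads(112))
print(os.environ["OMP_NUM_THREADS"])  # -> 56
```

On another machine, `os.cpu_count()` could be used in place of the hard-coded 112.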
MXNet 1.2.0 Performance Comparison without and with Intel Optimizations (images/sec, on 2-socket SKX-8180)

| Batch Size | AlexNet mxnet | AlexNet mxnet-mkl | Speedup | VGG-16 mxnet | VGG-16 mxnet-mkl | Speedup | Inception-BN mxnet | Inception-BN mxnet-mkl | Speedup | ResNet-50 mxnet | ResNet-50 mxnet-mkl | Speedup |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 11 | 415.7 | 37.8 | 2.2 | 94.7 | 43 | 13.4 | 113.4 | 8.5 | 8.5 | 83.7 | 9.8 |
| 2 | 15.4 | 692.3 | 45 | 2.5 | 132.2 | 52.9 | 13.9 | 187.6 | 13.5 | 8.5 | 117.5 | 13.8 |
| 4 | 13.4 | 808.7 | 60.4 | 2.7 | 145.2 | 53.8 | 13.9 | 283.7 | 20.4 | 8.7 | 152.9 | 17.6 |
| 8 | 23.5 | 981 | 41.7 | 2.9 | 156.4 | 53.9 | 14 | 380.1 | 27.2 | 8.7 | 186.3 | 21.4 |
| 16 | 24.5 | 1119.4 | 45.7 | 2.9 | 148.7 | 51.3 | 13.8 | 449.6 | 32.6 | 8.7 | 190.3 | 21.9 |
| 32 | 24.8 | 1411.7 | 56.9 | 2.9 | 134.6 | 46.4 | 13.8 | 500.5 | 36.3 | 8.5 | 199.3 | 23.4 |
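Each speedup column above is simply the ratio of optimized (mxnet-mkl) to original (mxnet) throughput, rounded to one decimal; for example, for the batch size 32 row:

```python
# Throughput values (images/sec) copied from the BS=32 row of the table above.
resnet50_bs32 = {"mxnet": 8.5, "mxnet-mkl": 199.3}
alexnet_bs32 = {"mxnet": 24.8, "mxnet-mkl": 1411.7}

def speedup(row):
    """Speedup = optimized throughput / original throughput."""
    return round(row["mxnet-mkl"] / row["mxnet"], 1)

print(speedup(resnet50_bs32))  # -> 23.4
print(speedup(alexnet_bs32))   # -> 56.9
```

These two values are the endpoints of the 23.4-56.9X range quoted in the throughput section.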

Installation Guide

Install from PyPI

Install prerequisites: wget, python, gcc, and the latest pip (if needed)

$ sudo apt-get update
$ sudo apt-get install -y wget python gcc
$ wget https://bootstrap.pypa.io/get-pip.py && sudo python get-pip.py

Install MXNet with Intel MKL-DNN acceleration

MXNet with the Intel MKL-DNN backend has been released starting with v1.2.0.

$ pip install mxnet-mkl==1.2.0 [--user]

Please note that the mxnet-mkl package is built with USE_BLAS=openblas. If you want to leverage the additional performance boost from MKL BLAS, please build MXNet from source.

Install MXNet without Intel MKL-DNN acceleration

$ pip install mxnet==1.2.0 [--user]

Install from source code

Download MXNet source code from GitHub

$ git clone --recursive https://github.com/apache/incubator-mxnet
$ cd incubator-mxnet
$ git checkout 1.2.0
$ git submodule update --init --recursive

Build with Intel MKL-DNN backend

$ make -j USE_OPENCV=1 USE_MKLDNN=1 USE_BLAS=mkl

Note 1: When running this command, Intel MKL-DNN will be downloaded and built automatically.

Note 2: The MKL2017 backend has been removed from the MXNet master branch, so users can no longer build MXNet with the MKL2017 backend from source.

Note 3: To use MKL as the BLAS library, users may need to install Intel® Parallel Studio for best performance.

Note 4: If MXNet cannot find the MKLML libraries, please add the MKLML library path to LD_LIBRARY_PATH and LIBRARY_PATH first.

HW configuration

Machine: Neon City
CPU Model, Cores, Sockets: Intel® Xeon® Platinum 8180, 56 cores, 2 sockets
CPU Peak TFLOPS (FP32): 8.24T = 2.3G * 56 * 64 (AVX-512)
CPU Config: Turbo on, HT on, NUMA on
RAM Bandwidth: 255 GB/s = 2.66 * 12 * 8 (2666 MHz DDR4)
RAM Capacity: 192 GB = 16 GB * 12 * 1
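The peak-FLOPS figure in the table follows from frequency × cores × FLOPs per cycle; the sketch below reproduces it using the table's own inputs (2.3 GHz assumed AVX-512 frequency, 56 cores total across both sockets, 64 FP32 FLOPs per cycle per core with AVX-512):

```python
freq_ghz = 2.3         # AVX-512 frequency assumed in the HW table
total_cores = 56       # total cores across the 2 sockets, per the table
flops_per_cycle = 64   # FP32 FLOPs per cycle per core with AVX-512

# GFLOPS = GHz * cores * FLOPs/cycle; divide by 1000 for TFLOPS.
peak_tflops = freq_ghz * total_cores * flops_per_cycle / 1000.0
print(round(peak_tflops, 2))  # -> 8.24
```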

System Info

Platform: Linux* 3.10.0-862.6.3.el7.x86_64-x86_64-with-centos-7.4.1708-Core
Kernel: 3.10.0-862.6.3.el7.x86_64
BIOS Vendor: Intel Corporation
BIOS Version: SE5C620.86B.0X.01.0117.021220182317


Notices and Disclaimers

Performance results are based on testing as of July 2018 and may not reflect all publicly available security updates. See configuration disclosure for details. No product can be absolutely secure.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit: http://www.intel.com/performance.

This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications, and roadmaps.

The benchmark results may need to be revised as additional testing is conducted. The results depend on the specific platform configurations and workloads utilized in the testing, and may not be applicable to any particular user’s components, computer system or workloads. The results are not necessarily representative of other benchmarks and other benchmark results may show greater or lesser impact from mitigations.

Intel, the Intel logo, and Xeon are trademarks of Intel Corporation in the U.S. and/or other countries.
Other names and brands may be claimed as the property of others.
© Intel Corporation.

For more complete information about compiler optimizations, see our Optimization Notice.