Chainer* is a Python*-based deep learning framework aiming at flexibility and intuitiveness. It provides automatic differentiation APIs based on the define-by-run approach (a.k.a. dynamic computational graphs) as well as object-oriented high-level APIs to build and train neural networks. It supports various network architectures, including feed-forward nets, convnets, recurrent nets, and recursive nets, as well as per-batch architectures. Forward computation can include any control flow statements of Python without losing the ability to backpropagate, which makes code intuitive and easy to debug. Intel® optimization for Chainer is currently integrated with the latest release of Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) 2017, optimized for Intel® Advanced Vector Extensions 2 (Intel® AVX2) and Intel® Advanced Vector Extensions 512 (Intel® AVX-512) instructions, which are supported in Intel® Xeon® and Intel® Xeon Phi™ processors.
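As a rough illustration of the define-by-run approach (a minimal sketch, not taken from the examples shipped with this package), the graph below is built while the forward pass runs, so an ordinary Python conditional participates in backpropagation:

```python
import numpy as np
import chainer
import chainer.functions as F
import chainer.links as L

class MLP(chainer.Chain):
    def __init__(self):
        super(MLP, self).__init__()
        with self.init_scope():
            self.l1 = L.Linear(None, 100)  # input size inferred at first call
            self.l2 = L.Linear(100, 10)

    def __call__(self, x, use_relu=True):
        h = self.l1(x)
        # Python control flow inside the forward computation;
        # the chosen branch is recorded in the dynamic graph.
        h = F.relu(h) if use_relu else F.sigmoid(h)
        return self.l2(h)

model = MLP()
x = np.random.rand(8, 784).astype(np.float32)
t = np.random.randint(0, 10, size=8).astype(np.int32)
loss = F.softmax_cross_entropy(model(x), t)
model.cleargrads()
loss.backward()  # gradients flow through the dynamically built graph
```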
We recommend the following Linux* distributions.
The following versions of Python can be used:
The recommended environments above are tested. We cannot guarantee that Intel® optimization for Chainer works on other environments, including Windows* and macOS*, even if it appears to run correctly.
Before installing Intel Optimization for Chainer, we recommend upgrading setuptools if you are using an old version:
$ pip install -U setuptools
The following packages are required to install Intel optimization for Chainer.
Image dataset support
HDF5 serialization support
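As an illustration of what HDF5 serialization support enables, here is a minimal sketch (assuming the optional h5py package is installed) that saves and restores model parameters with `chainer.serializers`:

```python
import chainer.links as L
from chainer import serializers

model = L.Linear(784, 10)                  # any Link or Chain can be serialized
serializers.save_hdf5('model.h5', model)   # write parameters to an HDF5 file
serializers.load_hdf5('model.h5', model)   # restore them into a compatible model
```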
You can use setup.py to install Intel optimization for Chainer from the tarball:
$ python setup.py install
Use pip to uninstall Chainer:
$ pip uninstall chainer
We provide Docker images and Dockerfiles for Ubuntu and CentOS, based on Python 2 and Python 3. For details, see: How to build and run Intel optimization for Chainer Docker image.
Training test with the MNIST dataset:
$ cd examples/mnist
$ python train_mnist.py -g -1
Training test with the CIFAR datasets:
$ cd examples/cifar
$ python train_cifar.py -g -1 --dataset='cifar100'

$ cd examples/cifar
$ python train_cifar.py -g -1 --dataset='cifar10'
For Single Node Performance Test Configurations, please refer to the following wiki:
MIT License (see LICENSE file).
Tokui, S., Oono, K., Hido, S. and Clayton, J., "Chainer: a Next-Generation Open Source Framework for Deep Learning," Proceedings of the Workshop on Machine Learning Systems (LearningSys) in The Twenty-Ninth Annual Conference on Neural Information Processing Systems (NIPS), 2015. URL, BibTex
Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.
Notice revision #20110804