Popular Deep Learning Frameworks


Deep learning frameworks provide data scientists, developers, and researchers with high-level programming interfaces for architecting, training, and validating deep neural networks.

 

TensorFlow*  |  PyTorch*  |  PaddlePaddle*  |  Caffe*

TensorFlow*

Based on Python*, this deep learning framework is designed for the flexible implementation and extension of modern deep neural networks. In collaboration with Google*, TensorFlow* has been directly optimized for Intel® architecture to achieve high performance on Intel® Xeon® Scalable processors.
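The Intel optimizations mentioned above ship in stock TensorFlow through the oneDNN library (formerly Intel® MKL-DNN). A minimal sketch, assuming TensorFlow 2.x is installed, that explicitly enables the oneDNN code path and runs a CPU matrix multiply:

```python
# Minimal sketch, assuming TensorFlow 2.x is installed. TF_ENABLE_ONEDNN_OPTS
# is the documented switch for the oneDNN-optimized CPU kernels (on by default
# on Intel CPUs since TF 2.9); it must be set before TensorFlow is imported.
import os
os.environ.setdefault("TF_ENABLE_ONEDNN_OPTS", "1")

import tensorflow as tf

a = tf.random.uniform((256, 256))
b = tf.random.uniform((256, 256))
c = tf.matmul(a, b)   # on Intel CPUs this dispatches to oneDNN-optimized kernels
print(c.shape)        # (256, 256)
```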

Learn More

Downloads

Containers  |  Binaries  |  Source Code


PyTorch*

This Python* package provides one of the fastest implementations of dynamic neural networks, delivering both speed and flexibility. In collaboration with Facebook*, this popular framework now includes many Intel® optimizations for superior performance on Intel® architecture, most notably Intel® Xeon® Scalable processors.
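"Dynamic" here means define-by-run: the graph is built as Python executes, so ordinary control flow can change the network's behavior per input. A minimal sketch (the `DynamicNet` module is hypothetical, and assumes PyTorch is installed):

```python
import torch

class DynamicNet(torch.nn.Module):
    """Hypothetical illustration: the layer count is chosen at run time."""
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(8, 8)

    def forward(self, x):
        # Ordinary Python control flow becomes part of the graph on each call --
        # the hallmark of a dynamic (define-by-run) framework.
        depth = int(x.abs().sum().item()) % 3 + 1
        for _ in range(depth):
            x = torch.relu(self.linear(x))
        return x

net = DynamicNet()
out = net(torch.randn(4, 8))
print(tuple(out.shape))   # (4, 8)

# The Intel optimizations are exposed through the oneDNN (MKL-DNN) backend:
print(torch.backends.mkldnn.is_available())
```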

Downloads

Binaries  |  Source Code

Documentation

Get Started Guide


PaddlePaddle*

This open source deep learning Python* framework from Baidu is known for user-friendly, scalable operation. Built with the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN), this popular framework delivers fast performance on Intel® Xeon® Scalable processors, along with a large collection of tools for AI developers.

Downloads

Source Code

Documentation

Get Started Guide




Caffe*

Caffe* was created by the Berkeley Vision and Learning Center (BVLC) and community contributors; the Intel® Optimization for Caffe* is an Intel-maintained fork optimized for Intel® architecture. This optimized branch is one of the most popular frameworks for image recognition and delivers improved performance on Intel® Xeon® Scalable processors.

Learn More

Downloads

Containers  |  Binaries  |  Source Code

Explore the Intel® DevCloud

With built-in AI acceleration, this powerful cluster of the latest Intel® Xeon® Scalable processors provides the right environment for experimenting with and debugging your code.