Get high-performance Python at your fingertips with the free Intel® Distribution for Python. Intel released the Intel® Distribution for Python* in September 2016, and it has made a huge impact in advancing the performance of Python closer to native code. In this video, Sergey highlights the many performance optimizations and enhancements included, such as NumPy and SciPy optimizations with the Intel® Math Kernel Library, scikit-learn optimizations with the Intel® Data Analytics Acceleration Library, NumPy memory optimizations, and composable parallelism opportunities with the TBB package. Learn how Intel contributes to the Python community by making these optimizations available through multiple channels.
- Intel® Distribution for Python* Home Page
- Intel® Distribution for Python* Home Page Benchmarks
- Intel® Distribution for Python* 2017 Update 2 Accelerates Five Key Areas for Impressive Performance Gains
- Intel® Distribution for Python* Forum
- Intel® Distribution for Python* Docker Hub
- Watch the rest of the videos in the Playlist
- Intel® Math Kernel Library Home Page
- Subscribe to the Intel® Software YouTube Channel
Hi. My name is Sergey Maidanov. In this video, we'll be talking about Python and how it can help accelerate technical computing and machine learning. I will also highlight some key features of the Intel Distribution for Python. Stay with me to learn more.
Python is known as a popular and powerful language used across various application domains. Being an interpreted language, it has inherent performance constraints that limit its usage to environments that are not very demanding of performance. Python's low efficiency in production environments creates an organizational challenge when companies and institutions need two distinct teams: one that prototypes the numerical model in Python, and another that rewrites it in a different language to deploy it in production.
Our team's mission at Intel is to bring Python performance up to the point where a prototype numerical or machine learning model can be deployed in production without the need to rewrite it in a different programming language. Since our target customers [INAUDIBLE] with development productivity, we aim to deliver performance on Intel architecture out of the box, with relatively small effort on the user's side.
Let me briefly outline what Intel Python is and how it brings performance efficiency. We deliver a pre-built Python along with the most popular packages for numerical computing and data science, such as NumPy, SciPy, and scikit-learn, all linked with Intel's performance libraries, such as MKL and DAAL, for near-native-code speeds. Intel Python also comes with productivity tools such as Jupyter notebooks and [INAUDIBLE]. It also ships with the conda and pip package managers, which let you seamlessly install any other package available in the community.
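As a quick sanity check (not shown in the video), you can ask NumPy which BLAS/LAPACK implementation it was built against; with Intel's build, the configuration typically reports MKL libraries such as `mkl_rt`. Any dense linear-algebra call then dispatches to those optimized kernels:

```python
import numpy as np

# Print NumPy's build configuration; in the Intel Distribution for Python
# the BLAS/LAPACK sections typically list MKL (e.g. the "mkl_rt" library).
np.show_config()

# A workload that exercises the optimized BLAS: dense matrix multiplication.
a = np.random.rand(1000, 1000)
b = np.random.rand(1000, 1000)
c = a @ b  # routed through the linked BLAS (MKL in Intel's build)
print(c.shape)  # (1000, 1000)
```

The same script runs unchanged on stock NumPy; only the linked backend (and hence the speed) differs.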
For machine learning, our distribution comes with optimized deep learning software, Caffe and Theano, as well as classic machine learning libraries like scikit-learn and pyDAAL. We also package Cython and Numba for tuning performance hotspots to native speeds. And for [INAUDIBLE] performance, we ship mpi4py accelerated with Intel MPI. The Intel Python distribution is available in a variety of options, so don't forget to follow the links below to access it.
Let me illustrate the out-of-the-box performance with the example of a Black-Scholes formula application run in a prototype environment on an Intel Core-based processor and in production on Intel Xeon and Xeon Phi servers. The bars show the performance we can attain with stock NumPy, illustrated by the dark blue bars, and with the NumPy shipped with Intel Python, represented by the light bars. You can see that Intel's NumPy delivers significantly better performance on the Intel Core-based system.
But it scales only across the relatively small problem sizes shown on the horizontal axis as the total number of options to price. This is typical of a prototype environment: you build and test your model on a relatively small problem first, and then deploy it in production to run at full scale on powerful CPUs.
This graph shows how the same application scales in production on an Intel Xeon-based server. You can see that Intel Python delivers much better performance and scales really well to large problems. Next, this graph shows how the same application scales on an Intel Xeon Phi-based system. You can see that Intel Python delivers even better performance on this highly parallel workload, which scales well for large enough problems.
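For readers who want to try a workload like this themselves, here is a hypothetical sketch of a vectorized Black-Scholes call-pricing benchmark in NumPy/SciPy. The parameter ranges and option count are invented for illustration and are not the exact configuration behind the charts; the point is that the transcendental functions (`log`, `sqrt`, `exp`, `erf`) operate on whole arrays, which is where the optimized Intel builds pay off.

```python
import numpy as np
from scipy.special import erf

def norm_cdf(x):
    # Standard normal CDF expressed via the error function.
    return 0.5 * (1.0 + erf(x / np.sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    # Vectorized Black-Scholes call price; every operation is an
    # array-wide NumPy/SciPy call dispatched to optimized kernels.
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm_cdf(d1) - K * np.exp(-r * T) * norm_cdf(d2)

n = 1_000_000  # total number of options to price (the benchmark's x-axis)
rng = np.random.default_rng(0)
S = rng.uniform(10.0, 50.0, n)   # spot prices (illustrative range)
K = rng.uniform(10.0, 50.0, n)   # strike prices (illustrative range)
T = rng.uniform(1.0, 2.0, n)     # times to maturity in years
prices = black_scholes_call(S, K, T, r=0.1, sigma=0.2)
print(prices.shape)  # (1000000,)
```

Timing this function for growing `n` on stock NumPy versus Intel's NumPy reproduces the kind of comparison the bars in the video show.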
Besides the Intel Python engineering itself, we work with all major Python vendors and the open source community to make these optimizations broadly accessible. And we encourage you to take advantage of Intel Python's exceptional performance in your own numerical and machine learning projects. Every option to get Intel Python is free for academic and commercial use, so don't forget to follow the links to access it. And thanks for watching.