Intel® Distribution of Modin*

Scale your pandas workflows by changing a single line of code.

Top Benefits:

  • Drop-in acceleration for your existing pandas workflows
  • No upfront cost to learn a new API
  • Integrates with the Python* ecosystem
  • Seamlessly scales across multiple cores with Ray* and Dask* clusters (run on and with what you have)


A Single-Line Code Change Provides Infinite Scalability
Use `import modin.pandas as pd` to use the pandas API with your data analysis workload, no matter the scale.
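A minimal sketch of the one-line change, assuming Modin is installed (e.g. `pip install modin[ray]`). The DataFrame contents and column names are illustrative; the snippet below runs with stock pandas, and swapping the commented import is the only change needed to run it on Modin:

```python
# The single-line change:
#   import pandas as pd          # before
#   import modin.pandas as pd    # after (requires Modin to be installed)
# Everything below is unchanged either way.
import pandas as pd

# Illustrative workload: group and aggregate a small DataFrame.
df = pd.DataFrame({"a": range(6), "b": [1, 2, 3, 4, 5, 6]})
result = df.groupby("a").sum()
print(int(result["b"].sum()))  # → 21, identical under pandas or Modin
```

Because Modin implements the pandas API, the rest of an existing script needs no modification.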

Use the Performant OmniSci* Back End to Boost Performance

The Intel Distribution of Modin back end is powered by OmniSci, a performant framework for end-to-end analytics. It boosts performance by harnessing the compute power of existing and emerging Intel hardware.
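A hedged configuration sketch for selecting the OmniSci back end. The environment variable names below follow Modin's configuration conventions but may vary by version; check the Intel Distribution of Modin documentation for your release:

```shell
# Assumed variable names; verify against your installed Modin version.
export MODIN_ENGINE=native           # run computation through the native engine
export MODIN_STORAGE_FORMAT=omnisci  # use the OmniSci storage format
python my_workload.py                # my_workload.py is a placeholder script name
```

Set these before the Python process starts so Modin picks them up at import time.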

Use pandas across All Available Cores, Not Just One
Using Dask and Ray, Intel Distribution of Modin transparently distributes the data and computation across available cores, unlike pandas, which only uses one core at a time.
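A sketch of choosing the distribution engine. `MODIN_ENGINE` is Modin's documented selector; the script name is a placeholder:

```shell
# Pick one execution engine; Modin partitions the DataFrame
# and schedules work across all available cores on that engine.
export MODIN_ENGINE=ray    # distribute with Ray
# export MODIN_ENGINE=dask # or distribute with Dask
python my_workload.py      # my_workload.py is a placeholder script name
```

If `MODIN_ENGINE` is unset, Modin falls back to whichever supported engine it finds installed.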

Process Terabytes of Data on a Single Workstation
Enabled by Intel® Optane™ memory, a single workstation running Intel Distribution of Modin has the memory capacity and performance of Apache Spark* platform services on a 20-node Amazon Web Services (AWS)* cluster. You can also easily scale your workload to the cloud or a high-performance computing environment as needed.