oneContainer Portal

Search for optimized containers and solutions from Intel. Get production-quality Docker* containers designed to meet your specific needs for HPC, AI, machine learning, IoT, media, rendering, and more. The oneContainer portal also includes Kubernetes* packages, Helm* charts, AI models, and pipelines.

Intel® oneAPI

oneAPI is an open, unified programming model built on standards to simplify the development and deployment of data-centric workloads across CPUs, GPUs, FPGAs, and other accelerators. All Intel® oneAPI toolkits are available as Docker* containers.

Intel® oneAPI AI Analytics Toolkit Container

Speed up AI development with tools for deep learning training, inference, and data analytics.
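
For example, the toolkit bundles Intel® Extension for Scikit-learn*, which accelerates classical machine learning with a two-line change. A minimal sketch, assuming the sklearnex package the toolkit ships is importable (the data here is synthetic):

    # Minimal sketch: route scikit-learn to Intel-optimized implementations.
    from sklearnex import patch_sklearn
    patch_sklearn()  # must run before importing sklearn estimators

    import numpy as np
    from sklearn.cluster import KMeans

    X = np.random.rand(10_000, 20).astype(np.float32)  # synthetic data
    model = KMeans(n_clusters=8, n_init=10).fit(X)     # now oneDAL-backed
    print(model.inertia_)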

Intel® oneAPI Base Toolkit Container

Code high-performance, data-centric applications across diverse architectures.

Intel® oneAPI DL Framework Developer Toolkit Container

Design and build your own deep learning framework for high-performance applications.

Intel® oneAPI HPC Toolkit Container

Develop, analyze, optimize, and scale HPC applications with the latest techniques.

Intel® oneAPI IoT Toolkit Container

Build high-performing, efficient, reliable solutions that run at the network’s edge.

Intel® oneAPI Runtime Libraries

Access runtime versions of the oneAPI libraries for deployment with your applications. Includes 10+ libraries.

AI Solutions

Compute-intensive AI workloads perform inference and training tasks that often scale beyond a single processor to multiple machines. Intel's AI container solutions simplify this scaling for cloud and on-premises deployments.

  • Run your own custom model using the latest version of Intel® Optimization for TensorFlow* (see the sketch after this list).
  • Scale workloads in a data center with our Kubernetes packages.
  • Run a model package on bare metal with containers optimized for Intel® Xeon® processor-based platforms.
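
As a rough illustration of the first point, here is a minimal sketch of a custom Keras model running under Intel® Optimization for TensorFlow*. Inside the container, the standard tensorflow import already resolves to the Intel-optimized, oneDNN-backed build, so no code changes are needed; the model and data below are synthetic placeholders:

    # Minimal sketch: a custom Keras model on Intel Optimization for TensorFlow.
    # oneDNN-accelerated kernels are picked up automatically by this build.
    import numpy as np
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(64,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

    x = np.random.rand(1024, 64).astype(np.float32)  # synthetic features
    y = np.random.randint(0, 10, size=(1024,))       # synthetic labels
    model.fit(x, y, batch_size=128, epochs=1)
    print(model.predict(x[:4]))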

View AI Solutions

Intel® Optimizations for TensorFlow* with Jupyter Notebook*, Open MPI*, and Horovod*

Use this container to scale machine learning training across multiple systems, with or without Kubernetes.
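
A minimal sketch of what training code in this container typically looks like, using the bundled Horovod* Keras API; the launch command and hyperparameters are illustrative, not prescribed by the image:

    # Minimal sketch: data-parallel training with Horovod on TensorFlow/Keras.
    # Launch with e.g. `horovodrun -np 4 python train.py` over the bundled
    # Open MPI; each process trains on its own shard of the data.
    import numpy as np
    import tensorflow as tf
    import horovod.tensorflow.keras as hvd

    hvd.init()  # one process per worker; hvd.rank()/hvd.size() identify them

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
        tf.keras.layers.Dense(1),
    ])
    # Scale the learning rate by the worker count, then wrap the optimizer
    # so gradients are averaged across workers before updates are applied.
    opt = hvd.DistributedOptimizer(tf.keras.optimizers.SGD(0.01 * hvd.size()))
    model.compile(optimizer=opt, loss="mse")

    x = np.random.rand(4096, 32).astype(np.float32)  # synthetic data
    y = np.random.rand(4096, 1).astype(np.float32)
    model.fit(
        x, y, batch_size=64, epochs=1,
        # Keep workers consistent by broadcasting rank 0's initial weights.
        callbacks=[hvd.callbacks.BroadcastGlobalVariablesCallback(0)],
        verbose=1 if hvd.rank() == 0 else 0,
    )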

ResNet50v1.5 INT8 Inference TensorFlow* Container

Evaluate the performance of ResNet50v1.5 INT8 inference on a server with Intel® Xeon® processors.
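
The container ships its own INT8 model package and benchmark scripts; purely to show the shape of such a measurement, here is a sketch that times the stock FP32 Keras ResNet50 as a stand-in:

    # Minimal sketch of a throughput measurement. The container's own INT8
    # model package and scripts are not shown here; stock FP32 ResNet50 with
    # random weights is enough to illustrate the timing loop.
    import time
    import numpy as np
    import tensorflow as tf

    model = tf.keras.applications.ResNet50(weights=None)
    batch = np.random.rand(32, 224, 224, 3).astype(np.float32)

    model.predict(batch)  # warm-up run to trigger graph building
    runs = 20
    start = time.time()
    for _ in range(runs):
        model.predict(batch)
    elapsed = time.time() - start
    print(f"{runs * 32 / elapsed:.1f} images/sec")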

Docker* Images for Open Visual Cloud

Open Visual Cloud (OVC) is a set of open-source software stacks for media, analytics, graphics, and immersive media. The stacks are optimized for cloud-native deployment and offer FFmpeg* and GStreamer* frameworks as part of the Docker images.

  • Optimized software with the ability to easily add hardware acceleration
  • Stacks are end-to-end tested and patched
  • Stacks use open recipes for non-Docker applications

View Open Visual Cloud Solutions

Open Visual Cloud Media Delivery: FFmpeg for Development Systems with Intel® Xeon® Processors Using Ubuntu* 18.04

This media creation and delivery container is based on the FFmpeg framework and includes codecs such as AAC*, Opus, Ogg*, Vorbis*, x264*, x265*, VP8/9*, AV1*, and SVT-HEVC*.
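
As one illustration of using the bundled codecs, a minimal sketch that drives the container's FFmpeg build from Python; the file names are placeholders:

    # Minimal sketch: transcode a clip with the container's FFmpeg build,
    # using the bundled x264 and AAC codecs. Paths are placeholders.
    import subprocess

    subprocess.run(
        [
            "ffmpeg",
            "-i", "input.mp4",   # source clip (placeholder path)
            "-c:v", "libx264",   # H.264 video via the bundled x264 encoder
            "-crf", "23",        # constant-rate-factor quality target
            "-c:a", "aac",       # AAC audio
            "output.mp4",
        ],
        check=True,
    )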

Open Visual Cloud Media Analytics: GStreamer for Development Systems with Intel® Xeon® Processors Using Ubuntu* 18.04

This media analytics-focused container builds on the media delivery GStreamer image and adds the Intel® Distribution of OpenVINO™ toolkit inference engine and video analytics plug-ins.
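
To give a flavor of the inference-engine side, a minimal sketch that loads an OpenVINO™ IR model and runs one request on the CPU; the model path is a placeholder, and the Python API shown is from recent OpenVINO releases, which may differ from the version in a given image:

    # Minimal sketch: one CPU inference request with the OpenVINO runtime.
    # The IR file name is a placeholder; the input is a stand-in video frame.
    import numpy as np
    from openvino.runtime import Core

    core = Core()
    model = core.read_model("model.xml")         # IR graph (placeholder path)
    compiled = core.compile_model(model, "CPU")  # compile for the host CPU

    frame = np.random.rand(1, 3, 416, 416).astype(np.float32)
    results = compiled([frame])                  # run one inference request
    print(results[compiled.output(0)].shape)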

System Stacks for Linux* OS

Intel provides production-ready reference architectures built from open-source components. The stacks help you prototype quickly while giving you the flexibility to customize your solutions.

  • Highly tuned and built for cloud-native environments
  • Software integrations between components already complete
  • Optimized for 2nd Generation Intel® Xeon® Scalable processors

View System Stack Solutions

High-Performance Computing (HPC) Reference Stack

Deploy HPC and AI workloads on the same system while reducing the complexity of integrating the software components that HPC workloads require.

Deep Learning Reference Stack (DLRS) v8

The Deep Learning Reference Stack helps AI developers deliver the best experience on Intel® architecture. This stack reduces the complexity that is common with deep learning software components, provides flexibility for customized solutions, and enables you to quickly prototype and deploy deep learning workloads.