Intel® oneAPI Math Kernel Library (oneMKL) Essentials

Learn how to create performant applications and speed up computations with low-level math routines using the oneAPI programming model.

Start Learning oneMKL

Get hands-on practice with code samples in Jupyter Notebooks running live on Intel® DevCloud.


To get started:

  1. Sign in to Intel DevCloud, select One Click Log In for JupyterLab, and then select Launch Server (if needed).
  2. Open the Intel_oneAPI_MKL_Training folder, and then select oneMKL_Intro.ipynb.

Introduction to JupyterLab* and Notebooks

Use Jupyter Notebooks to modify, compile, and run code as part of the learning exercises.

Note If you are already familiar with Jupyter Notebooks, you may skip this module.

To begin, open Introduction_to_Jupyter.ipynb.

GEMM: Use DPC++ and Buffer Model

  • Implement a GEMM (general matrix multiply) application with the buffer-and-accessor style of memory management.
  • Successfully compile and run the GEMM application using DPC++.

GEMM: Use DPC++ Unified Shared Memory (USM)

  • Set up the DPC++ components necessary to run the oneMKL GEMM operation using a unified shared memory model with implicit memory management.
  • Successfully compile and run the GEMM application using DPC++.

GEMM: Use OpenMP* Offload

  • Implement a oneMKL GEMM application using OpenMP* Offload.
  • Learn the compiler directives needed to manage memory, dispatch oneMKL functions, and select offload devices with OpenMP for the GEMM operation.
  • Compile and run the GEMM application using the Intel® Compiler with OpenMP* Offload support, and then verify the results of the offloaded task.