Use these courses to get up to speed on oneAPI Data Parallel C++ (DPC++) code and on using oneAPI Toolkits and components to achieve cross-platform, heterogeneous compute.
| Course | Requirement |
|---|---|
| Introducing oneAPI: A Unified, Cross-Architecture Performance Programming Model | Mandatory |
| Intel® DevCloud Tutorial | |
| Migrate Your Existing CUDA Code to DPC++ Code | Mandatory |
| DPC++ Program Structures | Mandatory |
| DPC++ New Features | Mandatory |
| Develop in a Heterogeneous Environment with Intel® oneAPI Math Kernel Library | Optional |
| Intel® oneAPI Threading Building Blocks: Optimizing for NUMA Architectures | Optional |
| Customize Your Workloads with FPGAs | Optional |
The drive for compute innovation is as old as computing itself, with each advancement built upon what came before. In 2019 and 2020, a primary focus of next-generation compute innovation was enabling increasingly complex workloads to run across multiple architectures, including CPUs, GPUs, FPGAs, and AI accelerators.
Historically, writing and deploying code for a CPU and a GPU or other accelerator has required separate code bases, libraries, languages, and tools. oneAPI was created to solve this challenge.
Kent Moffat, software specialist and Intel senior product manager, presents:
Develop, run, and optimize your Intel® oneAPI solution in the Intel® DevCloud—a free development sandbox to learn about and program oneAPI cross-architecture applications. Get full access to the latest Intel CPUs, GPUs, and FPGAs, Intel® oneAPI Toolkits, and the new programming language, Data Parallel C++ (DPC++).
Some of the lessons and training materials use the Intel DevCloud as a platform to host the training and to practice what you've learned.
In this video, Intel senior software engineers Sunny Gogar and Edward Mascarenhas show you how to use the Intel® DPC++ Compatibility Tool to perform a one-time migration that ports both kernels and API calls. You will also learn the following:
This module introduces DPC++ program structure and focuses on important SYCL* classes to write basic DPC++ code to offload to accelerator devices.
This module introduces some of the new extensions added to DPC++, such as Unified Shared Memory (USM), in-order queues, and sub-groups. The module will be updated as new extensions reach public releases.
Peter Caday, math algorithm engineer at Intel, discusses how oneMKL enables developers to program with GPUs beyond the traditional CPU-only support.
Threading Building Blocks (TBB) is a high-level C++ template library for parallel programming that was originally developed as a composable, scalable solution for multicore platforms. Separately, in the realm of high-performance computing, multisocket Non-Uniform Memory Access (NUMA) systems are typically used with OpenMP*.
Increasingly, many independent software components require parallelism within a single application, especially in the AI, video processing, and rendering domains. In such environments, performance may degrade if components cannot compose their parallelism with one another.
The result is that many developers have pulled TBB into NUMA environments—a complex task for even the most seasoned programmers.
Intel is working to simplify the approach. This training:
This course teaches you how to configure FPGAs into custom solutions to speed up key workloads using Intel oneAPI Toolkits. At the end of this course, you will be able to:
For this course, please contact your Intel® representative to schedule instructor-led training.
Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable user and reference guides for more information regarding the specific instruction sets covered by this notice.
Notice revision #20110804