Essentials of Data Parallel C++

Learn the fundamentals of this language designed for data parallel and heterogeneous computing through hands-on practice in this guided learning path.

Start Learning DPC++

Get hands-on practice with code samples in Jupyter Notebooks running live on Intel DevCloud.


To get started:

  1. Sign in to Intel DevCloud, select One Click Log In for JupyterLab, select Launch Server (if needed), and then from the launcher, select Terminal.
  2. At the command prompt, type /data/oneapi_workshop/get_jupyter_notebooks.sh, and then press Enter.
  3. Open the oneAPI_Essentials folder, and then double-click 00_Introduction_to_Jupyter to open it.

Introduction to JupyterLab and Notebooks

Use Jupyter notebooks to modify and run code as part of learning exercises.

To begin, open Introduction_to_Jupyter.ipynb.

Introduction to DPC++

  • Articulate how oneAPI can help to solve the challenges of programming in a heterogeneous world.
  • Use oneAPI solutions to enable your workflows.
  • Understand the DPC++ language and programming model.
  • Become familiar with using Jupyter notebooks for training throughout the course.

DPC++ Program Structure

  • Articulate the SYCL fundamental classes.
  • Use device selection to offload kernel workloads.
  • Decide when to use basic parallel kernels and ND-range kernels.
  • Create a host accessor.
  • Build a sample DPC++ application through hands-on lab exercises (a minimal sketch of such an application follows this list).
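
This module builds a working DPC++ program from those pieces. As a rough preview, here is a minimal sketch, assuming a SYCL 2020-style compiler such as the Intel oneAPI DPC++ compiler; the problem size, the doubling kernel, and the variable names are illustrative choices, not course material. It selects a device through a queue, submits a basic parallel kernel over a buffer, and reads the result back on the host through a host accessor.

    #include <sycl/sycl.hpp>
    #include <iostream>
    #include <vector>

    int main() {
      constexpr size_t N = 1024;                       // illustrative problem size
      std::vector<int> data(N, 1);

      sycl::queue q{sycl::default_selector_v};         // device selection: GPU if present, else CPU
      std::cout << "Device: "
                << q.get_device().get_info<sycl::info::device::name>() << "\n";

      {
        sycl::buffer<int> buf{data.data(), sycl::range<1>{N}};

        // Basic parallel kernel: one work-item per element, doubling each value.
        q.submit([&](sycl::handler &h) {
          sycl::accessor acc{buf, h, sycl::read_write};
          h.parallel_for(sycl::range<1>{N}, [=](sycl::id<1> i) { acc[i] *= 2; });
        });

        // Host accessor: waits for the kernel to finish, then exposes the results on the host.
        sycl::host_accessor result{buf, sycl::read_only};
        std::cout << "data[0] = " << result[0] << "\n"; // prints 2
      }
      return 0;
    }

With the Intel oneAPI Base Toolkit installed, a file like this typically compiles with dpcpp (or icpx -fsycl in more recent releases).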

DPC++ Unified Shared Memory

  • Use new DPC++ features like Unified Shared Memory (USM) to simplify programming.
  • Understand implicit and explicit ways of moving memory using USM.
  • Resolve data dependencies between kernel tasks efficiently (see the USM sketch after this list).
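
As a rough preview of this module, here is a minimal USM sketch under the same SYCL 2020 assumptions; the allocation size and the two chained kernels are illustrative. It uses malloc_shared for implicit data movement (explicit movement would use malloc_device plus queue::memcpy instead) and passes the first kernel's event to the second kernel to express the data dependency.

    #include <sycl/sycl.hpp>
    #include <iostream>

    int main() {
      constexpr size_t N = 1024;                       // illustrative size
      sycl::queue q;

      // Implicit movement: a shared allocation migrates between host and device on demand.
      int *data = sycl::malloc_shared<int>(N, q);
      for (size_t i = 0; i < N; ++i) data[i] = static_cast<int>(i);

      // First kernel: add 1 to every element.
      auto e1 = q.parallel_for(sycl::range<1>{N},
                               [=](sycl::id<1> i) { data[i] += 1; });

      // Second kernel depends on the first; passing e1 makes that dependency explicit.
      auto e2 = q.parallel_for(sycl::range<1>{N}, e1,
                               [=](sycl::id<1> i) { data[i] *= 2; });
      e2.wait();

      std::cout << "data[10] = " << data[10] << "\n";  // (10 + 1) * 2 = 22
      sycl::free(data, q);
      return 0;
    }

An in-order queue (constructed with sycl::property::queue::in_order) is another way to serialize dependent kernels without handling events by hand.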

DPC++ Sub-Groups

  • Understand the advantages of using sub-groups in DPC++.
  • Take advantage of sub-group collectives in an ND-range kernel implementation.
  • Use sub-group shuffle operations to avoid explicit memory operations (both are illustrated in the sketch after this list).
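
As a rough preview, here is a sketch of both ideas under the same SYCL 2020 assumptions; the work-group size of 16 and the toy arithmetic are illustrative. Each work-item obtains its sub-group from the nd_item, reads a neighbour's value with a sub-group shuffle instead of going through global or local memory, and computes a sub-group-wide sum with a group collective.

    #include <sycl/sycl.hpp>
    #include <iostream>

    int main() {
      constexpr size_t N = 64;                         // illustrative sizes
      constexpr size_t WG = 16;
      sycl::queue q;

      int *data = sycl::malloc_shared<int>(N, q);
      for (size_t i = 0; i < N; ++i) data[i] = static_cast<int>(i);

      // ND-range kernel: N work-items split into work-groups of WG.
      q.parallel_for(sycl::nd_range<1>{N, WG}, [=](sycl::nd_item<1> item) {
        auto sg = item.get_sub_group();
        size_t i = item.get_global_id(0);

        // Sub-group shuffle: read the next work-item's value without touching memory.
        // (The value read by the last work-item in each sub-group is unspecified.)
        int neighbour = sycl::shift_group_left(sg, data[i], 1);

        // Sub-group collective: sum over all work-items in this sub-group.
        int sum = sycl::reduce_over_group(sg, data[i], sycl::plus<>());

        data[i] = neighbour + sum;
      }).wait();

      std::cout << "data[0] = " << data[0] << "\n";
      sycl::free(data, q);
      return 0;
    }

The actual sub-group size is chosen by the compiler and device, so a real kernel would also query it (for example via sub_group::get_local_range) and handle the sub-group boundary, where the shuffled value is unspecified.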

Demonstration of Intel® Advisor

  • See how Offload Advisor¹ identifies and ranks parallelization opportunities for offload.
  • Run Offload Advisor using command line syntax.
  • Use performance models and analyze generated reports.

¹ Offload Advisor is a feature of Intel Advisor (Beta), installed as part of the Intel oneAPI Base Toolkit.

Intel® VTune™ Profiler on Intel® DevCloud

  • Profile a DPC++ application using Intel® VTune™ Profiler (Beta) on Intel® DevCloud.
  • Understand the basics of command line options in Intel VTune Profiler to collect data and generate reports.
