
Part 2: Offload Programming for Intel® Coprocessors

  • Overview

In episode 2 of the “Hands-On Workshop (HOW) series on parallel programming and optimization with Intel® architectures”, we focus on using the Intel® Xeon Phi™ platform as a coprocessor in the offload programming model.

In this episode we talk about:

• The explicit offload model based on compiler pragmas
• How to offload functions
• How to offload local scalars and arrays of known size
• How to marshal data for pointer-based arrays in C and C++ (see the sketch after this list)
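
As a rough illustration of these points, here is a minimal sketch of the explicit offload model using LEO pragmas; the function name sum_array, the array data, and the sizes are made up for the example, and the exact clauses are covered in the episode:

    #include <stdio.h>
    #include <stdlib.h>

    /* Code called inside an offload region must also be compiled for the
       coprocessor; the target(mic) attribute requests that. */
    __attribute__((target(mic)))
    double sum_array(const double *a, int n) {
        double s = 0.0;
        for (int i = 0; i < n; i++)
            s += a[i];
        return s;
    }

    int main(void) {
        const int n = 1 << 20;
        double *data = (double *)malloc(n * sizeof(double)); /* pointer-based array */
        for (int i = 0; i < n; i++)
            data[i] = 1.0;

        double result = 0.0; /* local scalar */

        /* Explicit offload: the compiler marshals the pointer-based array, so the
           in() clause must carry its length; the scalar result comes back via out(). */
        #pragma offload target(mic:0) in(data : length(n)) out(result)
        {
            result = sum_array(data, n);
        }

        printf("sum = %f\n", result);
        free(data);
        return 0;
    }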

Additional topics include:

• Fall-back to host
• Using multiple coprocessors
• Retaining memory buffers and data on coprocessors between offloads
• Overlapping communication and computation with asynchronous offload (see the sketch after this list)
• Using environment variables for offload control
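
The following sketch, with illustrative names buf and done, shows one common way these pieces fit together in LEO: an asynchronous transfer that retains its buffer on the coprocessor via alloc_if/free_if, host work that overlaps with the transfer, and a later offload that reuses the retained data. At run time, an environment variable such as OFFLOAD_REPORT can be set to trace the resulting transfers.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        const int n = 1000000;
        double *buf = (double *)malloc(n * sizeof(double));
        for (int i = 0; i < n; i++)
            buf[i] = 1.0;

        char done; /* tag identifying the asynchronous offload */

        /* Send the data asynchronously and keep the coprocessor-side buffer
           alive after the transfer returns: alloc_if(1) free_if(0). */
        #pragma offload_transfer target(mic:0) in(buf : length(n) alloc_if(1) free_if(0)) signal(&done)

        /* Useful host work can run here, overlapping with the data movement. */

        /* Block until the transfer tagged by &done has completed. */
        #pragma offload_wait target(mic:0) wait(&done)

        double result;
        /* Reuse the retained buffer without re-sending it (alloc_if(0)) and
           release it when this offload finishes (free_if(1)). */
        #pragma offload target(mic:0) nocopy(buf : length(n) alloc_if(0) free_if(1)) out(result)
        {
            result = 0.0;
            for (int i = 0; i < n; i++)
                result += buf[i];
        }

        printf("sum = %f\n", result);
        free(buf);
        return 0;
    }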

The Intel-proprietary API called LEO (Language Extensions for Offload) is compared with the standards-based offload API in OpenMP* 4.0. The shared virtual memory model for offload is also briefly introduced.
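
For comparison, here is a hedged sketch of the same kind of computation expressed with the standards-based OpenMP 4.0 target directive, where the map clauses play roughly the role of LEO's in/out clauses:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        const int n = 1 << 20;
        double *data = (double *)malloc(n * sizeof(double));
        for (int i = 0; i < n; i++)
            data[i] = 1.0;

        double result = 0.0;

        /* map(to:) copies the array to the device; map(tofrom:) brings the
           reduction result back to the host. */
        #pragma omp target map(to: data[0:n]) map(tofrom: result)
        #pragma omp parallel for reduction(+:result)
        for (int i = 0; i < n; i++)
            result += data[i];

        printf("sum = %f\n", result);
        free(data);
        return 0;
    }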

The hands-on part of the episode demonstrates how to port a simple application to the offload model with Intel Xeon Phi coprocessors.
