User Guide

Run Offload Modeling Perspective from GUI

Prerequisites:
  • For GPU-enabled applications (Data Parallel C++, OpenMP* target, OpenCL™): Set up environment variables to temporarily offload your application to a CPU for the analysis.
  • In the graphical-user interface (GUI): Create a project and specify an analysis target and target options.
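As a sketch, the temporary CPU fallback from the first prerequisite can look like the following. The variable names here are assumptions that depend on your compiler and runtime versions, so check your toolchain's documentation before relying on them:

```shell
# Assumed variable names -- verify against your toolchain's documentation.
# Route offloaded regions to the CPU for the duration of the analysis.
export OMP_TARGET_OFFLOAD=DISABLED    # OpenMP target: fall back to the host
export SYCL_DEVICE_FILTER=opencl:cpu  # DPC++/SYCL: select the OpenCL CPU device
```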
To configure and run the Offload Modeling perspective from the GUI:
  1. Configure the perspective and set analysis properties, depending on desired results:
    • Select a collection accuracy level with analysis properties preset for a specific result:
      • Low: Model your application performance for a target device and get basic information about potential speedup and performance.
      • Medium: Model your application performance and data transfers between host and target devices.
      • High: Model your application performance and data transfers, and analyze dependencies to improve offload modeling accuracy.
    • Select analyses and properties manually to adjust the perspective flow to your needs. The accuracy level is then set to Custom.
    The higher the accuracy level you choose, the higher the runtime overhead added to your application. The Overhead indicator shows the overhead for the selected configuration. For the Custom accuracy, the overhead is calculated automatically from the selected analyses and properties.
    By default, accuracy is set to Low. See Offload Modeling Accuracy Presets for more details.
  2. If you want to check the offload profitability for specific loops/functions instead of analyzing the whole application:
    1. Go to Project Properties > Performance Modeling.
    2. Enter --select=[(r|recursive):]<id>|<file>:<line>|<criteria>[,<id>|<file>:<line>|<criteria>,..] in the Other parameters field.
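For illustration, the same selection option can be passed to a command-line Performance Modeling run. In this hypothetical example, modeling is restricted to a loop at main.cpp line 42 and, recursively, the loop with ID 5; the file name and loop ID are placeholders, not values from this guide:

```shell
# Hypothetical selection: the loop at main.cpp:42 plus, recursively,
# the loop with ID 5 (file name and ID are placeholders).
advisor --collect=projection --project-dir=./advi --select=main.cpp:42,r:5
```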
  3. Select a target platform from the drop-down.
  4. Click the button to run the perspective.
    While the perspective is running, you can do the following in the Analysis Workflow tab:
    • Control the perspective execution:
      • Stop data collection and see the already collected data: Click the button.
      • Pause data collection: Click the button.
      • Cancel data collection and discard the collected data: Click the button.
    • Expand an analysis to control its execution:
      • Pause analysis and see the already collected data: Click the button.
      • Stop analysis and start the next analysis selected: Click the button.
      • Interrupt execution of all selected analyses and see the already collected data: Click the button.
    After you run the Offload Modeling perspective, the collected Survey data becomes available for all other perspectives. If you switch to another perspective, you can skip the Survey step and run only perspective-specific analyses.
To run the Offload Modeling perspective with the Medium accuracy from the command line interface:
  1. Run the Survey analysis:
    advisor --collect=survey --project-dir=./advi --stackwalk-mode=online --static-instruction-mix -- myApplication
  2. Collect Trip Counts and FLOP data:
    advisor --collect=tripcounts --project-dir=./advi --flop --stacks --enable-cache-simulation --data-transfer=light --target-device=gen11_icl -- myApplication
  3. Run Performance Modeling:
    advisor --collect=projection --project-dir=./advi --no-assume-dependencies
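The three steps above can be sketched as a single script, so a failure in an earlier analysis stops the flow before later steps run on incomplete data. The ./advi project directory and ./myApplication binary are the placeholder paths from the steps:

```shell
#!/bin/sh
# Sketch: the Medium-accuracy Offload Modeling flow as one script.
set -e  # stop if any analysis step fails

# 1. Survey analysis
advisor --collect=survey --project-dir=./advi \
        --stackwalk-mode=online --static-instruction-mix -- ./myApplication

# 2. Trip Counts and FLOP collection
advisor --collect=tripcounts --project-dir=./advi --flop --stacks \
        --enable-cache-simulation --data-transfer=light \
        --target-device=gen11_icl -- ./myApplication

# 3. Performance Modeling
advisor --collect=projection --project-dir=./advi --no-assume-dependencies
```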
To generate command lines for the selected perspective configuration, click the Command Line button.
Once the Offload Modeling perspective collects data, the report opens with a Summary tab showing performance metrics estimated for the selected target platform, such as estimated speedup, potential performance bottlenecks, and top offloaded loops. Depending on the selected accuracy level and perspective properties, continue to investigate the results.

Product and Performance Information

Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.