User Guide

run_oa.py Options

Collect basic data, perform markup, and collect refinement data, then run analysis on the profiling data. This script combines the separate collect.py and analyze.py scripts.

Usage

advisor-python <APM>/run_oa.py <project-dir> [--options] -- <target> [target-options]

Replace <APM> with $APM on Linux* OS or %APM% on Windows* OS.

Options

The following table describes the options that you can use with the run_oa.py script. The target application to analyze and its options, if any, must be preceded by two dashes and a space and placed at the end of the command.
<project-dir>
Required. Specify the path to the Intel® Advisor project directory.
-h | --help
Show a help message and exit.
-v <verbose> | --verbose <verbose>
Specify output verbosity level:
  • 1 - Show only error messages. This is the least verbose level.
  • 2 - Show warning and error messages.
  • 3 (default) - Show information, warning, and error messages.
  • 4 - Show debug, information, warning, and error messages. This is the most verbose level.
This option affects the console output and debug log, but does not affect logs and report results.
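For instance, a sketch of running the script with only warnings and errors printed to the console (reusing the ./advi project and myApplication target from the Examples section):

```shell
# Run collection and analysis, printing only warning and error messages (level 2)
advisor-python $APM/run_oa.py ./advi --verbose 2 -- myApplication
```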
-c {basic, refinement, full} | --collect {basic, refinement, full}
Specify the type of data to collect for the application:
  • basic - Collect basic data (Hotspots and FLOPs).
  • refinement - Collect refined data (Dependencies) for marked loops only.
  • full (default) - Collect both basic data for the application and refined data for marked loops.
--config <config>
Specify a configuration file by absolute path or name. If you choose the latter, the model configuration directory is searched for the file first, then the current directory.
You can specify several configurations by using the option more than once.
--data-reuse-analysis (default) | --no-data-reuse-analysis
Estimate data reuse between offloaded regions.
Disabling can decrease analysis overhead.
--data-transfer (default) | --no-data-transfer
Enable data transfer analysis.
Disabling can decrease analysis overhead.
--dry-run
Show the Intel® Advisor CLI commands for advisor that correspond to the specified configuration. No actual collection is performed.
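As a sketch, you can preview the underlying advisor commands without collecting any data (paths reuse those from the Examples section):

```shell
# Print the advisor CLI commands that would run; no data is collected
advisor-python $APM/run_oa.py ./advi --dry-run -- myApplication
```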
--enable-slm
Enable SLM modeling in the memory hierarchy model. Use this option with both collect.py and analyze.py.
--executable-of-interest <executable-name>
Specify the name of an executable process to profile if it is not the same as the application to run. Use this option if you run your application via a script or another binary. Specify only the name, not the full path.
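For example, suppose the application is launched through a wrapper script (run_app.sh is a hypothetical name) while the binary doing the actual work is myApplication:

```shell
# The command line runs the wrapper script, but myApplication is profiled
advisor-python $APM/run_oa.py ./advi --executable-of-interest myApplication -- ./run_app.sh
```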
--jit | --no-profile-jit
Enable data collection and analysis for applications with DPC++, OpenMP* target, and OpenCL™ code on a base platform.
-m [{all, generic, regions, omp, dpcpp, daal, tbb}] | --markup [{all, generic, regions, omp, dpcpp, daal, tbb}]
Mark up loops after survey or other data collection. Use this option to limit the scope of further collections by selecting loops according to a provided parameter:
  • all - Get lists of loop IDs to pass as the option for further collections.
  • generic (default) - Mark up all regions and select the most profitable ones.
  • regions - Select already existing parallel regions.
  • omp - Select outermost loops in OpenMP* regions.
  • dpcpp - Select outermost loops in Data Parallel C++ (DPC++) regions.
  • daal - Select outermost loops in Intel® oneAPI Data Analytics Library regions.
  • tbb - Select outermost loops in Intel® oneAPI Threading Building Blocks (oneTBB) regions.
The omp, dpcpp, and generic parameters select loops in the project so that the corresponding collection can be run without loop selection options.
You can specify several parameters in a comma-separated list. Loops are selected if they fit any of the specified parameters.
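For example, a sketch of marking up both OpenMP* and oneTBB loops in a single run (same project layout as in the Examples section):

```shell
# Select outermost loops in OpenMP* and oneTBB regions for further collections
advisor-python $APM/run_oa.py ./advi --markup omp,tbb -- myApplication
```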
--model-system-calls (default) | --no-model-system-calls
Analyze regions with system calls inside. The actual presence of system calls inside a region may reduce model accuracy.
--mpi-rank <mpi-rank>
Specify an MPI rank whose results to mark up for analysis if multiple ranks are analyzed.
--no-cache-sources
Disable keeping a source code cache within the project.
--no-cachesim
Disable cache simulation during collection. The model assumes a 100% cache hit rate. Use this option to decrease analysis overhead.
--no-stacks
Run data collection without collecting data distribution over stacks. You can use this option to reduce overhead at the potential expense of accuracy.
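Several of the options above trade accuracy for lower overhead; a minimal sketch combining two of them (paths as in the Examples section):

```shell
# Reduce collection overhead: skip cache simulation and stack data collection
advisor-python $APM/run_oa.py ./advi --no-cachesim --no-stacks -- myApplication
```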
-o <output-dir> | --out-dir <output-dir>
Specify the directory to put all generated files into. By default, results are saved in <advisor-project>/perf_models/mNNNN. If you specify an existing directory or an absolute path, results are saved in that directory; a new directory is created if it does not exist. If you specify only a directory <name>, results are stored in <advisor-project>/perf_models/<name>.
-p <output-name-prefix> | --out-name-prefix <output-name-prefix>
Specify a string to be prepended to output result filenames.
--track-heap-objects (default) | --no-track-heap-objects
Attribute heap-allocated objects to the analyzed loops that accessed these objects. Enabling can increase collection overhead.
--track-memory-objects (default) | --no-track-memory-objects
Attribute heap-allocated objects to the analyzed loops that accessed the objects.
Disabling can decrease analysis overhead.
--track-stack-accesses (default) | --no-track-stack-accesses
Track accesses to stack memory.

Examples

  • Collect full data on myApplication, run analysis with the default configuration, and save the project to the ./advi directory. The generated output is saved to the default advi/perf_models/mNNNN directory.
    advisor-python $APM/run_oa.py ./advi -- myApplication
  • Collect full data on myApplication, run analysis with the default configuration, save the project to the ./advi directory, and save the generated output to the advi/perf_models/report directory.
    advisor-python $APM/run_oa.py ./advi --out-dir report -- myApplication
  • Collect refinement data for DPC++ code regions on myApplication, run analysis with a custom configuration file config.toml, and save the project to the ./advi directory. The generated output is saved to the default advi/perf_models/mNNNN directory.
    advisor-python $APM/run_oa.py ./advi --collect refinement --markup dpcpp --config ./config.toml -- myApplication

Product and Performance Information

Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.