User Guide

collect.py Options

Depending on the options specified, the collect.py script collects basic data, marks up loops, and collects refinement data. By default, it executes all of these steps. For any step other than markup, you must specify the target application as an argument.

Usage

advisor-python <APM>/collect.py <project-dir> [--options] -- <target> [target-options]

Replace <APM> with $APM on Linux* OS or %APM% on Windows* OS.
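For example, on Windows* OS an invocation might look like the following (a sketch: myApplication and the .\advi project directory are placeholders, and the backslash path separator is an assumption based on Windows conventions):
  advisor-python %APM%\collect.py .\advi -- myApplication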

Options

The following options are available for the collect.py script. The target application to analyze, followed by its options (if any), must be preceded by two dashes and a space.

<project-dir>
  Required. Specify the path to the Intel® Advisor project directory.
-h, --help
  Show the help message and exit.
-v <verbose>, --verbose <verbose>
  Specify the output verbosity level:
    • 1 - Show only error messages. This is the least verbose level.
    • 2 - Show warning and error messages.
    • 3 (default) - Show information, warning, and error messages.
    • 4 - Show debug, information, warning, and error messages. This is the most verbose level.
  This option affects the console output and the debug log, but does not affect logs and report results.
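  For example, to see the most detailed console output during collection (a sketch; myApplication and ./advi are placeholders):
    advisor-python $APM/collect.py ./advi --verbose 4 -- myApplication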
-c {basic, refinement, full}, --collect {basic, refinement, full}
  Specify the type of data to collect for an application:
    • basic - Collect basic data (Hotspots and FLOPs).
    • refinement - Collect refined data (Dependencies) for marked loops only.
    • full (default) - Collect both basic data for the application and refined data for marked loops.
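  For example, to collect only the basic Hotspots and FLOPs data and skip the refinement step (a sketch; myApplication and ./advi are placeholders):
    advisor-python $APM/collect.py ./advi --collect basic -- myApplication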
--config <config>
  Specify a configuration file by absolute path or name. If you choose the latter, the model configuration directory is searched for the file first, then the current directory.
  You can specify several configurations by using the option more than once.
--data-reuse-analysis (default) | --no-data-reuse-analysis
  Estimate data reuse between offloaded regions. Disabling can decrease analysis overhead.
--data-transfer (default) | --no-data-transfer
  Enable data transfer analysis.
--dry-run
  Show the Intel® Advisor CLI commands for advisor appropriate for the specified configuration. No actual collection is performed.
--enable-slm
  Enable SLM modeling in the memory hierarchy model. Must be used with both collect.py and analyze.py.
--executable-of-interest <executable-name>
  Specify the executable process name to profile if it is not the same as the application to run. Use this option if you run your application via a script or another binary. Specify the name only, not the full path.
--jit | --no-profile-jit (default)
  Collect data for applications with DPC++, OpenMP* target, and OpenCL™ code on a base platform.
-m [{all, generic, regions, omp, dpcpp, daal, tbb}], --markup [{all, generic, regions, omp, dpcpp, daal, tbb}]
  Mark up loops after survey or other data collection. Use this option to limit the scope of further collections by selecting loops according to the provided parameter:
    • all - Get lists of loop IDs to pass as the option for further collections.
    • generic (default) - Mark up all regions and select the most profitable ones.
    • regions - Select already existing parallel regions.
    • omp - Select outermost loops in OpenMP* regions.
    • dpcpp - Select outermost loops in Data Parallel C++ (DPC++) regions.
    • daal - Select outermost loops in Intel® oneAPI Data Analytics Library regions.
    • tbb - Select outermost loops in Intel® oneAPI Threading Building Blocks (oneTBB) regions.
  The omp, dpcpp, and generic parameters select loops in the project so that the corresponding collection can be run without loop selection options.
  You can specify several parameters in a comma-separated list. Loops are selected if they fit any of the specified parameters.
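  For example, to mark up only the outermost loops in OpenMP* and oneTBB regions of data already collected in ./advi (a sketch; no target application is needed for the markup step):
    advisor-python $APM/collect.py ./advi --markup [omp,tbb]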
--model-system-calls (default) | --no-model-system-calls
  Analyze regions with system calls inside. The actual presence of system calls inside a region may reduce model accuracy.
--mpi-rank <mpi-rank>
  Specify an MPI rank to mark up results for analysis if multiple ranks are analyzed.
--no-cache-sources
  Disable keeping source code cache within a project.
--no-cachesim
  Disable cache simulation during collection. The model assumes a 100% cache hit rate. Using this option decreases analysis overhead.
--no-stacks
  Run data collection without collecting data distribution over stacks. You can use this option to reduce overhead at the potential expense of accuracy.
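  For example, to reduce collection overhead by disabling both cache simulation and stack data collection (a sketch that assumes the two options can be combined; myApplication and ./advi are placeholders):
    advisor-python $APM/collect.py ./advi --no-cachesim --no-stacks -- myApplication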
-o <output-dir>, --out-dir <output-dir>
  Specify the directory to put all generated files into. By default, results are saved in <advisor-project>/perf_models/mNNNN. If you specify an existing directory or an absolute path, results are saved in that directory. The new directory is created if it does not exist. If you only specify the directory <name>, results are stored in <advisor-project>/perf_models/<name>.
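  For example, to store generated files in <advisor-project>/perf_models/report1 (a sketch; report1 is a hypothetical directory name):
    advisor-python $APM/collect.py ./advi --out-dir report1 -- myApplication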
-p <output-name-prefix>, --out-name-prefix <output-name-prefix>
  Specify a string to be prepended to output result filenames.
--track-heap-objects (default) | --no-track-heap-objects
  Deprecated. Use --track-memory-objects.
  Attribute heap-allocated objects to the analyzed loops that accessed these objects. Enabling can increase collection overhead.
--track-memory-objects (default) | --no-track-memory-objects
  Attribute heap-allocated objects to the analyzed loops that accessed the objects. Disable to decrease analysis overhead.
--track-stack-accesses (default) | --no-track-stack-accesses
  Track accesses to stack memory.

Examples

  • Collect full data on myApplication with the default configuration and save the project to the ./advi directory.
    advisor-python $APM/collect.py ./advi -- myApplication
  • Collect refinement data for OpenMP* and DPC++ loops on myApplication with a custom configuration file config.toml and save the project to the ./advi directory.
    advisor-python $APM/collect.py ./advi --collect refinement --markup [omp,dpcpp] --config ./config.toml -- myApplication
  • Get commands appropriate for a custom configuration specified in the config.toml file to collect data separately with advisor. The commands are ready to copy and paste.
    advisor-python $APM/collect.py ./advi --dry-run --config ./config.toml
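  • Collect data when the binary of interest is started by a wrapper script rather than run directly (a hedged sketch: run_app.sh is a hypothetical launcher, and myApplication is assumed to be the process name of the binary it starts; pass the name only, not the full path).
    advisor-python $APM/collect.py ./advi --executable-of-interest myApplication -- ./run_app.sh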

Product and Performance Information

Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.