Run Vectorization and Code Insights Perspective from Command Line

The Vectorization and Code Insights perspective includes several analyses that you can run depending on the desired result. The main analysis is the Survey, which collects performance data for loops and functions in your application and identifies under-vectorized and non-vectorized loops/functions. The Survey analysis is enough to get basic insights into your application performance.

Prerequisites

Set Intel Advisor environment variables with an automated script. The script enables the advisor command line interface (CLI).
In the commands below, the options in square brackets ([--<option>]) are recommended if you want to change what data is collected.
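For example, on Linux you can source the oneAPI setup script before running the commands below. The path shown is a typical default installation location and may differ on your system:
# Set up the oneAPI environment, which includes the Intel Advisor CLI
# (default Linux install location assumed; adjust the path to your installation)
source /opt/intel/oneapi/setvars.sh
# Verify that the advisor CLI is available
advisor --version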

Run Vectorization and Code Insights Perspective

  1. Run the Survey analysis:
    advisor --collect=survey --project-dir=<project-dir> -- <target-application> [<target-options>]
    The Survey analysis collects useful data about your application performance and loop/function vectorization. Explore the Survey results to understand if you need to run other analyses.
  2. Run the Characterization analysis to collect trip counts and/or FLOP data:
    advisor --collect=tripcounts [--flop] [--stacks] [--enable-cache-simulation] --project-dir=<project-dir> -- <target-application> [<target-options>]
    where:
    • --flop is an option to collect data about floating-point and integer operations, memory traffic, and mask utilization metrics for AVX-512 platforms.
    • --stacks is an option to enable advanced collection of callstack data.
    • --enable-cache-simulation is an option to enable modeling of cache behavior.
  3. Optional: Mark up loops for the next analyses to decrease overhead:
    advisor --mark-up-loops --project-dir=<project-dir> --select=<string> -- <target-application> [<target-options>]
    where --select=<string> is an option to select loops for the analysis by loop IDs, source locations, or criteria such as scalar, has-issue, or markup=<markup-mode>.
    For details about markup, see Loop Markup to Minimize Analysis Overhead.
    Run this command if you want to run the Memory Access Patterns and Dependencies analyses for the same set of loops. Otherwise, you can skip this step and use the --select option in the analysis-specific commands. A combined sketch is shown after this list.
  4. Optional: Run the Memory Access Patterns analysis to collect memory traffic data and identify memory usage issues that can slow down loop vectorization:
    advisor --collect=map --project-dir=<project-dir> [--select=<string>] [--enable-cache-simulation] -- <target-application> [<target-options>]
    where:
    • --select=<string> is an option to select loops for the analysis by loop IDs, source locations, or criteria such as scalar, has-issue, or markup=<markup-mode>. For example, use --select=has-issue to analyze loops that have the Possible Inefficient Memory Access Pattern issue. Use this option if you did not run the --mark-up-loops command or want to analyze other loops.
    • --enable-cache-simulation is an option to enable modeling of more accurate memory footprints, cache miss information, and cache line utilization.
  5. Optional: Run the Dependencies analysis to check for loop-carried dependencies that may prevent vectorizing the code:
    advisor --collect=dependencies --project-dir=<project-dir> [--select=<string>] [--filter-reductions] -- <target-application> [<target-options>]
    where:
    • --select=<string> is an option to select loops for the analysis by loop IDs, source locations, or criteria such as scalar, has-issue, or markup=<markup-mode>. For example, use --select=has-issue to analyze loops that have the Vector Dependence Prevents Vectorization issue. Use this option if you did not run the --mark-up-loops command or want to analyze other loops.
    • --filter-reductions is an option to mark all potential reductions with a specific diagnostic.
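For instance, a minimal sketch of this flow marks up loops with detected issues once and then reuses that selection for both follow-up analyses. The project directory ./advi and the application name myApplication are placeholders, as in the example below:
# Mark up loops with detected issues so that later analyses reuse the same selection
advisor --mark-up-loops --project-dir=./advi --select=has-issue -- myApplication
# Memory Access Patterns and Dependencies then run on the marked-up loops only
advisor --collect=map --project-dir=./advi --enable-cache-simulation -- myApplication
advisor --collect=dependencies --project-dir=./advi --filter-reductions -- myApplication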
Example
Run the Survey analysis, run the Characterization analysis to collect trip count and FLOP metrics, and analyze memory access patterns for loops/functions with the Possible Inefficient Memory Access Pattern issue:
advisor --collect=survey --project-dir=./advi -- myApplication
advisor --collect=tripcounts --project-dir=./advi --flop --stacks --enable-cache-simulation -- myApplication
advisor --collect=map --project-dir=./advi --select=has-issue -- myApplication

View the Results

Intel Advisor provides several ways to view the Vectorization and Code Insights results.
View Result in CLI
You can print the results collected in the CLI and save them to a .txt, .csv, or .xml file.
Run the following command:
advisor --report=<analysis-type> --project-dir=<project-dir> --format=<format>
where:
  • <analysis-type> is the analysis you want to generate the results for. For example, survey for the Survey report, top-down for the Survey report in a top-down view, map for the Memory Access Patterns report, or dependencies for the Dependencies report.
  • --format=<format> is a file format to save the results to. <format> is text (default), csv, or xml.
For example, to generate the Survey report:
advisor --report=survey --project-dir=./advi
You should see a similar result:
ID | Function Call Sites and Loops | Total Time | Self Time | Type | Why No Vectorization | Vector ISA | Compiler Estimated Gain | Average Trip Count | Min Trip Count | Max Trip Count | Call Count | Transformations | Source Location | Module
14 | [loop in main at mmult_serial.cpp:79] | 0.495s | 0.495s | Vectorized Versions 1 | vectorization possible but seems inefficient... | SSE2 | <2.42x | 127; 127; 1; 7 | 127; 127; 1; 7 | 128; 128; 1; 7 | 524252; 524324; 530432; 530432 | Interchanged; Unrolled | mmult_serial.cpp:79 | 1_mmult_serial.exe
6 | -[loop in main at mmult_serial.cpp:79] | 0.275s | 0.275s | Vectorized (Body) | | SSE2 | 2.42x | 127 | 127 | 128 | 524252 | Unrolled; Interchanged | mmult_serial.cpp:79 | 1_mmult_serial.exe
3 | -[loop in main at mmult_serial.cpp:79] | 0.205s | 0.205s | Vectorized (Body) | | SSE2 | 2.42x | 127 | 127 | 128 | 524324 | Unrolled; Interchanged | mmult_serial.cpp:79 | 1_mmult_serial.exe
7 | -[loop in main at mmult_serial.cpp:79] | 0.015s | 0.015s | Peeled | | | | 1 | 1 | 1 | 530432 | Interchanged | mmult_serial.cpp:79 | 1_mmult_serial.exe
11 | -[loop in main at mmult_serial.cpp:79] | 0s | 0s | Remainder | vectorization possible but seems inefficient... | | | 7 | 7 | 7 | 530432 | Interchanged | mmult_serial.cpp:79 | 1_mmult_serial.exe
4 | [loop in main at mmult_serial.cpp:79] | 0.510s | 0.015s | Scalar | inner loop was already vectorized | | | 1024 | 1024 | 1024 | 1024 | Interchanged | mmult_serial.cpp:79 | 1_mmult_serial.exe
12 | [loop in main at mmult_serial.cpp:79] | 0.510s | 0s | Scalar Versions 1 | inner loop was already vectorized | | | 1024 | 1024 | 1024 | 1 | | mmult_serial.cpp:79 | 1_mmult_serial.exe
5 | -[loop in main at mmult_serial.cpp:79] | 0.510s | 0s | Scalar | inner loop was already vectorized | | | 1024 | 1024 | 1024 | 1 | | mmult_serial.cpp:79 | 1_mmult_serial.exe
The result is also saved into a text file advisor-survey.txt located at <project-dir>/eNNN/hsNNN.
You can also generate a report with the data from all analyses run and save it to a CSV file with the --report=joined action as follows:
advisor --report=joined --report-output=<path-to-csv>
where --report-output=<path-to-csv> is a path and a name for a .csv file to save the report to. For example, /home/report.csv. This option is required to generate a joined report.
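For example, the following commands save the Survey report to a .csv file and generate a joined report at the path from the description above (the project directory ./advi is a placeholder):
# Save the Survey report in CSV format instead of the default text format
advisor --report=survey --project-dir=./advi --format=csv
# Combine the data from all analyses run into a single .csv file
advisor --report=joined --report-output=/home/report.csv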
See CPU and Memory Metrics for more information about the metrics reported.
View Result in GUI
When you run Intel Advisor CLI, a project is created automatically in the directory specified with --project-dir. All the collected results and analysis configurations are stored in the .advixeproj project, which you can view in the Intel Advisor GUI.
To open the project in GUI, you can run the following command:
advisor-gui <project-dir>
If the report does not open, click Show Result on the Welcome pane.
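For example, with the placeholder project directory used in the examples above:
advisor-gui ./advi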
You first see a Vectorization Summary report that includes overall information about vectorized and non-vectorized loops/functions in your code and the vectorization efficiency, including:
  • Performance metrics of your program and the speedup for the vectorized loops/functions
  • Top five time-consuming loops and top five optimization recommendations with the highest confidence
Vectorization summary report
Save a Read-only Snapshot
A snapshot is a read-only copy of a project result, which you can view at any time using the Intel Advisor GUI. To save an active project result as a read-only snapshot:
advisor --snapshot --project-dir=<project-dir> [--cache-sources] [--cache-binaries] -- <snapshot-path>
where:
  • --cache-sources is an option to add application source code to the snapshot.
  • --cache-binaries is an option to add application binaries to the snapshot.
  • <snapshot-path> is a path and a name for the snapshot. For example, if you specify /tmp/new_snapshot, a snapshot is saved in the /tmp directory as new_snapshot.advixeexpz. You can skip this and save the snapshot to the current directory as snapshotXXX.advixeexpz.
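For instance, a snapshot that packs both sources and binaries, using the example path above (the project directory ./advi is a placeholder):
# Pack the active result, sources, and binaries into /tmp/new_snapshot.advixeexpz
advisor --snapshot --project-dir=./advi --cache-sources --cache-binaries -- /tmp/new_snapshot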
To open the result snapshot in the Intel Advisor GUI, you can run the following command:
advisor-gui <snapshot-path>
You can visually compare the saved snapshot against the current active result or other snapshot results.

Next Steps

Continue to Find Loops that Benefit from Better Vectorization to understand the results. For details about the metrics reported, see CPU and Memory Metrics.
