User Guide


HOW: Analysis Types

Intel® VTune™ Profiler
provides a set of pre-configured analysis types you can start with to address your particular performance optimization goals.
When you create a project, VTune Profiler opens the
Configure Analysis
window, which prompts you to specify WHAT you want to analyze (an application, a process, or a whole system), WHERE you plan to run the analysis (the target system), and HOW you want to run the analysis (the analysis type).
Configure Analysis: Analysis Type
Click the header in the HOW pane to open the analysis tree. Select an analysis type from one of these groups:
Performance Snapshot
  • Use Performance Snapshot to get an overview of issues that affect the performance of an application on your system. The analysis is a good starting point that recommends areas for deeper focus. You also get guidance on other analysis types to consider running next.
  • Use the Hotspots analysis type to investigate call paths and find where your code is spending the most time. Identify opportunities to tune your algorithms. See the Finding Hotspots tutorial: Linux | Windows.
  • Use Anomaly Detection (preview) to identify performance anomalies in frequently recurring intervals of code like loop iterations. Perform fine-grained analysis at the microsecond level.
  • Memory Consumption is best for analyzing memory consumption by your app, its distinct memory objects, and their allocation stacks. This analysis is supported for Linux targets only.
  • Microarchitecture Exploration (formerly known as General Exploration) is best for identifying the CPU pipeline stage (front-end, back-end, and so on) and hardware units responsible for your hardware bottlenecks.
  • Memory Access is best for memory-bound apps to determine which level of the memory hierarchy is impacting your performance by reviewing CPU cache and main memory usage, including possible NUMA issues.
  • Threading is best for visualizing thread parallelism on available cores, locating causes of low concurrency, and identifying serial bottlenecks in your code.
  • Use HPC Performance Characterization to understand how your compute-intensive application is using the CPU, memory, and floating point unit (FPU) resources. See the Analyzing an OpenMP* and MPI Application tutorial: Linux.
  • Input and Output analysis monitors utilization of the I/O subsystems, CPU, and processor buses.
  • GPU Offload (preview) is targeted for applications using a Graphics Processing Unit (GPU) for rendering, video processing, and computations. It helps you identify whether your application is CPU or GPU bound.
  • GPU Compute/Media Hotspots (preview) is targeted for GPU-bound applications and helps analyze GPU kernel execution per code line and identify performance issues caused by memory latency or inefficient kernel algorithms.
  • CPU/FPGA Interaction analysis explores FPGA utilization for each FPGA accelerator and identifies the most time-consuming FPGA computing tasks.
  • System Overview is a driverless, event-based sampling analysis that monitors the general behavior of your target system and identifies platform-level factors that limit performance.
  • Throttling analysis is useful to identify performance issues that result from the CPU operating at temperatures above thermal and power limits.
  • Platform Profiler analysis collects data on a deployed system running a full load over an extended period of time, with insights into overall system configuration, performance, and behavior. The collection is run from a command prompt outside of VTune Profiler, and results are viewed in a web browser.
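Most of the analysis types above can also be launched from the command line. A minimal sketch, assuming the `vtune` command-line tool is on your PATH and `./myapp` is a placeholder for your application binary:

```shell
# Collect a Performance Snapshot as a starting point
# (./myapp is a placeholder for your application)
vtune -collect performance-snapshot -result-dir r000ps -- ./myapp

# Run a Hotspots analysis, then print its summary report
vtune -collect hotspots -result-dir r001hs -- ./myapp
vtune -report summary -result-dir r001hs
```

Each `-collect` value corresponds to an analysis type in the tree; the `-result-dir` names here are arbitrary examples.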
A preview feature may or may not appear in a future production release. It is available for your use in the hope that you will provide feedback on its usefulness and help determine its future. Data collected with a preview feature is not guaranteed to be backward compatible with future releases.
Advanced users can create a custom analysis using the data collectors provided by VTune Profiler, or by combining a VTune Profiler collector with another custom collector.
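As a hedged sketch of the custom-analysis workflow, the command below assumes the `vtune` CLI is installed and uses the `runsa` event-based sampling collector with the `event-config` knob; the event names shown are illustrative and vary by CPU model:

```shell
# Custom event-based sampling collection; event names vary by CPU model
# (./myapp is a placeholder for your application)
vtune -collect-with runsa \
      -knob event-config=CPU_CLK_UNHALTED.REF_TSC,INST_RETIRED.ANY \
      -result-dir r002custom -- ./myapp
```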
