Intel® Trace Analyzer and Collector Support

Frequently Asked Questions

What file and directory permissions are required to use Intel® Trace Analyzer and Collector?
You do not need to install special drivers or kernels, or acquire extra permissions. Simply install Intel Trace Analyzer and Collector in your $HOME directory and link it with your application of choice from there.
Should I recompile or relink my application to collect information?
It depends on your platform and how your application is linked. For Windows*, you have to relink your application using the -trace link-time flag.

For Linux* (and if your application is dynamically linked), you do not need to relink or recompile. Simply use the -trace option at runtime (for example: mpirun -trace).
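
As a minimal sketch of the Linux workflow (myapp and the rank count are placeholders, and output file naming may vary by version), a dynamically linked run could look like this:

  $ mpiicc -o myapp myapp.c        # build as usual with the Intel MPI compiler wrapper
  $ mpirun -trace -n 4 ./myapp     # run with tracing enabled; the trace is typically written as myapp.stf plus companion files
  $ ls myapp.stf*                  # trace files ready to open in Intel Trace Analyzer

If you prefer, the same -trace flag can instead be passed at link time (for example: mpiicc -trace -o myapp myapp.c), which links the trace collector libraries into the binary.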
How do I control which part of my application should be profiled?
The Intel® Trace Collector provides several options to control data collection. By default, only information about MPI calls is collected. If you'd like to filter which MPI calls are traced, create a configuration file and point the VT_CONFIG environment variable to it.
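
As an illustrative sketch (the file name is arbitrary, and the exact filter directives and their precedence rules are described in the Intel Trace Collector Reference Guide), this might look like:

  $ cat vt_filter.conf
  # illustrative filter: switch off most MPI tracing, keep the calls of interest
  SYMBOL MPI_* OFF
  SYMBOL MPI_Allreduce ON
  $ export VT_CONFIG=$PWD/vt_filter.conf
  $ mpirun -trace -n 4 ./myapp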

If you'd like to expand the information collected beyond MPI and include all user-level routines, recompile your application with the -tcollect switch available as part of the Intel® compilers. In this case, Intel Trace Collector will gather information about all routines in the application, not just MPI. You can similarly filter this via the -tcollect-filter compiler option.
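
For example (my_filter.txt is a hypothetical filter file; its format is described in the compiler documentation), the build step might become:

  $ mpiicc -tcollect -o myapp myapp.c                         # instrument every user-level routine
  $ mpiicc -tcollect-filter my_filter.txt -o myapp myapp.c    # or instrument only the routines selected by a filter file
  $ mpirun -trace -n 4 ./myapp                                # collect the trace as usual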

If you'd like to be explicit about which parts of the code should be profiled, use the Intel Trace Collector API calls. You can manually turn tracing on and off via a quick API call.

For more information on all of these methods, refer to the Intel Trace Collector Reference Guide accessible from the Documentation page.
What file format is the trace data collected in?
Intel® Trace Collector stores all collected data in Structured Tracefile Format (STF) which allows for better scalability across both time and processes. For more details, refer to the "Structured Tracefile Format" section of the Intel Trace Collector Reference Guide, accessible from the Documentation page.
Can I import or export trace data to and from Intel Trace Analyzer and Collector?
Yes, you can export the data from any of the profile charts (Function Profile, Message Profile, and Collective Operations Profile) in the Intel Trace Analyzer interface. To do this, open one of these profiles in the GUI, right-click the chart, and select Export Data from the context menu. The data is saved in plain text format for easy reading.

Separately, you can save your current Intel Trace Analyzer working environment via the Project menu. If you choose to save the project, your currently open trace view and associated charts will be recorded as they appear on your screen. You can later load the project from the same menu to bring up the previously saved session.
How can I control the amount of data collected to a reasonable amount? What is a reasonable amount?
Each application is different in terms of the profiling data it can provide. The longer an application runs, and the more MPI calls it makes, the larger the .stf files will be. You can filter out unnecessary information by applying appropriate filters (check out some tips on Intel Trace Collector Filtering).

Additionally, you may be restricted by the resources allocated to your account; consult your cluster administrators about quotas and recommendations.
How can I analyze the collected information?
Once you have collected the trace data, you can analyze it via the graphical interface called the Intel® Trace Analyzer. Simply run the command ($ traceanalyzer) or double-click the Intel Trace Analyzer application, and then use the File menu to navigate to your .stf files.
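
You can also pass the tracefile directly on the command line; for example (myapp.stf is a placeholder name):

  $ traceanalyzer ./myapp.stf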

You can get started by opening up the Event Timeline chart (under the Charts Menu) and zooming in at an appropriate level.

Check out the Detecting and Removing Unnecessary Serialization tutorial for ideas on how to get started. For complete details on functionality, refer to the Intel Trace Analyzer Reference Guide.
Why would I use the Application Performance Snapshot (APS) and not the Intel Trace Analyzer and Collector for MPI profiling?
They are complementary. APS was designed as a quick and scalable profiler that gives an experienced or new developer a fast, lightweight way to understand how the MPI application is running and whether there are opportunities for further tuning and analysis. Once the key metrics are understood, and based on the recommendations given by APS, a developer can explore deeper using the Intel Trace Analyzer and Collector (or Intel® VTune™ Amplifier) if required.
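
As a rough sketch of that workflow (assuming the APS tool is set up in your environment; the command syntax and result-directory naming may differ by version, and myapp is a placeholder):

  $ mpirun -n 32 aps ./myapp              # lightweight snapshot run; writes an APS result directory
  $ aps --report=<result directory>       # generate the summary report with key metrics and recommendations
  $ mpirun -trace -n 32 ./myapp           # if deeper analysis is warranted, collect a full ITAC trace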
What metrics are captured by APS?
Hardware counters and memory use data such as:
  • Wall clock (minimum and maximum values with ranks and average values)
  • Hardware counters (floating-point operations per second [FLOPS], % FP instructions, % vector instructions, and % memory access instructions)
  • Memory usage statistics (minimum and maximum values with ranks and average values)
  • MPI imbalance
  • OpenMP* statistics (regions and imbalance)
What is the overhead of APS?
Since APS is meant to run on production codes, the overhead is kept low: approximately a five percent impact on the performance of the MPI application.
What is the scalability of APS?
APS is currently tested to 32,000 ranks, with plans to increase scalability in future releases.
I am worried about the size of the captured data and what storage I will need for APS. On average, what is the size of the captured data for APS?
For a 1K-rank profiling run, approximately 0.8 GB of trace data is captured. This is 20 times smaller than the trace data for a comparable run using the Intel Trace Analyzer and Collector.
I am familiar with Allinea Performance Reports* (APR). How does APS compare to APR?
APS shares similarities with APR, as they are both lightweight MPI profilers. Just as APR complements Allinea Forge* and Allinea DDT*, APS complements Intel Trace Analyzer and Collector and Intel® VTune™ Amplifier. Allinea* is a premier provider of distributed debugging solutions and part of the IA ecosystem of tool providers.