Find the FAQ for Your Software

One way to do this is to load your solution into Microsoft Visual Studio. In the Solution Explorer, right-click the solution name. Near the bottom of the menu, hover over Intel Parallel Studio XE, and then click Use Intel C++ Compiler.

Our compilers, libraries, and tools are generally not stand-alone products, but are integrated into the native environment. They can be used from within Microsoft Visual Studio (Windows), Eclipse C/C++ Development Tooling* (Linux), and Xcode* (macOS). You can also open a command prompt and invoke them using a command line. For more information, look for specific compiler, library, or tool instructions in the Getting Started Guide.

Profiling with Intel VTune Profiler requires no recompilation. However, we recommend that you profile an optimized build of your application (including symbols) to get the most complete and useful results. You may need to modify your release and build process by adding symbol information to the optimized build.

For source code to be visible, compile a release build with the debug information flag. For example, on Linux*, verify you are compiling with the -g flag.

You also need to set the location of your source files, binary files, and symbol files. To do this:

  1. Open or create a project, and then select Project Properties.
  2. In the Project Properties dialog box, select the Search Directories tab.
  3. From the menu, select All files, and then specify the directory where your files exist. If you have any subdirectories, select the Search subdirectories check box.

On Linux, root access is required only to install the hardware collector driver; after installation, it is no longer needed. Depending on the install options selected, you may also need to be a member of the driver access group (the vtune group by default) to use the hardware collector. If the driver provided by Intel is not installed, Intel VTune Profiler uses the Linux perf driver instead, which provides some (but not all) of the same features. For more information, see Sampling Drivers.

To import results into Intel VTune Profiler, first create a project:

 

  1. Select File > New > Project. A dialog box appears.
  2. Enter a project name and then select OK. The Project Properties dialog box appears. You do not need to specify the application name if you do not plan to collect additional data.
  3. To view the source of the imported results, specify where your source and binaries are located:
    a. Select the Search Directories tab.
    b. From the menu, select All files, and then specify the directory where your files exist.
  4. To search recursively, select the Search Subdirectories check box.

The search directories are used during finalization, which normally occurs after data collection completes. For new search directory paths to take effect, Intel VTune Profiler must resolve your results again with the new information. To do this:

  1. Select the Analysis Type tab.
  2. On the far right (directly below Start and Project Properties) select Re-resolve.

Sometimes sample counts may be displayed on source lines that are not normally associated with executable code, such as the closing brace of a for or while loop. Although this may appear to be an error, it's a result of the instructions the compiler generates. Viewing the assembly can reveal which instructions were associated with specific source lines.

 

Other times, assembly instructions may show that certain hardware events were collected on instructions that could not possibly generate that event, such as a memory event on a jump instruction or an arithmetic event on a memory instruction. This is known as event skid and is a result of the processor being unable to stop executing some micro-operations before sampling the instruction pointer. As a result, the IP points at a subsequent instruction by the time the sample is taken. Typically, you can determine which instruction was responsible for the event by examining the instruction flow.


No. Intel® Media SDK is a foundation tool and is required when using these components on Windows.

No. Intel Media SDK is only used on systems with Intel® HD Graphics that support Intel® Quick Sync Video.

Yes, the compression standard in this SDK is optimized for video editing, transcoding, or video playback for streaming or video conferencing use models, where latency is a focus.

Currently there are no product differences between the commercial and open source version. We maintain one source base and build both versions from the same source base.

Intel® Threading Building Blocks is offered commercially for Windows*, Linux*, and macOS* customers who want additional support or who cannot use code licensed under the open source GNU General Public License v2.0 with the runtime exception.

For commercial products, we offer support through the Online Service Center.

For all other questions, visit the forum.

Are there analysis tools that understand the semantics of Intel® Threading Building Blocks (Intel® TBB)?

Yes. Applications threaded with Intel TBB can be analyzed with Intel® VTune™ Amplifier and Intel® Inspector. Intel® Advisor, available in any Intel® Parallel Studio XE product, can help find regions where performance can benefit the most from parallelism.

The Intel MPI Library supports only 64-bit apps on 64-bit operating systems on Intel® 64 architecture. For more details, see Deprecation Information.

The Intel MPI Library is known to run on AMD platforms but we do not validate functionality or performance on them.

The parallel file I/O (part of the MPI-2 standard) is fully implemented by the Intel MPI Library 5.0 or later. Some of the currently supported file systems include Unix file system* (UFS), Network File System (NFS), Parallel Virtual File System (PVFS), and Lustre*. For a complete list, see the release notes.
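As a minimal sketch of the MPI-2 parallel I/O interface described above (standard MPI calls; the file name, record format, and offsets are illustrative assumptions), each rank can write its own block of one shared file:

```c
#include <mpi.h>
#include <stdio.h>

/* Each rank writes a fixed-size 9-byte record at a rank-dependent
 * offset of one shared file, so no two ranks overlap. */
int main(int argc, char **argv) {
    int rank;
    char buf[16];
    MPI_File fh;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    snprintf(buf, sizeof buf, "rank %03d\n", rank);  /* exactly 9 chars */

    MPI_File_open(MPI_COMM_WORLD, "out.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    /* Non-overlapping offsets: rank r owns bytes [r*9, r*9+9). */
    MPI_File_write_at(fh, (MPI_Offset)rank * 9, buf, 9, MPI_CHAR,
                      MPI_STATUS_IGNORE);
    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}
```

Compiling and running this requires an MPI installation (for example, `mpicc` and `mpirun` from the Intel MPI Library).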

The Intel MPI Library supports one-sided communication for both active targets and passive targets. The only exception is the passive target one-sided communication where the target process does not call any MPI functions. Further support is available through the new one-sided calls and memory models in MPI-3.0.
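A minimal active-target sketch, using only standard MPI one-sided calls (the buffer contents and rank roles are illustrative): rank 0 puts a value into rank 1's window, with fences opening and closing the access epoch.

```c
#include <mpi.h>

/* Active-target one-sided communication: every rank exposes one int
 * through a window; rank 0 writes into rank 1's copy. Run with >= 2 ranks. */
int main(int argc, char **argv) {
    int rank, value = 42, target_buf = 0;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Win_create(&target_buf, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);
    MPI_Win_fence(0, win);               /* open the access epoch */
    if (rank == 0)
        MPI_Put(&value, 1, MPI_INT, /* target rank */ 1,
                /* target displacement */ 0, 1, MPI_INT, win);
    MPI_Win_fence(0, win);               /* close epoch; put is now complete */
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```

Because both sides call MPI_Win_fence, this is the active-target model; the passive-target model would use MPI_Win_lock/MPI_Win_unlock instead.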

The Intel MPI Library supports Intel compilers, as well as GNU* C and C++ compilers, GNU* Fortran 77 (3.3 or higher), and GNU* Fortran 95 (4.0 or higher). Additionally, the library provides a bundled source kit that supports PGI* C, PGI* Fortran 77, and Absoft* Fortran 77 compilers out of the box with the following caveats:

  • The source files that PGI compiles must not transfer long double entities.
  • The build procedure based on Absoft* must use the -g77 and -B108 compiler options.
  • You must install and select the right compilers.
  • You must install the respective compiler runtime on all nodes.

You may need to build extra binding libraries if you need support for PGI* C++, PGI* Fortran 95, and Absoft* Fortran 95 bindings. This additional binding kit is shipped with the full installation of the Intel MPI Library.

The Intel MPI Library supports OpenPBS*, PBS Pro*, Torque, LSF*, Parallelnavi*, NetBatch*, SLURM*, SGE*, LoadLeveler*, and Lava* batch schedulers. The simplified job startup command mpirun recognizes when it is run inside a session started by any PBS-compatible resource manager (like OpenPBS*, PBS Pro*, Torque*), as well as LSF*. See the reference manual (available in Get Started) for a description of this command.

The Intel MPI Library supports mixed MPI and OpenMP* applications. To use them, ensure that the thread-safe version of the library is linked to your application. For more information on running these applications, see our developer guides for Linux and Windows.
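A minimal hybrid sketch (standard MPI and OpenMP calls; the output format is illustrative): only the main thread makes MPI calls, so MPI_THREAD_FUNNELED is sufficient here, and the application should be linked against the thread-safe library (for example, with the -mt_mpi option mentioned below).

```c
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

/* Hybrid MPI + OpenMP: one MPI rank per node, several OpenMP threads
 * inside each rank. Only the main thread touches MPI. */
int main(int argc, char **argv) {
    int provided, rank;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    #pragma omp parallel
    {
        /* Each OpenMP thread reports itself; no MPI calls in here. */
        printf("rank %d, thread %d\n", rank, omp_get_thread_num());
    }

    MPI_Finalize();
    return 0;
}
```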

The MPI standard does not yet define proper handling of aborted MPI ranks. By default, the Intel MPI Library stops the entire application if any process exits abnormally. This behavior can be overridden via a run-time option where the library allows an application to continue execution even if one process stops responding. For details and application requirements, see the reference manual.

The Intel MPI Library includes thread-safe libraries at the level MPI_THREAD_MULTIPLE. Several threads can make library calls simultaneously. Use the compiler driver -mt_mpi option to link to the thread-safe version. Use the thread-safe libraries if you request the thread support at the following levels:

MPI_THREAD_FUNNELED

MPI_THREAD_SERIALIZED

MPI_THREAD_MULTIPLE

Starting with Intel MPI Library 5.0, the thread-safe version of the library is linked by default at level MPI_THREAD_FUNNELED. You can change this in your application through the MPI_Init_thread() call.
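For example, an application that needs concurrent MPI calls from several threads can request the highest level and check what was actually granted, since the standard allows `provided` to be lower than the request:

```c
#include <mpi.h>
#include <stdio.h>

/* Request full multithreaded support and verify the granted level. */
int main(int argc, char **argv) {
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE)
        fprintf(stderr, "warning: only thread level %d available\n", provided);
    MPI_Finalize();
    return 0;
}
```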

The Intel MPI Library supports clusters running different operating system versions or distributions, or an environment of mixed Intel® processors. It provides default optimizations depending on the detected architecture.


Yes. Unlike the Threading Advisor, the Vectorization Advisor works out of the box on both serial and multithreaded binaries.

No. Unlike the Threading Advisor, the Vectorization Advisor does not require source code modification to perform analyses. You can select loops for deeper analysis in the Survey Report. Note: Save time by annotating your code when you run dependencies or memory access patterns analyses on long-running applications. This allows you to skip the survey collection step.

Survey analysis is not intrusive and does not significantly slow down the application execution. However, the dependencies, memory access patterns (MAP), and other analysis types have significant overhead. There are several ways to mitigate application execution slowdown:

  • Decrease the workload. How you do that depends on your application. For example: provide less data to process, decrease computation complexity, and so on.
  • Use separate Project Properties settings for the survey and other analysis types. By default, it’s enough to configure only the survey analysis settings, but if you can control the workload via command line parameters, you can keep separate command line settings for different analysis types.
  • Decrease the number of selected loops for additional analyses.
  • Watch the Refinement Reports window while the analysis runs. As soon as you see the data you need, in VECTORIZATION WORKFLOW, click Stop.

The Vectorization Advisor has a complex structure of result versions. All the analysis results are saved in an experiment folder named e000. The base analysis type is Survey. All other analysis types depend on the Survey analysis result, but don’t depend on each other. Different analysis types are matched by an address in the target application binary. That means when you select loops in the survey report for deeper analysis (dependencies or MAP), the loops are identified by the address in the binary. Changing the binary (rebuilding) between running the survey, dependencies, and MAP analyses breaks this connection, producing incorrect results. The same applies to trip counts analysis. So if a binary is changed, rerun the survey analysis before running other analysis types.


No. Unlike the Vectorization Advisor, which works out of the box for both serial and multithreaded binaries, the Threading Advisor—specifically the suitability analysis and sometimes the dependencies analysis—works only on serial binaries. Note: You can convert a multithreaded OpenMP* binary to a serial binary by recompiling with the qopenmp-stubs option.
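For example, assuming the Intel compiler (file names here are illustrative), the rebuild is a one-flag change:

```shell
# -qopenmp-stubs links stub versions of the OpenMP runtime routines, so
# omp_* calls still resolve but the program executes serially, making
# the binary suitable for suitability/dependencies analysis.
icc -qopenmp-stubs -o myapp_serial main.c
```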

Yes. Annotations are a quick way for you to provide descriptions of a threading design to the Threading Advisor. They allow the feature to evaluate the design, forecast the performance gain, and highlight any synchronization issues. The good news is that the compiler ignores annotations and doesn't change the code behavior. So you can feel confident you’re not introducing threading errors or invalidating test cases as you explore alternative threading designs while continuing normal development and testing.
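A minimal sketch of such annotations, assuming the C/C++ annotation header shipped with Intel Advisor (the function, site, and task names are illustrative): a candidate parallel site wraps the loop, and each iteration is marked as a task.

```c
#include <advisor-annotate.h>   /* ships with Intel Advisor */

/* Describe a proposed threading design: the loop is a candidate
 * parallel site, each iteration a candidate task. The annotations do
 * not change program behavior; they only inform the Threading Advisor. */
void process(float *data, int n) {
    ANNOTATE_SITE_BEGIN(process_site);          /* proposed parallel region */
    for (int i = 0; i < n; i++) {
        ANNOTATE_ITERATION_TASK(process_task);  /* one task per iteration */
        data[i] *= 2.0f;
    }
    ANNOTATE_SITE_END();
}
```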

In the WORKFLOW pane, click Command Line to generate the command line for an analysis type and project settings. For details, see Command Line Interface Reference.

By default, this application stores only the most recent result of any given analysis type in a particular project. This means that rerunning the same analysis type as part of the same project completely replaces the appropriate data, with no ability to recover the old data. To manually save results in a read-only folder that you can browse any time, click Snapshot (the button with a camera image). New snapshots do not overwrite previous snapshots—access them with the Project Navigator.