Interoperability Issues

Coexistence of different versions

An application cannot use more than one instance of the Intel® Threading Building Blocks (Intel® TBB) runtime library within a process. If application modules (including 3rd party libraries used by the application) were built with different versions of Intel TBB, the application should use a runtime library whose version is at least as high as the highest version any module was built with, and ensure that this library is the first Intel TBB runtime library loaded into the application process. Using multiple instances of the runtime library, or using a library version lower than required by any of the application modules, may result in errors at load time or at run time.
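
A minimal sketch of how a module can check at run time that the loaded runtime library is not older than the headers it was built with, using the interface-version query from classic Intel TBB (TBB_INTERFACE_VERSION and tbb::TBB_runtime_interface_version() in tbb/tbb_stddef.h); the check and its placement in main() are illustrative only:

    #include <tbb/tbb_stddef.h>
    #include <cstdio>

    int main() {
        // Interface version of the Intel TBB runtime library actually loaded.
        int runtime_version = tbb::TBB_runtime_interface_version();
        // TBB_INTERFACE_VERSION is the version of the headers this module was compiled with.
        if (runtime_version < TBB_INTERFACE_VERSION) {
            std::fprintf(stderr,
                "Loaded Intel TBB runtime (%d) is older than this module requires (%d)\n",
                runtime_version, TBB_INTERFACE_VERSION);
            return 1;
        }
        return 0;
    }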

Interoperability with OpenMP*

If your program calls Intel TBB and OpenMP* constructs one after the other at short intervals, and you use the Intel® C++ Compiler for your OpenMP code, set the KMP_BLOCKTIME environment variable to a small value (20 milliseconds or less) to improve performance. You can also make this setting within your program via the kmp_set_blocktime() library call.
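
For illustration, a minimal sketch of setting the block time programmatically before alternating OpenMP and Intel TBB work; it assumes the Intel OpenMP runtime, whose omp.h declares the kmp_set_blocktime() extension, and it must be compiled with OpenMP support (for example, -qopenmp with the Intel C++ Compiler):

    #include <omp.h>
    #include <tbb/parallel_for.h>

    int main() {
        // Equivalent to setting KMP_BLOCKTIME=20 in the environment:
        // idle OpenMP worker threads sleep after 20 ms instead of spinning,
        // freeing the cores for the Intel TBB workers that run next.
        kmp_set_blocktime(20);

        #pragma omp parallel for
        for (int i = 0; i < 1000; ++i) { /* OpenMP work */ }

        tbb::parallel_for(0, 1000, [](int) { /* Intel TBB work shortly after */ });
        return 0;
    }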

If you use Intel TBB and OpenMP together in a program on Linux* OS, setting thread affinity with environment variables such as KMP_AFFINITY or OMP_PROC_BIND, or programmatically, before the Intel TBB task scheduler is initialized may limit the default number of worker threads used by Intel TBB. To resolve the issue, update to Intel C++ Compiler 16.0 Update 1 or later and Intel TBB 4.4 Update 2 or later. A sketch of how to observe the effect appears below.
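
One way to detect whether an affinity mask applied before initialization has limited Intel TBB is to compare the scheduler's default thread count with the number of cores you expect. A minimal sketch using the classic task_scheduler_init API (tbb/task_scheduler_init.h), which matches the Intel TBB 4.4 versions mentioned above:

    #include <tbb/task_scheduler_init.h>
    #include <cstdio>

    int main() {
        // Number of worker threads Intel TBB will create by default;
        // a value smaller than the machine's core count may indicate that
        // an OpenMP affinity setting has restricted the process affinity mask.
        int n = tbb::task_scheduler_init::default_num_threads();
        std::printf("Default number of Intel TBB threads: %d\n", n);
        return 0;
    }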

Other issues

If your application includes <tbb/enumerable_thread_specific.h> or <tbb/tbb.h> for offload execution on an Intel® Xeon Phi™ coprocessor, you may see link errors related to libpthread. To resolve this issue, use the following compiler option: /Qoffload-option,mic,compiler,"-pthread" on Windows* OS, or -qoffload-option,mic,compiler,"-pthread" on Linux* OS.

See the Intel C++ Compiler documentation for more details on the options, functions, and environment variables mentioned above.

For more complete information about compiler optimizations, see our Optimization Notice.