Intel® C and C++ Compilers

Leadership application performance

  • Rich set of components to efficiently implement higher-level, task-based parallelism
  • Future-proof applications to tap multicore and many-core power
  • Compatible with multiple compilers and portable to various operating systems

Performance without compromise

  • Industry-leading performance on Intel and compatible processors
  • Extensive optimizations for the latest Intel processors, including the Intel® Xeon Phi™ coprocessor
  • Scale forward with support for multicore, many-core, and multiprocessor systems via OpenMP, automatic parallelization, and Intel Xeon Phi coprocessor support
  • Patented automatic CPU dispatch feature runs code optimized for the specific processor identified at application runtime
  • Intel® Performance Guide provides suggestions for improving performance in your Windows* applications

Broad support for current and previous C and C++ standards, plus popular extensions

  • Full C++11 and most C99 language support
  • Extensive OpenMP* 4.0 support

Faster, more scalable applications with advanced parallel models and libraries

Intel provides a variety of scalable, easy-to-use parallel models. These highly abstracted models and libraries simplify adding both task and vector parallelism. The result is faster, more scalable applications running on multicore and many-core architectures.

Intel® Cilk™ Plus (included with Intel C++ compiler)

  • Simplifies adding parallelism for performance with only three keywords
  • Scale for the future with a runtime system that operates smoothly on systems with hundreds of cores
  • Vectorized and threaded for the highest performance on all Intel and compatible processors
  • Sample code, contributed libraries, open specifications, and other information are available from the Cilk Plus community
  • Included with the Intel C++ compiler; also available in the GCC 4.9 development branch (with -fcilkplus) and in a Clang*/LLVM* project, with the caveat that cilk_for is not yet supported there
  • More information
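The three keywords above are `cilk_spawn`, `cilk_sync`, and `cilk_for`. A minimal sketch (illustrative only; it requires a Cilk Plus-enabled compiler such as icc or the GCC 4.9 branch with -fcilkplus, and will not build with a standard compiler):

```c
#include <cilk/cilk.h>   /* Cilk Plus keywords; assumes a Cilk-enabled compiler */

long fib(long n) {
    if (n < 2) return n;
    long x = cilk_spawn fib(n - 1);  /* keyword 1: run the call in parallel */
    long y = fib(n - 2);             /* continue in the current strand */
    cilk_sync;                       /* keyword 2: wait for spawned work */
    return x + y;
}

int main(void) {
    long r[8];
    cilk_for (long i = 0; i < 8; ++i)  /* keyword 3: parallel loop */
        r[i] = fib(i + 20);
    return (int)(r[0] & 1);
}
```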

OpenMP 4.0 (included with Intel C++ compiler)

  • Support for most of the new features in the OpenMP* 4.0 API Specification (user-defined reductions not yet supported)
  • Support for C, C++, and Fortran OpenMP programs on Windows*, Linux*, and OS X*
  • Complete support for industry-standard OpenMP pragmas and directives in the OpenMP 3.1 API Specification
  • Intel-specific extensions to optimize performance and verify intended functionality
  • Intel compiler OpenMP libraries are object-level compatible with Microsoft Visual C++* on Windows and GCC on Linux*

Intel® Math Kernel Library

  • Vectorized and threaded for highest performance using de facto standard APIs for simple code integration
  • Compatible with C, C++, and Fortran compilers, with royalty-free licensing for low-cost deployment
  • More information

Intel® Integrated Performance Primitives

  • Performance: Pre-optimized building blocks for compute-intensive tasks
  • A consistent set of APIs that support multiple operating systems and architectures
    • Windows*, Linux*, Android*, and OS X*
    • Intel® Quark™, Intel® Atom™, Intel® Core™, Intel® Xeon®, and Intel® Xeon Phi™ processors
  • More information

Intel® Threading Building Blocks

  • Rich set of components to efficiently implement higher-level, task-based parallelism
  • Compatible with multiple compilers and portable to various operating systems
  • More information

Intel® Media SDK 2014 for Clients

  • A cross-platform API for developing consumer and professional media applications.
  • Intel® Quick Sync Video: Hardware-accelerated video encoding, decoding, and transcoding.
  • Development Efficiency: Code once now and see it work on tomorrow's platforms.
  • More information

A drop-in addition for C and C++ development

  • Windows*
    • Develop, build, debug and run from the familiar Visual Studio IDE
    • Works with Microsoft Visual Studio* 2008, 2010, 2012 and 2013
    • Source and binary compatible with Visual C++*
  • Linux*
    • Develop, build, debug and run using Eclipse* IDE interface or command line
    • Source and binary compatible with GCC
  • OS X*
    • Develop, build, debug and run from the familiar Xcode* IDE
    • Works with Xcode 4.6, 5.0 and 5.1
    • Source and binary compatible with LLVM-GCC and Clang* tool chains
  • 32-bit and 64-bit development included

  1. Project and source in Visual Studio
  2. C/C++ aware text editor
  3. Debug C/C++ code
  4. Call Stack information
  5. Set breakpoints at source lines in the IDE

Outstanding support

One year of support is included with purchase, giving you access to all product updates and new versions released in the support period, plus access to Intel® Premier Support. There is also a very active user forum where you can get help from experienced users and Intel engineers.

  • Videos on Getting Started with Intel® C++ Compiler
  • Vectorization Essentials
  • Performance Essentials with OpenMP 4.0 Vectorization
  • View slides

Register for future Webinars

Previously recorded Webinars:

  • Update Now: What’s New in Intel® Compilers and Libraries
  • Performance essentials using OpenMP* 4.0 vectorization with C/C++
  • Intel® Cilk™ Plus Array Notation - Technology and Case Study Beta
  • OpenMP 4.0 for SIMD and Affinity Features with Intel® Xeon® Processors and Intel® Xeon Phi™ Coprocessor
  • Introduction to Vectorization using Intel® Cilk™ Plus Extensions
  • Optimizing and Compilation for Intel® Xeon Phi™ Coprocessor

Featured Articles

No content was found

More Tech Articles

Getting Started with Intel Compiler Pragmas and Directives
By AmandaS (Intel), published 11/25/2013
Compiler Methodology for Intel® MIC Architecture Getting Started with Intel Compiler Pragmas and Directives Overview Compiler options allow a user to control how source files are interpreted and control characteristics of the object files or executables.  Compiler options are applied to an en...
Advanced Optimizations for Intel® MIC Architecture
By AmandaS (Intel), published 11/25/2013
Compiler Methodology for Intel® MIC Architecture Advanced Optimizations Overview This chapter details some of the advanced compiler optimizations for performance on Intel® MIC Architecture; most of these optimizations are also applicable to host applications. This chapter includes topics su...
Advanced Optimizations for Intel® MIC Architecture, Low Precision Optimizations
By AmandaS (Intel), published 11/25/2013
Compiler Methodology for Intel® MIC Architecture Advanced Optimizations for Intel® MIC Architecture, Low Precision Optimizations Overview The latest Intel Compilers (released after the 13.0.039 Beta Update 1 release) do not generate low-precision sequences unless low-precision options are adde...
OpenMP Related Tips
By AmandaS (Intel), published 11/25/2013
Compiler Methodology for Intel® MIC Architecture OpenMP Related Tips OpenMP* Loop Collapse Directive   Use the OpenMP collapse-clause to increase the total number of iterations that will be partitioned across the available number of OMP threads by reducing the granularity of work to be done...

Supplemental Documentation

No content was found

You can reply to any of the forum topics below by clicking on the title. Please do not include private information such as your email address or product serial number in your posts. If you need to share private information with an Intel employee, they can start a private thread for you.


induction variable elimination
By unclejoe
It seems induction variable elimination is a well-known compiler transformation, but I can't get ICC to do it, nor GCC. Here's my test program:

int main(int argc, char **argv) {
    typedef int64_t /*int32_t*/ LoopType;
    LoopType REPEATS = 1000000, N = atoi(argv[1]);
    int16_t *data = (int16_t *)alloca(N * sizeof(int16_t));
    for (int j = 0; j < REPEATS; ++j) {
        for (LoopType i = 0; i < N; i += (LoopType)8) {
            __m128i d = _mm_loadu_si128((__m128i *)&data[i]);
            d = _mm_add_epi16(d, d);
            _mm_storeu_si128((__m128i *)&data[i], d);
        }
    }
    return data[5];
}

Assembly code for the innermost loop:

..B1.4:                         # Preds ..B1.2 ..B1.4
        movdqu    (%rsi), %xmm0         #50.30
        incq      %rdi                  #53.5
        paddw     %xmm0, %xmm0          #56.11
        movd...
icc generates wrong instructions for MIC
By Lei Z.
Hi, I'm trying to compile some code for MIC, which uses the extended integer type __uint128_t. And icc gave me the following error messages:

warning #13376: XMM registers are not supported on target platform
error #13393: Opcode unsupported on target architecture: movsd

I wrote a snippet of sample code which reproduces the error:

#include <stdio.h>

int main() {
    double d;
    __uint128_t i = 0;
    d = i;
    printf("%f\n", d);
}

The code above can be compiled correctly by gcc, as well as by icc when targeting the x86 platform instead of MIC. It seems that icc was trying to generate some SSE instructions, which are not supported by MIC. This looks like a bug. I hope your dev team could solve this or give me a workaround. Thanks.
Linux: Code analysis tools and tool chains supporting Intel (parallel) Studio
By Sebastian L.
I'm planning a tool chain for a C++ project and looking for a good, affordable solution for static code analysis and code style checking. Our project uses Jenkins for continuous integration, Stash Git for version control, CppUnit for testing, and Artifactory for SW deployment. Everything is embedded into RedHat (RHEL 7). The target application is an MPI-parallelized C++ code. The problem: typical tools, e.g. cppcheck, can be configured to support Intel libraries (IPP / MKL / IMPI syntax), but it has to be done by hand and does not work properly. The Intel static code analyzer provided by the Intel C/C++ compiler icpc/icc can (of course) not be tailored to check code style or project-specific issues. My question: does anybody know a tool which can do both code analysis and style checking *and* is ready for use with Jenkins or other automated build chains? As an example: our Java experts use SonarQube, where a very expensive C++ tool chain can be purchased (several k$ per year). I've also ...
icc can't compile memmem
By Lei Z.
Hi, When compiling a source file that uses the function memmem (from the C standard library), icc fails and gives this error:

error #140: too many arguments in function call

Actually the use of memmem in the code is straightforward and correct. Both gcc and clang compile it with no problem. I wrote a piece of sample code:

#include <string.h>

const char *needle = "needle";
const char *hay = "needle in haystack";

int main() {
    void *pos = memmem(hay, strlen(hay), needle, strlen(needle));
}

You can try it and see if you have the same problem. My specs are: OS: Linux 2.6.32-358 (x86_64); icc: 14.0.2; glibc: 2.12. Do you have any ideas about this problem? Thanks in advance.
Inlining effect on Inside/Outside class definition
By velvia
Hi, If you have a method and you want to give the compiler a hint that it is a good idea to inline it, you currently have two solutions. The first one is to define the method where you declare your class:

class Vector {
private:
    double* data_;
    double* size_;
    double* capacity_;
public:
    double& operator[](int k) { return data_[k]; }
    ...
};

As this might reduce readability, another solution is to use the inline keyword and define the method out of class:

class Vector {
private:
    double* data_;
    double* size_;
    double* capacity_;
public:
    inline double& operator[](int k);
    ...
};

double& Vector::operator[](int k) { return data_[k]; }

This makes the code more readable (at least I prefer it). Reading my STL implementation, I found that they use a mix of the two. Some methods (those which I think should really be inlined) are defined in the class, and others are defined out of class with the inline keyword. The file also begi...
__FUNCTION__ is not treated as string literal for the purposes of string concatenation
By Scott Slack-smith
I'm trying to compile some MSC code that contains the following pragma:

void myFunc(int a, double b) {
    #pragma comment(linker, "/EXPORT:"__FUNCTION__"="__FUNCDNAME__",PRIVATE")
    ...
}

which is handy for defining a function alias for an exported function (it's a simpler alternative to creating an MSC .def file, which is difficult to maintain). However, the Intel compiler can't concatenate "/EXPORT" with __FUNCTION__. E.g. if you compile the following:

int _tmain(int argc, _TCHAR* argv[]) {
    #pragma message("/EXPORT:"__FUNCTION__"="__FUNCDNAME__",PRIVATE")
    #pragma message(__FUNCTION__"="__FUNCDNAME__",PRIVATE")
    return 0;
}

the output is:

1>------ Build started: Project: ConsoleApplication1 (Intel C++ 15.0), Configuration: Debug Win32 ------
1>  ConsoleApplication1.cpp
1>  /EXPORT:
1>  wmain=wmain,PRIVATE
========== Build: 1 succeeded, 0 failed, 0 up-to-date, 0 skipped ==========

I did see another forum item on the same topic, namely https://so...
Variadic constructor with enable_if not found? (C++11)
By Daniel V.
I'm having a bit of trouble with the following C++11 code fragment and icpc 15.0.0. This example is compiled correctly by g++ 4.9. I essentially have a templated vector class (with its scalar type T and size N being template arguments). The variadically templated constructor that checks whether all the arguments are assignable to T& is not found. When I remove the assignable check (which I cannot do, because otherwise an ambiguity occurs with the copy constructor when N=1), the example compiles. The manually expanded version for 2 arguments compiles. Any idea for a work-around?

// > icpc -v
// icpc version 15.0.0 (gcc version 4.9.0 compatibility)
// > icpc -std=c++11 i_dont_know_what_is_wrong_with_intel.cpp
// i_dont_know_what_is_wrong_with_intel.cpp(39): error: no instance of constructor "Dummy<T, N>::Dummy [with T=double, N=2U]" matches the argument list
//     argument types are: (double, double)
//     Dummy<double, 2> t{1.0, 2.0};
// ...
LNK1104 When trying to build on Windows network share
By Neil B.
Hi, I can't build when the output directory for my project is on a Windows network share. The build log contains the following:

1>LINK : error LNK1104: cannot open file '\psf\Hacking\merely\Client Names\Blotter\Debug\Utils.lib'
1>Done Building Project "\\psf\Hacking\merely\Client Names\Blotter\Utils\Utils.vcxproj" (Build target(s)) -- FAILED.

The LNK1104 error shows only one initial backslash in the output location, so it obviously can't write the output. If I change the output directory to a drive, the lib builds correctly. Other than mapping a drive, which for other reasons I'd rather not do, does anyone know of a workaround? Thanks, Neil


Data race problem
By Ömer Faruk Kalkan
Data race problems appear at P1 and P2 when I inspect the program. I don't have experience with Cilk Plus. I guess the error occurs in the nested loop. What should I do?

char *primary;
char *secondary;
primary = (char *) malloc(size * 15);
secondary = (char *) malloc(size);
cilk::reducer_opadd<long> number(0);
. . . .
cilk_for (int i = 0; i < size; ++i) {
    cilk_for (int j = 14; j >= 0; --j) {
        number += (primary[(14 - j) + (i * 15)] * (pow(3, j)));
    }
    if (secondary[number.get_value()] == 0) {
        secondary[number.get_value()] = 1;
    }
    number.set_value(0);
}
. . . .
cilk_spawn inside cilk_for
By Meir F.
Hi, For some reason, whenever I have a spawn and sync inside a cilk_for, it seems the spawn does not get recognized. I end up getting a compile-time error of "Expected _Cilk_spawn before _Cilk_sync". As an example, consider the following (overly simple) program:

void foo() { cout << "foo"; }
void bar() { cout << "bar"; }
void baz() { cout << "baz"; }

int main() {
    cilk_for (int i = 0; i < 10; i++) {
        cilk_spawn foo();
        bar();
        cilk_sync;
        baz();
    }
}

If I try to compile this I get the above-mentioned error. Does anyone have any idea as to why this might be happening and/or how I can solve it? (If I move the spawn and sync into a separate helper method it solves the problem, but unfortunately in my real use case it would mean passing a lot of variables by pointer.) I am using g++ 4.8.1 with cilkplus. Thanks! - Meir
Cilk™ Plus Trademark License for product distribution
By Tam N.
Dear all, I need help clarifying the Cilk Plus license with my customer. I bought a license for Intel Parallel Studio XE 2013 (which contains Cilk Plus) to develop a product for my customer. Now my customer wants to distribute the product to market. Does my customer need to buy a license, given that I already bought one? Can you give me some evidence I can use to negotiate with my customer? Thanks, Tam Nguyen
By Tim Prince
I've been trying to understand what the implicit_index intrinsic may be intended for. It's tricky to get adequate performance from it, and apparently not possible in some of the more obvious contexts (unless the goal is only to get a positive vectorization report). It seems to be competitive for setting up an identity matrix. In the context of dividing its result by 2, different treatments are required on MIC and host:

#ifdef __MIC__
      a[2:i__2-1] = b[2:i__2-1] + c[((unsigned)__sec_implicit_index(0)>>1)+1] * d__[2:i__2-1];
#else
      a[2:i__2-1] = b[2:i__2-1] + c[__sec_implicit_index(0)/2+1] * d__[2:i__2-1];
#endif

That is, the unsigned right shift is several times as fast as the divide on MIC (and not much slower than plain C code), while the signed divide by 2 is up to 60% faster on host (but not as fast as C code). The only advantage in it seems to be the elimination of a for(), if in fact that is considered to be an advantage. I didn't see documented...
Set Worker on Windows with Intel Core i3
By Tam N.
Hi all, I have used Cilk Plus to parallelize my code. The PC runs Windows XP SP3 on an Intel Core i3; how many workers should I set to get the best performance from my code? Thanks to all, Tam Nguyen
Converting Cilkview data to seconds
By Matthew D.
Hey folks, I'm working with a system that needs the work and span of the programs I'm running in seconds or nanoseconds to run properly, and running Cilkview on them gives me work and span in processor instructions. Does anyone know of a way to convert that data from instructions to a unit of time? Thanks! Matt
Less performance on 16 core than on 4 ?!
By sdfsadfasdf s.
Hi there, I evaluated my Cilk application using "taskset -c 0-(x-1) MYPROGRAM" to analyze scaling behavior. I was very surprised to see that the performance increases up to a certain number of cores but decreases afterwards. For 2 cores I gain a speedup of 1.85; for 4, I gain 3.15; for 8, 4.34 - but with 12 cores the performance drops to a speedup close to that of 2 cores (1.99). 16 cores perform slightly better (2.11). How is such a behaviour possible? Either an idle thread can steal work or it can't! Or may the work packets be too coarse-grained, so that the stealing overhead destroys the performance with too many cores in use?
Exception when run project at debug mode using cilk_for
By Tam N.
Dear all, I have used Cilk Plus to add parallel processing to my source code with the Visual Studio 2008 IDE. But when I build it in debug mode, the project throws the exception below:

"Run-Time Check Failure #0 - The value of ESP was not properly saved across a function call. This is usually a result of calling a function pointer declared with a different calling convention"

How can I resolve it so that debug mode works? Thanks to all, Tam Nguyen