Intel® C and C++ Compilers

Leadership application performance

  • Rich set of components to efficiently implement higher-level, task-based parallelism
  • Future-proof applications to tap multicore and many-core power
  • Compatible with multiple compilers and portable to various operating systems

Performance without compromise

  • Industry-leading performance on Intel and compatible processors
  • Extensive optimizations for the latest Intel processors, including the Intel® Xeon Phi™ coprocessor
  • Scale forward with support for multicore, manycore, and multiprocessor systems through OpenMP, automatic parallelization, and Intel Xeon Phi coprocessor support
  • Patented automatic CPU dispatch feature runs code optimized for the specific processor detected at application runtime
  • Intel® Performance Guide provides suggestions for improving performance in your Windows* applications.

Broad support for current and previous C and C++ standards, plus popular extensions

  • Language support: full C++11 and most of C99. For details on C++11 support, see http://software.intel.com/en-us/articles/c0x-features-supported-by-intel-c-compiler
  • Extensive OpenMP 4.0* support

Faster, more scalable applications with advanced parallel models and libraries

Intel provides a variety of scalable, easy-to-use parallel models. These highly abstracted models and libraries simplify adding both task and vector parallelism. The end result is faster, more scalable applications running on multi-core and manycore architectures.

Intel® Cilk™ Plus (included with Intel C++ compiler)

  • Simplifies adding parallelism for performance with only three keywords
  • Scale for the future with a runtime system that operates smoothly on systems with hundreds of cores
  • Vectorized and threaded for highest performance on all Intel and compatible processors
  • Sample code, contributed libraries, open specifications, and other information are available from the Cilk Plus community
  • Included with the Intel C++ compiler and available in the GCC 4.9 development branch (with -fcilkplus, and the caveat that cilk_for is not yet supported) and in a Clang*/LLVM* project at http://cilkplus.github.io/
  • More information

OpenMP 4.0 (included with Intel C++ compiler)

  • Support for most of the new features in the OpenMP* 4.0 API Specification (user-defined reductions not yet supported)
  • Support for C, C++, and Fortran OpenMP programs on Windows*, Linux*, and OS X*
  • Complete support for industry-standard OpenMP pragmas and directives in the OpenMP 3.1 API Specification
  • Intel-specific extensions to optimize performance and verify intended functionality
  • Intel compiler OpenMP libraries are object-level compatible with Microsoft Visual C++* on Windows and GCC on Linux*

Intel® Math Kernel Library

  • Vectorized and threaded for highest performance using de facto standard APIs for simple code integration
  • C, C++, and Fortran compiler-compatible with royalty-free licensing for low-cost deployment
  • More information

Intel® Integrated Performance Primitives

  • Performance: Pre-optimized building blocks for compute-intensive tasks
  • A consistent set of APIs that support multiple operating systems and architectures
    • Windows*, Linux*, Android*, and OS X*
    • Intel® Quark™, Intel® Atom™, Intel® Core™, Intel® Xeon®, and Intel® Xeon Phi™ processors
  • More information

Intel® Threading Building Blocks

  • Rich set of components to efficiently implement higher-level, task-based parallelism
  • Compatible with multiple compilers and portable to various operating systems
  • More information

Intel® Media SDK 2014 for Clients

  • A cross-platform API for developing consumer and professional media applications.
  • Intel® Quick Sync Video: Hardware-accelerated video encoding, decoding, and transcoding.
  • Development Efficiency: Code once now and see it work on tomorrow's platforms.
  • More information

A drop-in addition for C and C++ development

  • Windows*
    • Develop, build, debug and run from the familiar Visual Studio IDE
    • Works with Microsoft Visual Studio* 2008, 2010, 2012 and 2013
    • Source and binary compatible with Visual C++*
  • Linux*
    • Develop, build, debug and run using Eclipse* IDE interface or command line
    • Source and binary compatible with GCC
  • OS X*
    • Develop, build, debug and run from the familiar Xcode* IDE
    • Works with Xcode 4.6, 5.0 and 5.1
    • Source and binary compatible with LLVM-GCC and Clang* tool chains
  • 32-bit and 64-bit development included

  1. Project and source in Visual Studio
  2. C/C++ aware text editor
  3. Debug C/C++ code
  4. Call Stack information
  5. Set breakpoints at source lines in the IDE

Outstanding support

One year of support is included with purchase, giving you access to all product updates and new versions released in the support period, plus access to Intel Premier Support. There's also a very active user forum where you can get help from experienced users and Intel engineers.

  • Videos on Getting Started with Intel® C++ Compiler
  • Vectorization Essentials
  • Performance Essentials with OpenMP 4.0 Vectorization
  • View slides

Register for future Webinars


Previously recorded Webinars:

  • Update Now: What’s New in Intel® Compilers and Libraries
  • Performance essentials using OpenMP* 4.0 vectorization with C/C++
  • Intel® Cilk™ Plus Array Notation - Technology and Case Study Beta
  • OpenMP 4.0 for SIMD and Affinity Features with Intel® Xeon® Processors and Intel® Xeon Phi™ Coprocessor
  • Introduction to Vectorization using Intel® Cilk™ Plus Extensions
  • Optimizing and Compilation for Intel® Xeon Phi™ Coprocessor

Featured Articles

No content was found

More Tech Articles

error #1909: complex integral types are not supported
By Milind Kulkarni (Intel), published 07/17/2009
Problem: This is a small piece of code (exam.c):

    extern int __acc_cta[1-2*!(sizeof(0i64) >= 8)];
    extern int __acc_cta[1-2*!(sizeof(0ui64) >= 8)];
    int main(){return 0;}

Compile using a 10.1.030 or earlier compiler on Windows:

    icl /Qc99 exam.c
    exam.c
    exam.c(1): error #1909: complex integral types are...
nm can't list symbol name for the object file generated by icc -ipo option
By yang-wang (Intel), published 07/12/2009
When building some Linux applications using the “-ipo” option, libtool reports “syntax error in VERSION script”. The root cause is that nm can’t get the correct symbol table from the object file generated by the icc “-ipo” option.
Windows C++ Compiler and Sealed or abstract unions
Published 07/08/2009
Sealed unions are not allowed by the C++ language.
Namespace-scope using-declarations for class member types
Published 07/08/2009
In Microsoft compatibility mode, namespace-scope using-declarations for class member types are no longer accepted by the Intel C++ compiler.

Supplemental Documentation

No content was found

You can reply to any of the forum topics below by clicking on the title. Please do not include private information such as your email address or product serial number in your posts. If you need to share private information with an Intel employee, they can start a private thread for you.



Installation of Parallel Studio XE Composer Edition c++ 2015 fails on clean Windows 8.1 system
By BURGI Berlin
Hello, I have problems installing the Parallel Studio XE 2015 Composer Edition for C++. The installation ends immediately after displaying the setup splash screen for about 1 second, also after a clean Windows 8.1 installation. The last lines in the log files are always the same:

    ...
    [t13b4 2015.03.23 10.56.01 0000003f] [message_processor]: INFO: Registered: zip (0.0.0.0)
    [t13b4 2015.03.23 10.56.01 00000040] [message_processor]: INFO: Registered: zip script (0.0.0.0)
    [t13b4 2015.03.23 10.56.01 00000041] [message_processor]: CRITICAL: Failed to load plugin: action
    [t13b4 2015.03.23 10.56.01 00000042] [message_processor]: CRITICAL: Failed to load plugin: cache
    [t13b4 2015.03.23 10.56.01 00000043] [message_processor]: CRITICAL: Failed to load plugin: catalog
    [t13b4 2015.03.23 10.56.01 00000044] [message_processor]: CRITICAL: Failed to load plugin: sizer
    [t13b4 2015.03.23 10.56.01 00000045] [message_processor]: CRITICAL: Failed to load plugin: system/string list
    -END OF LOG-
    ...
Centos support
By andrei k.
Hello, which Parallel Studio SP1 update is compatible with my CentOS 6.5? I tried to download SP1 update 4 but the installer says my OS is not supported. I was not able to find a dedicated XE forum to post this in.
Disable loop-reordering/ loop-interchange
By Paul S.
Hello, is it possible to disable loop-reordering/ loop-interchange while compiling with -O3? Thank you, Paul
xspace options for bi-endian compiler
By Vijaya V.
I want to implement xspace protection for an image, where I am compiling my code with the Intel bi-endian compiler. Is there any option available for this? Xspace protection: for each segment I need to provide the access permissions as mentioned below:

    TEXT segment: read & execute permission -----> code
    RODATA: read-only permission -------> constants, string literals, etc.
    RWDATA: read & write permission ---------> BSS, data, heap, stack segments

Please provide the info for the same.
Bug: legitimate code with variadic templates and variadic template aliases leads to an error.
By Mikhail K.
Hi, I've got an issue with the following code in test.cpp:

    template <typename... Types> struct Variadic1 {};
    template <typename... Types> using MakeVariadic1 = Variadic1<Types...>;
    template <typename... Types> struct Variadic2 {};
    template <typename... Types> using MakeVariadic2 = Variadic2<MakeVariadic1<Types...>>;
    template <typename... Types> MakeVariadic2<Types...> test(Types...) { return MakeVariadic2<Types...>(); }
    int main() { test(1); }

Command line and output:

    ~/tmp$ ~/soft/intel/system_studio_2015.2.050/bin/ia32/icpc -std=c++11 test.cpp -o test
    test.cpp(21): error: template instantiation resulted in unexpected function type of "MakeVariadic2<int> (int)"
    (the meaning of a name may have changed since the template declaration --
    the type of the template is "MakeVariadic2<Types...> (Types...)")
    test(1);
    ^
    ...
Is it better to use IPP or ICL's vectorizer?
By meldaproduction
The ICL's vectorizer seems to be very good, which makes me think whether it makes sense to use IPP (performance primitives) for simple tasks such as for (int i=0; i<cnt; i++) dst[i] = src1[i] * src2[i]; I assume to use SSE2 as base architecture and AVX for dispatching.  
/Qipo seems disabling automatic vectorization
By meldaproduction
Hi, I depend a lot on SSE/AVX auto-vectorization and it seems that /Qipo disables it. These are the relevant parameters I'm using: /arch:SSE2 /QxSSE2 /Qvec-report /QaxAVX /Qftz The compiler reports lots of loops being vectorized. But if I add /Qipo, it states that the messages will be generated by the linker (makes sense), but the linker reports nothing... (I'm not adding /Qvec-report to it though, doesn't seem logical anymore) Thanks!
Intel C++ Compiler warnings on Windows with MSVC
By meldaproduction
Hi, I'm trying the Intel compiler (I normally use MSVC 2013), and I get lots of warnings, pretty much always. I added "/Qvc12", but that doesn't seem to make a difference (/Qvc10 made the compiler dysfunctional). Any ideas? Here are the warnings I always get:

    C:/Program Files (x86)/Microsoft Visual Studio 12.0/Vc/include/stddef.h(29): warning #2157: NULL defined to 0 (type is integer not pointer)
    #define NULL 0
    ^
    C:/Program Files (x86)/Windows Kits/8.1/Include/um/winnt.h(5756): warning #161: unrecognized #pragma
    #pragma prefast(push)
    ^
    C:/Program Files (x86)/Windows Kits/8.1/Include/um/winnt.h(5758): warning #161: unrecognized #pragma
    #pragma prefast(disable: 6001 28113, "The barrier variable is accessed only to create a side effect.")
    ^
    C:/Program Files (x86)/Windows Kits/8.1/Include/um/winnt.h(5773): warning #161: unrecognized #pragma
    #pragma prefast(pop)
    ^
    C:/Program Files (x86)/Windows Kits/8.1/Include/um/winbase.h(881...




developer documents for Cilk Plus
By Romanov A.
Hi, First I would like to thank you all for the awesome Cilk Plus tools you have open-sourced in GCC and LLVM. I am trying to study the runtime library and finding it a bit difficult to follow the execution in a sample application. Are there any developer documents available? A wiki, perhaps? Specifically, I am trying to trace the execution path for cilk_spawn, which is a keyword. Any helpful links to get me started would be really great! Thanks, Arya
Question about steal-continuation semantics in Cilk Plus, Global counter slowing down computation, return value of functions
By Robert M.
1) What I understood about steal-continuation is that every idle thread does not actually steal work, but the continuation which generates a new work item. Does that mean that inter-spawn execution time is crucial? If 2 threads are idle at the same time, from what I understand only one can steal the continuation and create its work unit, and the other thread stays idle during that time?! 2) As a debugging artefact, I had a global counter incremented on every call of a function used within every work item. I expect this value to be wrong (e.g. lost update), as it is not protected by a lock. What I didn't expect was execution time being 50% longer. Can someone tell me why this is the case? 3) Do I assume correctly that a cilk-spawned function can never (directly) return a result, as the continuation might continue in the meantime and one would never know when the return value is actually written?
Cilk plus implicit threshold
By Guilherme R.
Hi, I'm new to Cilk, and I wanted to ask if it has an implicit threshold for task creation in recursive computations like fib? If so, is it based on the number of tasks created, or on the depth of the computation? Thanks!
How to make this reduction in Cilk Plus?
By Ioannis E. Venetis
Hello, I have code that is structured like this:

    float A[3], X[M], Y[M], Z[M], OUTX[N], OUTY[N], OUTZ[N];
    for (i = 0; i < N; i++) {
        // Use other arrays and i as an index to these arrays to initialize A[0], A[1], A[2]
        for (j = 0; j < M; j++) {
            // Calculate new values for A[0], A[1], A[2]
            // using more arrays where i and/or j are used as indexes
            X[j] += A[0];
            Y[j] += A[1];
            Z[j] += A[2];
        }
        OUTX[i] = A[0];
        OUTY[i] = A[1];
        OUTZ[i] = A[2];
    }

I have successfully parallelized the outer loop using OpenMP, making the array A private and adding the atomic directive before the updates to the elements of X, Y and Z (using critical was actually worse). But now I would like to try this code out using Cilk Plus. Although I have read all the documentation about reducers and reduction operations in Cilk Plus, I still cannot formulate in my mind how the above code could be implemented in Cilk Plus. I would like to replace the outer loop with a cilk_for and have ...
simple cilk_spawn Segmentation Fault
By Chris Szalwinski
I'm having difficulty running a simple test case using cilk_spawn. I'm compiling under gcc 4.9.0 20130520. The following fib2010.cpp example executes in 0.028s without cilk and takes 0.376s with cilk as long as I set the number of workers to 1. If I change the number of workers to any number greater than one, I get a segmentation fault.

    // fib2010.1.cpp
    #include <iostream>
    #include <cilk/cilk.h>
    #include <cilk/cilk_api.h>

    int fib(int n)
    {
        if (n < 2) return n;
        int x = cilk_spawn fib(n-1);
        int y = fib(n-2);
        cilk_sync;
        return x + y;
    }

    int main(int argc, char* argv[])
    {
        std::cout << "No of workers = " << __cilkrts_get_nworkers() << std::endl;
        int n = 32;
        std::cout << "fib(" << n << ") = " << fib(n) << std::endl;
    }

The hardware is Dual Core AMD Opteron 8220.
cilk_for segmentation fault
By Chris Szalwinski
Hi, I'm having difficulty comparing cilk_for with cilk_spawn. The following cilk_spawn code executes as I expect for command line arguments like 1000000 30

    // Recursive Implementation of Map
    // r_map.3.cpp
    #include <iostream>
    #include <iomanip>
    #include <cstdlib>
    #include <ctime>
    #include <cmath>
    #include <cilk/cilk.h>

    const double pi = 3.14159265;

    template<typename T>
    class AddSin {
        T* a;
        T* b;
    public:
        AddSin(T* a_, T* b_) : a(a_), b(b_) {}
        void operator()(int i) {
            a[i] = b[i] + std::sin(pi * (double) i / 180.)
                        + std::cos(pi * (double) i / 180.)
                        + std::tan(pi * (double) i / 180.);
        }
    };

    template <typename Func>
    void r_map(int low, int high, int grain, Func f) {
        if (high - low <= grain)
            for (int i = low; i < high; i++)
                f(i);
        else {
            int mid = low + (high - low) / 2;
            cilk_spawn r_map(low, mid, grain, f);
        }
    }

    int main(int argc, char** argv) {
        if (argc != 3) {
            std::cerr << "Incorrect number of a...
Floating Point ABI
By Nick T.
Hello, I noticed in the latest CilkPlus ABI specification (https://www.cilkplus.org/sites/default/files/open_specifications/CilkPlu...), it says that the caller to the library must set the floating point flags (top of page 8). This is what the LLVM implementation of CilkPlus and its runtime do, but the current Intel version of the run-time has the code to save the floating point status registers that is in LLVM's code generator and not the runtime from the LLVM repository. Please could you tell me whether:
a) The floating point status flags should be set/saved by the caller
b) The floating point status flags should be set/saved by the callee
c) There's something I've overlooked
The ABI says:

    /**
     * Architecture - specific floating point state. mxcsr and fpcsr should be
     * set when CILK_SETJMP is called in client code. Note that the Win64
     * jmpbuf for the Intel64 architecture already contains this information
     * so there is no need to use these fields on that OS/architecture.
     */

T...
How can I parallelize implicit loop ?
By Zvi Danovich (Intel)
I have a loop whose body runs a function with an array member (dependent on the loop index) as an argument, returning one value. I can parallelize this loop by using the cilk_for() operator instead of a regular for() - it is simple and works well. This is explicit parallelization. Instead of an explicit loop instruction I can use an Array Notation construction (as shown below) - an implicit loop. My routine is relatively long and complex, and has Array Notation constructions inside, so it cannot be declared as a vector (elemental) one. When I use the implicit loop, it is not parallelized and the run time increases substantially.

    float foo(float f_in)
    {
        float f_result;
        // LONG computation containing CILK+ Array Notation operations
        /////////////////////////////////////////////////////////
        return f_result;
    }

    int main()
    {
        float af_in[n], af_out[n];
        // Explicit parallelized loop
        cilk_for(int i=0; i<n; i++)
            af_out[i] = foo(af_in[i]);
        // Implicit non-parallelized l...