Developer Guide and Reference


Vectorization and Loops

This topic provides more information on the interaction between the auto-vectorizer and loops.

Interactions with Loop Parallelization

Combine the [Q]parallel and [Q]x options to instruct the Intel®
Compiler to attempt both automatic parallelization and automatic loop vectorization in the same compilation.
Using these options enables parallelization for both Intel® microprocessors and non-Intel microprocessors. The resulting executable may achieve a greater performance gain on Intel® microprocessors than on non-Intel microprocessors. The parallelization can also be affected by certain other compiler options.
Using these options also enables vectorization at the default optimization levels for both Intel® microprocessors and non-Intel microprocessors. Vectorization may call library routines that can result in a greater performance gain on Intel® microprocessors than on non-Intel microprocessors. The vectorization can also be affected by certain other compiler options.
In most cases, the compiler will consider outermost loops for parallelization and innermost loops for vectorization. If deemed profitable, however, the compiler may even apply loop parallelization and vectorization to the same loop.
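As a sketch of the typical division of labor, consider the loop nest below: the compiler would normally consider the outer (row) loop for parallelization and the inner (column) loop for vectorization. The function and array names are illustrative only, not taken from the guide.

```c
#include <stddef.h>

/* Scale each row of a matrix by a per-row factor. With the [Q]parallel
   and [Q]x options, the outer loop is a typical candidate for
   parallelization and the inner, unit-stride loop for vectorization. */
void scale_rows(size_t rows, size_t cols,
                float a[rows][cols], const float s[rows])
{
    for (size_t i = 0; i < rows; i++) {      /* candidate for parallelization */
        for (size_t j = 0; j < cols; j++) {  /* candidate for vectorization */
            a[i][j] *= s[i];
        }
    }
}
```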
In some rare cases, a successful loop parallelization
(either automatically or by means of OpenMP* directives)
may affect the messages reported by the compiler for a non-vectorizable loop in a non-intuitive way
; for example, in the cases where the
(Windows) or
) options indicate that loops were not successfully vectorized

Types of Vectorized Loops

For integer loops, the 128-bit Intel® Streaming SIMD Extensions (Intel® SSE) and the Intel® Advanced Vector Extensions (Intel® AVX) provide SIMD instructions for most arithmetic and logical operators on 32-bit, 16-bit, and 8-bit integer data types, with limited support for the 64-bit integer data type.
Vectorization may proceed if the final precision of integer wrap-around arithmetic is preserved. A 32-bit shift-right operator, for instance, is not vectorized in 16-bit mode if the final stored value is a 16-bit integer. Also, note that because the Intel® SSE and the Intel® AVX instruction sets are not fully orthogonal (shifts on byte operands, for instance, are not supported), not all integer operations can actually be vectorized.
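To illustrate the non-orthogonality point, the following sketch contrasts a loop the compiler can map directly onto SIMD instructions with one it may leave scalar. The function names are illustrative, and whether the second loop is vectorized (or emulated at extra cost) depends on the compiler version and target.

```c
#include <stddef.h>
#include <stdint.h>

/* 32-bit integer addition has a direct SIMD equivalent, so this loop
   is a straightforward vectorization candidate. */
void add32(size_t n, int32_t *restrict c,
           const int32_t *restrict a, const int32_t *restrict b)
{
    for (size_t i = 0; i < n; i++)
        c[i] = a[i] + b[i];
}

/* Intel SSE provides no shift instruction for byte operands, so a
   shift on 8-bit data like this one may not be vectorized, even
   though the equivalent 32-bit loop is. */
void shr8(size_t n, uint8_t *restrict c, const uint8_t *restrict a)
{
    for (size_t i = 0; i < n; i++)
        c[i] = (uint8_t)(a[i] >> 2);
}
```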
For loops that operate on 32-bit single-precision and 64-bit double-precision floating-point numbers, Intel® SSE provides SIMD instructions for the following arithmetic operators:
  • addition (+)
  • subtraction (-)
  • multiplication (*)
  • division (/)
Additionally, Intel® SSE provides SIMD instructions for the binary MIN and MAX and unary SQRT operators. SIMD versions of several other mathematical operators (like the trigonometric functions SIN, COS, and TAN) are supported in software in a vector mathematical run-time library that is provided with the Intel® compiler.
To be vectorizable, loops must be:
  • Countable:
    The loop trip count must be known at entry to the loop at runtime, though it need not be known at compile time (that is, the trip count can be a variable but the variable must remain constant for the duration of the loop). This implies that exit from the loop must not be data-dependent.
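    A minimal sketch of a countable loop: the trip count n is a runtime variable, but it is known on entry to the loop and cannot change while the loop runs, so the loop qualifies. The function name is illustrative only.

```c
#include <stddef.h>

/* Countable loop: n is not known at compile time, but it is fixed
   for the duration of the loop and the exit is not data-dependent. */
void saxpy(size_t n, float *restrict y,
           const float *restrict x, float a)
{
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```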
  • Single entry and single exit:
    as is implied by stating that the loop must be countable.
    Consider the following example of a loop that is not vectorizable, due to a second, data-dependent exit:
    Example 1: Non-vectorizable Loop
    void no_vec(float a[], float b[], float c[])
    {
      int i = 0;
      while (i < 100) {
        a[i] = b[i] * c[i];
        // this is a data-dependent exit condition:
        if (a[i] < 0.0)
          break;
        ++i;
      }
    }
    > icc -c -O2 -qopt-report=2 -qopt-report-phase=vec two_exits.cpp
    two_exits.cpp(4) (col. 9): remark: loop was not vectorized: nonstandard loop is not a vectorization candidate.
  • Contain straight-line code:
    SIMD instructions perform the same operation on data elements from multiple iterations of the original loop; therefore, it is not possible for different iterations to have different control flow, that is, they must not branch. It follows that switch statements are not allowed. However, if statements are allowed if they can be implemented as masked assignments, which is usually the case. The calculation is performed for all data elements, but the result is stored only for those elements for which the mask evaluates to true.
    To illustrate this point, consider the following example that may be vectorized:
    Example 2: Evaluation of a Vectorizable Loop
    #include <math.h>
    void quad(int length, float *a, float *b, float *c,
              float *restrict x1, float *restrict x2)
    {
      for (int i=0; i<length; i++) {
        float s = b[i]*b[i] - 4*a[i]*c[i];
        if ( s >= 0 ) {
          s = sqrt(s);
          x2[i] = (-b[i]+s)/(2.*a[i]);
          x1[i] = (-b[i]-s)/(2.*a[i]);
        } else {
          x2[i] = 0.;
          x1[i] = 0.;
        }
      }
    }
    > icc -c -restrict -qopt-report=2 -qopt-report-phase=vec quad.cpp
    quad5.cpp(5) (col. 3): remark: LOOP WAS VECTORIZED.
  • Innermost loop of a nest:
    The only exception is if an original outer loop is transformed into an inner loop as the result of some other prior optimization phase, such as unrolling, loop collapsing, or interchange, or an original outermost loop is transformed into an innermost loop due to loop materialization.
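    As a sketch of why interchange matters, the two functions below compute the same column sums; in the second, the originally outer i loop has become the innermost loop with unit stride, which is what makes it a good vectorization candidate. The compiler can perform this transformation itself; the names are illustrative only.

```c
#include <stddef.h>

#define N 64

/* Before interchange: the inner j loop walks down a column of a,
   a stride-N access pattern that vectorizes poorly. */
void col_sum_slow(float out[N], const float a[N][N])
{
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            out[i] += a[j][i];        /* stride-N access */
}

/* After interchange: the i loop, originally outer, is now innermost
   and accesses out[i] and a[j][i] with unit stride. */
void col_sum_fast(float out[N], const float a[N][N])
{
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            out[i] += a[j][i];        /* unit-stride inner loop */
}
```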
  • Without function calls:
    Even a print statement is sufficient to prevent a loop from getting vectorized. The vectorization report message is typically: nonstandard loop is not a vectorization candidate. The two major exceptions are for intrinsic math functions and for functions that may be inlined.
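    The inlining exception can be sketched as follows: once the small helper below is inlined, the loop body is straight-line code with no remaining call, so the loop can be vectorized. A call to an external, non-inlinable function in the same position would block vectorization. The names are illustrative only.

```c
#include <stddef.h>

/* Small helper defined in the same translation unit; the compiler
   can inline it, leaving no function call inside the loop. */
static inline float clamp01(float x)
{
    return x < 0.0f ? 0.0f : (x > 1.0f ? 1.0f : x);
}

void clamp_all(size_t n, float *restrict a)
{
    for (size_t i = 0; i < n; i++)
        a[i] = clamp01(a[i]);   /* inlined, so vectorization is possible */
}
```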
Intrinsic math functions are allowed, because the compiler runtime library contains vectorized versions of these functions. See the table below for a list of these functions; most exist in both float and double versions.