Function Annotations and the SIMD Directive for Vectorization
This topic presents specific C++ language features that help you vectorize code.
The SIMD vectorization feature is available for both Intel® microprocessors and non-Intel microprocessors. Vectorization may call library routines that can result in additional performance gains on Intel® microprocessors compared to non-Intel microprocessors.
Vectorization can also be affected by certain compiler options, such as those that enable you to overcome hardware alignment constraints. The auto-vectorization hints address stylistic issues due to lexical scope, data dependency, and ambiguity resolution. The SIMD feature's pragma allows you to enforce vectorization of loops.
You can use vector function declarations to vectorize user-defined functions and loops. For SIMD usage, the annotated function is called from a loop that is being vectorized.
With the C/C++ extensions for array notations, map operations can be defined with general data-parallel semantics, in which you do not express the implementation strategy. Using array notations, you can write the same operation regardless of the size of the problem and let the implementation choose the right construct, combining SIMD, loops, and tasking to implement the operation. With these semantics, you can also choose more elaborate programming and express a one-dimensional operation at two levels, using both task constructs and array operations to force a preferred parallel and vector execution.
The usage model of such a vector function is that the code generated for the function takes a small section (the vector length) of the array, by value, and exploits SIMD parallelism, whereas task parallelism is implemented at the call site.
The following table summarizes the language features that help vectorize code.