Improve memory utilization by manipulating data-structure layout. For certain algorithms, like 3D transformations and lighting, there are two basic ways of arranging the vertex data. The traditional method is the array of structures (AoS) arrangement, with a structure for each vertex, as shown below:
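The AoS arrangement might be declared as follows (a minimal sketch; the field names x, y, z, a, b, c are illustrative and match the attributes discussed later in this article):

```c
/* Array of structures (AoS): one structure per vertex. */
typedef struct {
    float x, y, z;   /* vertex position components   */
    float a, b, c;   /* other per-vertex attributes  */
} VertexAoS;

VertexAoS vertices[1024];  /* the vertex data, one structure per vertex */
```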
This method does not take full advantage of the SIMD technology capabilities.
Arrange the data in an array for each coordinate, taking advantage of the structure of arrays (SoA) processing method. The SoA data structure is shown here:
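A minimal sketch of the SoA arrangement, using the same illustrative field names and a fixed capacity of 1024 vertices:

```c
/* Structure of arrays (SoA): one array per vertex component. */
typedef struct {
    float x[1024], y[1024], z[1024];  /* one array per position component  */
    float a[1024], b[1024], c[1024];  /* one array per remaining attribute */
} VerticesSoA;
```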
There are two options for computing on data in AoS format: perform the operations on the data as it stands in AoS format, or rearrange it (swizzle it) into SoA format dynamically. The following code samples show each option, based on a dot-product computation:
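The two options can be sketched as follows with SSE intrinsics. This is an illustrative implementation, not code from the reference manual: it assumes each vertex is padded to 16 bytes so one vertex fills a 128-bit register, and the function names are hypothetical.

```c
#include <xmmintrin.h>  /* SSE intrinsics */

/* Assumed layout: each vertex padded to a full 128-bit register. */
typedef struct { float x, y, z, pad; } Vertex4;

/* Option 1: operate directly on the AoS data. Each iteration occupies
   all four SIMD slots but produces only one scalar dot product; the
   fourth slot is a "don't-care" (DC). */
static void dot_aos(const Vertex4 *v, const Vertex4 *fixed,
                    float *out, int n)
{
    /* put zero in the DC slot so the pad value cannot leak into the sum */
    __m128 f = _mm_set_ps(0.0f, fixed->z, fixed->y, fixed->x);
    for (int i = 0; i < n; i++) {
        __m128 p = _mm_mul_ps(_mm_loadu_ps(&v[i].x), f);
        /* horizontal reduction: extra operations for a single result */
        p = _mm_add_ps(p, _mm_movehl_ps(p, p));         /* p0+p2, p1+p3 */
        p = _mm_add_ss(p, _mm_shuffle_ps(p, p, 0x55));  /* + (p1+p3)    */
        _mm_store_ss(&out[i], p);
    }
}

/* Option 2: dynamically swizzle four vertices to SoA, then compute four
   dot products at once with purely vertical operations. */
static void dot_swizzled(const Vertex4 *v, const Vertex4 *fixed,
                         float *out, int n)   /* n: multiple of 4 */
{
    __m128 xf = _mm_set1_ps(fixed->x);
    __m128 yf = _mm_set1_ps(fixed->y);
    __m128 zf = _mm_set1_ps(fixed->z);
    for (int i = 0; i < n; i += 4) {
        __m128 r0 = _mm_loadu_ps(&v[i + 0].x);
        __m128 r1 = _mm_loadu_ps(&v[i + 1].x);
        __m128 r2 = _mm_loadu_ps(&v[i + 2].x);
        __m128 r3 = _mm_loadu_ps(&v[i + 3].x);
        _MM_TRANSPOSE4_PS(r0, r1, r2, r3);  /* r0=x0..x3, r1=y0..y3, r2=z0..z3 */
        __m128 d = _mm_add_ps(_mm_mul_ps(r0, xf),
                   _mm_add_ps(_mm_mul_ps(r1, yf),
                              _mm_mul_ps(r2, zf)));
        _mm_storeu_ps(&out[i], d);
    }
}
```

Note that option 1 spends two extra shuffle/add steps per vertex on the horizontal reduction, while option 2 amortizes the transpose over four results per iteration.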
Performing SIMD operations on the original AoS format can require more calculations, and some of the operations do not take advantage of all of the available SIMD elements. Therefore, this option is generally less efficient.
The recommended way for computing data in AoS format is to swizzle each set of elements to SoA format before processing it using SIMD technologies. This swizzling can either be done dynamically during program execution or statically when the data structures are generated. See Chapters 4 and 5 of the Intel® 64 and IA-32 Architectures Optimization Reference Manual for specific examples of swizzling code. Performing the swizzle dynamically is usually better than using AoS, but is somewhat inefficient as there is the overhead of extra instructions during computation. Performing the swizzle statically, when the data structures are being laid out, is best, as there is no runtime overhead.
As mentioned earlier, the SoA arrangement allows more efficient use of the parallelism of the SIMD technologies because the data is ready for computation in a more optimal vertical manner: multiplying components x0,x1,x2,x3 by xF,xF,xF,xF using four SIMD execution slots to produce four unique results. In contrast, computing directly on AoS data can lead to horizontal operations that consume SIMD execution slots but produce only a single scalar result, as shown by the many “don’t-care” (DC) slots in the previous code sample.
Use of the SoA format for data structures can also lead to more efficient use of caches and bandwidth. When the elements of the structure are not accessed with equal frequency, such as when elements x, y, and z are accessed ten times more often than the other entries, SoA not only saves memory but also avoids fetching the unnecessary data items a, b, and c.
Note that SoA can have the disadvantage of requiring more independent memory-stream references. A computation that uses the arrays x, y, and z in the first code sample in the Solution section would require three separate data streams. This can require more prefetches and additional address-generation calculations, and it can also have a greater impact on page-access efficiency. An alternative, hybrid SoA approach blends the two alternatives:
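A sketch of such a hybrid SoA layout (illustrative type names; the component arrays are sized to the SIMD width, four floats for a 128-bit register):

```c
/* Hybrid SoA: small per-component arrays, interleaved block by block,
   so that related components share one address stream. */
typedef struct {
    float x[4], y[4], z[4];   /* stream 1: xxxx yyyy zzzz ... */
} PositionBlock;

typedef struct {
    float a[4], b[4], c[4];   /* stream 2: aaaa bbbb cccc ... */
} AttributeBlock;

PositionBlock  positions[256];   /* 1024 vertices, 4 per block */
AttributeBlock attributes[256];
```

Each block holds the data for four vertices in a layout that is already SoA within the block, so it can be loaded directly into SIMD registers without swizzling, while the whole data set still needs only two address streams.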
In this case, only two separate address streams are generated and referenced: one containing xxxx,yyyy,zzzz,... and the other containing aaaa,bbbb,cccc,... This also prevents fetching unnecessary data, assuming the variables x, y, and z are always used together, while the variables a, b, and c are also used together, but not at the same time as x, y, and z. This hybrid SoA approach thus ensures that the data is organized for efficient vertical SIMD computation, that only two address streams are needed (reducing prefetch and address-generation overhead), and that unneeded data items are not fetched into the cache.
With the advent of the SIMD technologies, the choice of data organization becomes more important and should be carefully based on the operations to be performed on the data. In some applications, traditional data arrangements may not lead to the maximum performance. Application developers are encouraged to explore different data arrangements and data segmentation policies for efficient computation. This may mean using a combination of AoS, SoA, and hybrid SoA in a given application.
Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.
Notice revision #20110804