Part 9: Distributed-Memory Parallelism and MPI

Overview

In the previous episodes of this chapter, we learned how to use vectorization to parallelize calculations across the vector lanes of each core. We then discussed how to use OpenMP* to scale applications across the cores of a processor or coprocessor. Now, in this final episode of the chapter, we will study the next level of parallelism: scaling across multiple compute devices and multiple compute nodes in a cluster environment.
