There are two principal models of parallel computing: distributed memory and shared memory. As ever more processor cores are dedicated to large clusters solving scientific and engineering problems, hybrid programming techniques that combine the strengths of distributed and shared memory programming are becoming more popular.
This course covers programming models and techniques for distributed memory systems and clusters.
Objective: learn to use the extensions to MPI-1 introduced in the second version of the MPI standard.