Article

Hybrid applications: Intel MPI Library and OpenMP*

Tips and tricks on how to get the optimal performance settings for your mixed Intel MPI/OpenMP applications.
Authored by Gergana S. (Blackbelt) Last updated on 07/06/2019 - 19:20
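As a starting point for the kind of hybrid setup this article covers, below is a minimal, illustrative C sketch (not taken from the article): each rank requests MPI_THREAD_FUNNELED support and reports how many OpenMP threads it will use. With the Intel tools such a code is typically built with mpiicc -qopenmp and launched with mpirun, with OMP_NUM_THREADS controlling the thread count per rank.

    /* Minimal hybrid MPI + OpenMP sanity check (illustrative sketch only):
     * each rank requests funneled thread support and reports how many
     * OpenMP threads it will actually use. */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank, nranks;

        /* Ask for MPI_THREAD_FUNNELED: only the master thread makes MPI calls. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        if (provided < MPI_THREAD_FUNNELED && rank == 0)
            printf("Warning: MPI library provides thread level %d only\n", provided);

        printf("Rank %d of %d will run %d OpenMP threads\n",
               rank, nranks, omp_get_max_threads());

        MPI_Finalize();
        return 0;
    }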
Blog post

OpenMP* 4.0 may offer important solutions for targeting and vectorization

The upcoming OpenMP 4.0 will be discussed at SC12, and there will...

Authored by James R. (Blackbelt) Last updated on 05/28/2018 - 18:28
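As a rough illustration of the two OpenMP 4.0 features the post refers to, the following sketch (not code from the post) uses the simd construct for vectorization and the target construct for offload, which falls back to the host when no device is present.

    /* Illustrative sketch of two OpenMP 4.0 features: simd for vectorization
     * and target for offload. */
    #include <stdio.h>

    #define N 1024

    int main(void)
    {
        float a[N], b[N], c[N];

        for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2.0f * i; }

        /* OpenMP 4.0 vectorization hint: ask the compiler to vectorize this loop. */
        #pragma omp simd
        for (int i = 0; i < N; i++)
            c[i] = a[i] + b[i];

        /* OpenMP 4.0 offload: run the loop on a target device if one exists,
         * otherwise on the host. */
        #pragma omp target map(to: a, b) map(from: c)
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            c[i] = a[i] * b[i];

        printf("c[N-1] = %f\n", c[N - 1]);
        return 0;
    }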
Article

Books - Message Passing Interface (MPI)

This article looks at several books that introduce developers to the topics of Message Passing Interface (MPI), parallel programming, and OpenMP*.
Authored by Mike P. (Intel) Last updated on 12/12/2018 - 18:00
Article

Process and Thread Affinity for Intel® Xeon Phi™ Processors

The Intel® MPI Library and OpenMP* runtime libraries can create affinity between processes or threads and hardware resources. This affinity keeps an MPI process or OpenMP thread from migrating to a different hardware resource, which can have a dramatic effect on the execution speed of a program.
Authored by Gregg S. (Intel) Last updated on 07/29/2019 - 08:05
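A simple way to observe the effect of pinning is to have every thread report where it runs. The sketch below is illustrative only (it is not the article's code, and sched_getcpu() is Linux-specific); the pinning itself would be controlled through settings such as I_MPI_PIN_DOMAIN or KMP_AFFINITY.

    /* Illustrative affinity check: each OpenMP thread in each MPI rank reports
     * the logical CPU it is currently running on. sched_getcpu() is Linux-specific. */
    #define _GNU_SOURCE
    #include <mpi.h>
    #include <omp.h>
    #include <sched.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank;

        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        #pragma omp parallel
        {
            printf("Rank %d, thread %2d of %2d -> CPU %d\n",
                   rank, omp_get_thread_num(), omp_get_num_threads(),
                   sched_getcpu());
        }

        MPI_Finalize();
        return 0;
    }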
Article

Scale-Up Implementation of a Transportation Network Using Ant Colony Optimization (ACO)

In this article, an OpenMP*-based implementation of the Ant Colony Optimization algorithm was analyzed for bottlenecks with Intel® VTune™ Amplifier XE 2016, and improvements using hybrid MPI-OpenMP and Intel® Threading Building Blocks were introduced to achieve efficient scaling across a four-socket system based on the Intel® Xeon® processor E7-8890 v4.
Authored by Sunny G. (Intel) Last updated on 07/05/2019 - 19:10
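A very condensed sketch of that hybrid decomposition might look as follows; the tour-construction routine is a hypothetical stub and none of this is the article's actual code. Each MPI rank owns a share of the colony, OpenMP threads build tours within the rank, and MPI reduces the best tour length across ranks.

    /* Condensed hybrid ACO sketch: OpenMP threads build tours for this rank's
     * ants, MPI combines the best tour length across ranks. */
    #include <mpi.h>
    #include <omp.h>
    #include <float.h>
    #include <stdio.h>

    /* Hypothetical placeholder for one ant's tour construction. */
    static double construct_tour(int ant_id) { return 100.0 + ant_id % 7; }

    int main(int argc, char **argv)
    {
        int provided, rank, nranks;
        const int ants_per_rank = 64;          /* assumed colony share per rank */
        double local_best = DBL_MAX, global_best;

        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        /* Shared-memory level: threads build tours for this rank's ants. */
        #pragma omp parallel for reduction(min: local_best)
        for (int a = 0; a < ants_per_rank; a++) {
            double len = construct_tour(rank * ants_per_rank + a);
            if (len < local_best)
                local_best = len;
        }

        /* Distributed-memory level: best tour length across all ranks. */
        MPI_Allreduce(&local_best, &global_best, 1, MPI_DOUBLE, MPI_MIN,
                      MPI_COMM_WORLD);

        if (rank == 0)
            printf("Best tour length over %d ranks: %.2f\n", nranks, global_best);

        MPI_Finalize();
        return 0;
    }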
File Wrapper

Parallel Universe Magazine - Issue 24, March 2016

Authored by admin Last updated on 12/12/2018 - 18:08
Article

Missing lsb dependency when installing Intel® Cluster Runtimes on SLES* 12

How to resolve a missing lsb package on SLES* 12.
Authored by Jeremy Siadal (Intel) Last updated on 07/06/2019 - 11:34
Article

Hybrid Parallelism: Parallel Distributed Memory and Shared Memory Computing

There are two principal methods of parallel computing: distributed memory computing and shared memory computing. As more processor cores are dedicated to large clusters solving scientific and engineering problems, hybrid programming techniques combining the best of distributed and shared memory programs are becoming more popular.
Authored by David M. Last updated on 07/12/2019 - 08:31
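A minimal sketch of that hybrid pattern (assumed for illustration, not taken from the article) distributes data across MPI ranks, reduces within each rank using OpenMP threads, and then combines the per-rank results with MPI:

    /* Hybrid distributed + shared memory sketch: OpenMP reduction within each
     * rank, MPI reduction across ranks. */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    #define N_PER_RANK 1000000

    int main(int argc, char **argv)
    {
        int provided, rank, nranks;
        double local_sum = 0.0, global_sum = 0.0;

        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        /* Shared-memory level: OpenMP threads sum this rank's slice. */
        #pragma omp parallel for reduction(+: local_sum)
        for (long i = 0; i < N_PER_RANK; i++)
            local_sum += (double)(rank * (long)N_PER_RANK + i);

        /* Distributed-memory level: combine the per-rank partial sums. */
        MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0,
                   MPI_COMM_WORLD);

        if (rank == 0)
            printf("Sum of 0..%ld = %.0f\n",
                   (long)nranks * N_PER_RANK - 1, global_sum);

        MPI_Finalize();
        return 0;
    }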
Article

An Introduction to MPI-3 Shared Memory Programming

In this article, we present a tutorial on how to start using MPI SHM on multinode systems using Intel® Xeon® and Intel® Xeon Phi™ processors. The article uses a 1-D ring application as an example and includes code snippets to describe how to transform common MPI send/receive patterns to utilize the MPI SHM interface. The MPI functions that are necessary for internode and intranode communications...
Last updated on 07/27/2018 - 08:58
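The core MPI SHM idiom the tutorial introduces can be sketched as follows (simplified, and not the article's 1-D ring code): ranks on the same node are grouped with MPI_Comm_split_type, a shared window is created with MPI_Win_allocate_shared, and a neighbor's data is then read with a plain load located via MPI_Win_shared_query.

    /* Simplified MPI SHM sketch: split by shared-memory node, allocate a shared
     * window, and read a neighbor's value directly through load/store. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Comm node_comm;
        MPI_Win win;
        int node_rank, node_size;
        int *my_slot;

        MPI_Init(&argc, &argv);

        /* Group ranks that can share physical memory. */
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &node_comm);
        MPI_Comm_rank(node_comm, &node_rank);
        MPI_Comm_size(node_comm, &node_size);

        /* Each rank contributes one int to a node-wide shared window. */
        MPI_Win_allocate_shared(sizeof(int), sizeof(int), MPI_INFO_NULL,
                                node_comm, &my_slot, &win);
        *my_slot = node_rank * 10;

        MPI_Win_fence(0, win);   /* make local stores visible to peers */

        /* Locate the left neighbor's slot and read it with a plain load. */
        int left = (node_rank + node_size - 1) % node_size;
        MPI_Aint size;
        int disp_unit, *left_ptr;
        MPI_Win_shared_query(win, left, &size, &disp_unit, &left_ptr);
        printf("Rank %d sees %d from rank %d\n", node_rank, *left_ptr, left);

        MPI_Win_fence(0, win);
        MPI_Win_free(&win);
        MPI_Comm_free(&node_comm);
        MPI_Finalize();
        return 0;
    }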
Article

Choosing the right threading framework

This is the second article in a series on high-performance computing with the Intel® Xeon Phi™ processor.

Last updated on 07/06/2019 - 16:30