Blog

How We Used Intel® MPI Library to Get Outstanding LINPACK Results on a Very Large System

Large clusters dominate the semi-annual list of the 500 fastest supercomputers in the world.

Author: Last updated: 2019/07/06 - 17:00
Blog

Mixing MPI and OpenMP*, Hugging Hardware and Dealing With It

This morning, I took a rare break and attended a tutorial at Supercomputing. I'm glad I did.

Author: James R. (Blackbelt) Last updated: 2019/07/06 - 17:00
Blog

Hybrid MPI and OpenMP* Model

In High Performance Computing (HPC), parallel computing techniques such as MPI, OpenMP*, one-sided communication, SHMEM, and Fortran coarrays are widely used. This blog is part of a series introducing these techniques, with a focus on how to use them on the Intel® Xeon Phi™ coprocessor. This first post discusses the main usage of the hybrid MPI/OpenMP model.
Author: Nguyen, Loc Q (Intel) Last updated: 2019/07/06 - 17:10
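
For readers unfamiliar with the hybrid model the post describes, a minimal sketch of an MPI + OpenMP program in C (an illustration of the general pattern, not code from the blog) looks roughly like this:

```c
/* Minimal hybrid MPI + OpenMP sketch (illustration only):
 * each MPI rank spawns an OpenMP team and reports its threads. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, nranks;

    /* Request a threading level that allows OpenMP threads alongside MPI.
     * MPI_THREAD_FUNNELED means only the main thread makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    #pragma omp parallel
    {
        printf("rank %d of %d, thread %d of %d\n",
               rank, nranks, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
```

Such a program is typically built with an MPI compiler wrapper plus an OpenMP flag (for example, mpiicc -qopenmp with the Intel tools), launched with mpirun, and tuned via OMP_NUM_THREADS to set the thread count per rank.
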
Blog

MPI One-Sided Communication

In this continuation of the blog Hybrid MPI and OpenMP* Model, I will discuss MPI one-sided communication.

Author: Nguyen, Loc Q (Intel) Last updated: 2019/07/06 - 17:10
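
As a rough sketch of the technique the post covers (my illustration, not code from the post), the following C program lets rank 0 write directly into rank 1's memory with MPI_Put inside a fence epoch; it assumes at least two ranks:

```c
/* Minimal MPI one-sided (RMA) sketch, for illustration only:
 * every rank exposes one integer in a window; rank 0 puts a value
 * into rank 1's window without rank 1 posting a matching receive. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, buf = -1;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Expose one int per rank as an RMA window. */
    MPI_Win_create(&buf, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);            /* open the access epoch           */
    if (rank == 0) {
        int value = 42;
        MPI_Put(&value, 1, MPI_INT,   /* origin buffer                   */
                1, 0, 1, MPI_INT,     /* target rank 1, displacement 0   */
                win);
    }
    MPI_Win_fence(0, win);            /* close the epoch; data is visible */

    if (rank == 1)
        printf("rank 1 received %d via MPI_Put\n", buf);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```
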
Blog

Improving MPI Communication between the Intel® Xeon® Host and Intel® Xeon Phi™

MPI Symmetric Mode is widely used in systems equipped with Intel® Xeon Phi™ coprocessors.

Author: Nguyen, Loc Q (Intel) Last updated: 2019/07/06 - 17:10
Blog

Reducing Initialization Times of the Intel® MPI Library

When running large-scale Intel® MPI applications on Omni-Path or InfiniBand* clusters, one might notice an increasing amount of time spent within the MPI_Init() routine.

Author: Michael Steyer (Intel) Last updated: 2019/07/04 - 10:33
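
To see this effect on a given cluster, one can simply time MPI_Init itself. The following small C sketch (an illustration, not code from the blog) wraps the call with a wall-clock timer and reports the slowest rank:

```c
/* Small sketch to measure how long MPI_Init takes on each rank,
 * using a wall-clock timer that is valid before MPI starts. */
#include <mpi.h>
#include <stdio.h>
#include <sys/time.h>

static double now_seconds(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec * 1e-6;
}

int main(int argc, char **argv)
{
    double t0 = now_seconds();
    MPI_Init(&argc, &argv);
    double t_init = now_seconds() - t0;

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Report the slowest rank's initialization time. */
    double t_max;
    MPI_Reduce(&t_init, &t_max, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("MPI_Init took up to %.3f s across all ranks\n", t_max);

    MPI_Finalize();
    return 0;
}
```
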
Blog

Reducing the Runtime of mpitune

The Intel® MPI Library includes a tool, mpitune, that can help optimize the library's own execution parameters.

Author: Michael Steyer (Intel) Last updated: 2019/07/04 - 10:49
Blog

Optimization of Classical Molecular Dynamics

CoMD is an open-source classical molecular dynamics code. One of its prime application areas is materials modeling.

Author: Andrey Vladimirov Last updated: 2018/12/12 - 18:00