Forum topic

How do I connect two servers with InfiniBand and use Intel MPI?

I'm sorry, but I can't find any detailed information about using Intel MPI to connect two servers over InfiniBand.

I would like to know the procedure. Is there a URL that covers this?

Authored by quan w. Last updated on 05/25/2017 - 07:19
Forum topic

MPI_Scatterv/Gatherv using C++ with "large" 2D matrices throws MPI errors

I implemented some `MPI_Scatterv` and `MPI_Gatherv` routines for a parallel matrix-matrix multiplication. Everything works fine for small matrix sizes up to N = 180; if I exceed this size, e.g.

Authored by Jonas H. Last updated on 05/24/2017 - 06:22
Blog post

Intel® Parallel Studio XE 2016: High Performance for HPC Applications and Big Data Analytics

Intel® Parallel Studio XE 2016, launched on August 25, 2015, is the latest installment in Intel's developer toolkit for high performance computing (HPC) and technical computing applications. This suite of compilers, libraries, debugging facilities, and analysis tools targets Intel® architecture, including support for the latest Intel® Xeon® processors (codenamed Skylake) and Intel® Xeon Phi™...
Authored by James R. Last updated on 05/18/2017 - 13:18
Forum topic

Error compiling FFTW3 with the Intel compiler

Dear all,

I'm trying to build FFTW3 with the Intel compiler, following the guide on the FFTW website. I configure FFTW3 as

./configure CC=icc F77=ifort MPICC=mpiicc --enable-mpi
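
For context, a typical Intel-compiler FFTW3 MPI build follows that configure line with the usual autotools steps (the `--prefix` path and job count here are illustrative, not from the post):

```shell
# Illustrative build sequence; --prefix is an example install path
./configure CC=icc F77=ifort MPICC=mpiicc --enable-mpi --prefix=$HOME/fftw3
make -j4
make install
```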

Authored by Jingming S. Last updated on 05/16/2017 - 06:38

How to use the MPI-3 Shared Memory in Intel® Xeon Phi™ Processors

Code Sample included: Learn how to use MPI-3 shared memory feature using the corresponding APIs on the Intel® Xeon Phi™ processor.
Authored by Nguyen, Loc Q (Intel) Last updated on 05/15/2017 - 10:31
Forum topic

Using Intel MPI in parallel ANSYS Fluent with AMD processors

I successfully set up and used this tutorial ( for clustering two machines to run ANSYS Fluent 17.2 in parallel mode.

Authored by milad m. Last updated on 05/14/2017 - 11:41

How to Use MPI-3 Shared Memory in Intel® Xeon Phi™ Processors

Learn how to use MPI-3 shared memory in Intel® Xeon Phi™ processors.
Authored by Nguyen, Loc Q (Intel) Last updated on 05/12/2017 - 00:30
Blog post

Optimization of Classical Molecular Dynamics

CoMD is an open-source classical molecular dynamics code. One of its prime application areas is materials modeling.

Authored by Andrey Vladimirov Last updated on 05/10/2017 - 14:54

Getting Started with Intel® MPI Benchmarks 2017

Intel® MPI Benchmarks 2017
Authored by Gergana S. (Intel) Last updated on 05/10/2017 - 12:19

Intel® MPI Library 2017 Update 3 Readme

The Intel® MPI Library is a high-performance, interconnect-independent, multi-fabric library implementation of the industry-standard Message Passing Interface, v3.1 (MPI-3.1) specification.

Authored by Gergana S. (Intel) Last updated on 05/10/2017 - 12:19
For more complete information about compiler optimizations, see our Optimization Notice.