Intel® Clusters and HPC Technology

Trying to make MPI work in a QuickWin application


This is my first attempt at adding MPI to a simple QuickWin program under Visual Studio 2010.

I first installed Cluster Studio 2015 Update 4 on my i7-4910MQ Dell laptop and built the program as an x64 project. The simple code compiled, linked, and ran right away.

Then I added INCLUDE 'mpif.h' on a new line after the IMPLICIT NONE line and got the compiler message: error #5102, cannot open include file.


I then tried several ways of telling the compiler where the mpif.h file is located, but only got further warnings and errors.
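For reference, the usual fix is to point the compiler at Intel MPI's include directory (in Visual Studio, under Fortran > General > Additional Include Directories; a path like %I_MPI_ROOT%\intel64\include is typical, though the exact location is an assumption here). A minimal sketch of the kind of program involved, not the poster's code:

! Minimal MPI-in-Fortran sketch. INCLUDE 'mpif.h' only resolves if the
! MPI include directory is on the compiler's include path (e.g. via /I
! or the project properties).
program hello_mpi
  implicit none
  include 'mpif.h'
  integer :: ierr, rank, nprocs

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
  print *, 'Rank', rank, 'of', nprocs
  call MPI_FINALIZE(ierr)
end program hello_mpi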

Parallel Universe link slightly mangled

I don't know exactly where to submit this, but one of the links on the Parallel Universe magazine page is mangled: the link displayed as Issue 19 actually points to Issue 20. Maybe someone at Intel would like to fix this. (Of course you can still get Issue 19 if you really want to, but there is an annoyance factor.)

Problem with Intel Trace Collector

I'm trying to use the Intel Trace Collector for the first time on a cluster machine (with Intel Cluster Studio XE 2013 and ITAC).

I built my program in the standard production mode, and the bash script submitted to the PBS scheduler contained the following commands:

#PBS -l select=14:ncpus=16:mem=120gb:mpiprocs=16

module load intel/cs-xe-2013

source [path]/cs-xe-2013/none/itac/

mpirun -trace [path]/my_program [arguments]

Deadlock with MPI_Win_fence going from Intel MPI to

We encountered a problem when migrating a code from Intel MPI to
The code in question is a complex simulation that first reads the global input state from disk into several parts in memory, then accesses this memory in a hard-to-predict fashion to create a new decomposition. We use active-target RMA for this (on machines that support it, such as BG/Q, we also use passive target), since a rank may need data from a part held by another rank in order to form its halo.
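For context, the active-target pattern in question brackets the one-sided transfers between two fences. A minimal sketch (hypothetical names and sizes, not the poster's code), in which each rank fetches its neighbour's part:

program rma_fence
  implicit none
  include 'mpif.h'
  integer, parameter :: N = 1000
  integer :: ierr, rank, nprocs, win
  integer(kind=MPI_ADDRESS_KIND) :: winsize, disp
  double precision :: localpart(N), halo(N)

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
  localpart = rank                                 ! each rank exposes its own part
  winsize = N * 8                                  ! window size in bytes
  call MPI_WIN_CREATE(localpart, winsize, 8, MPI_INFO_NULL, &
                      MPI_COMM_WORLD, win, ierr)
  call MPI_WIN_FENCE(0, win, ierr)                 ! open the access/exposure epoch
  disp = 0
  call MPI_GET(halo, N, MPI_DOUBLE_PRECISION, mod(rank+1, nprocs), &
               disp, N, MPI_DOUBLE_PRECISION, win, ierr)
  call MPI_WIN_FENCE(0, win, ierr)                 ! close the epoch; halo is now valid
  call MPI_WIN_FREE(win, ierr)
  call MPI_FINALIZE(ierr)
end program rma_fence

If one MPI implementation deadlocks in the second fence while another completes, the synchronization pattern around the epochs is the first place to look.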

[UPDATED]: Maximum MPI Buffer Dimension


Is there a maximum MPI buffer size? I have a buffer-size problem in my MPI code when trying to MPI_Pack large arrays. The offending instruction is the first pack call:


where the double-precision array R has LVB = 6331625 elements, BUF = 354571000, and LBUF = BUF*8 = 2836568000 (since I have to send six other arrays with the same dimension as R).

The error output is the following:
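Whatever the exact error text, one thing stands out: LBUF = 2836568000 exceeds HUGE(0) = 2147483647, the largest default (4-byte) INTEGER, and the OUTSIZE argument of MPI_PACK in the mpif.h bindings is a default INTEGER. A small sketch, using the values from the post, showing the overflow:

program pack_limit
  implicit none
  integer :: buf, lbuf
  buf = 354571000
  lbuf = buf * 8                                ! 2836568000 does not fit in a default INTEGER
  print *, 'BUF*8 as default INTEGER:', lbuf    ! wraps to a negative value
  print *, 'HUGE(0) =', huge(0)                 ! 2147483647, the ceiling for MPI counts/sizes
  print *, 'BUF*8 really is:', int(buf, 8) * 8  ! needs a 64-bit integer
end program pack_limit

If that is indeed the cause, splitting the data into several packs or sends of under 2 GB each stays within the 32-bit limit.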

MPI_Recv blocks for a long time

    I get into trouble when using MPI_Recv in my programs.
    My program starts 3 subprocesses and binds them to CPUs 1-3 respectively. In each subprocess, I first disable interrupts, then send a message to the other processes and receive from them. This repeats a billion times.
    I expect MPI_Recv to return within a fixed time, without having to use MPI_Irecv instead.
    To achieve that, I disabled interrupts and cancelled ticks on CPUs 1-3, moved other processes from CPUs 1-3 to CPU 0, and bound interrupts to CPU 0.
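A minimal sketch of the exchange loop described (hypothetical, not the poster's code), using MPI_SENDRECV so that the pairwise exchange is deadlock-free regardless of message size:

program exchange_loop
  implicit none
  include 'mpif.h'
  integer :: ierr, rank, nprocs, i, iter, msg
  integer :: status(MPI_STATUS_SIZE)

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
  do iter = 1, 1000000                    ! the post repeats this around 1e9 times
    do i = 0, nprocs - 1
      if (i == rank) cycle
      ! send to rank i and receive from rank i in one deadlock-free call
      call MPI_SENDRECV(iter, 1, MPI_INTEGER, i, 0, &
                        msg,  1, MPI_INTEGER, i, 0, &
                        MPI_COMM_WORLD, status, ierr)
    end do
  end do
  call MPI_FINALIZE(ierr)
end program exchange_loop

With Intel MPI, pinning ranks to specific cores can be done with, for example, mpirun -n 3 -genv I_MPI_PIN_PROCESSOR_LIST 1-3 ./exchange_loop; whether pinning alone bounds the MPI_Recv latency as hoped is a separate question.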

Run MPI job on LSF for Windows

When I run an MPI job on Linux using LSF, I just use bsub to submit the following script file:

#BSUB -n 8
#BSUB -R "OSNAME==Linux && ( SPEED>=2500 ) && ( OSREL==EE60 || OSREL==EE58 || OSREL==EE63 ) && SFIARCH==OPT64 && mem>=32000"
#BSUB -q lnx64
#BSUB -W 1:40
cd my_working_directory
mpirun  mympi

The system then starts 8 mympi jobs; I don't need to specify machine names on the mpirun command line. How do I do the same on LSF for Windows?
