Intel® Clusters and HPC Technology

Run MPI job on LSF for Windows

When I run an MPI job on Linux using LSF, I just use bsub to submit a script file like the following:

#BSUB -n 8
#BSUB -R "OSNAME==Linux && ( SPEED>=2500 ) && ( OSREL==EE60 || OSREL==EE58 || OSREL==EE63 ) &&
SFIARCH==OPT64 && mem>=32000"
#BSUB -q lnx64
#BSUB -W 1:40
cd my_working_directory
mpirun mympi

The system will start 8 mympi processes; I don't need to specify machine names on the mpirun command line. How do I do the equivalent with LSF for Windows?

Intel MPI gives wrong number of physical cores on Core i7 Q820?


I have begun learning MPI on my Dell 4500 with a Core i7 Q820 processor (4 physical and 8 logical cores).

When I run a simple Fortran program that queries the rank and size, I get 0 and 1 instead of 0 and 3 (see attached code).
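For reference, a minimal rank/size program of the kind described might look like this (a sketch; the actual attached code may differ):

program rank_size
  use mpi
  implicit none
  integer :: ierr, rank, nprocs

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
  print *, 'rank', rank, 'of', nprocs
  call MPI_FINALIZE(ierr)
end program rank_size

Launched as "mpiexec -n 4 rank_size.exe", it should report a size of 4 on every rank; started directly without mpiexec, each instance reports rank 0 and size 1.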

What is wrong?

Best regards

Anders S

Fault Tolerance Question

Hello there,

I am trying to run some fault-tolerance experiments with MPI in Fortran, but I'm having trouble. I am calling the routine


which seems to work, more or less. After calling, for instance, MPI_SENDRECV, the STATUS variable does not report any error, i.e. STATUS(MPI_ERROR) is always zero. The ierr integer may be nonzero, though, and that is what I have been trying to catch instead.
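A sketch of the setup this description suggests is below; the routine actually called above was not preserved, so installing MPI_ERRORS_RETURN on the communicator is an assumption:

program ft_sketch
  use mpi
  implicit none
  integer :: ierr, rank, nprocs
  integer :: status(MPI_STATUS_SIZE)
  double precision :: sendbuf, recvbuf

  call MPI_INIT(ierr)
  ! make MPI calls return error codes instead of aborting (assumed handler)
  call MPI_COMM_SET_ERRHANDLER(MPI_COMM_WORLD, MPI_ERRORS_RETURN, ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)

  sendbuf = dble(rank)
  call MPI_SENDRECV(sendbuf, 1, MPI_DOUBLE_PRECISION, mod(rank+1, nprocs), 0, &
                    recvbuf, 1, MPI_DOUBLE_PRECISION, mod(rank+nprocs-1, nprocs), 0, &
                    MPI_COMM_WORLD, status, ierr)
  ! for calls that return a single status, the error comes back through ierr,
  ! not through STATUS(MPI_ERROR)
  if (ierr /= MPI_SUCCESS) print *, 'rank', rank, 'MPI_SENDRECV returned', ierr

  call MPI_FINALIZE(ierr)
end program ft_sketch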

MODULEFILE creation the easy way

If you use Environment Modules (from SourceForge, SGI, Cray, etc.) to set up and control your shell environment variables, we've created a new article on how to quickly and correctly create a modulefile. The technique is fast and produces a correct modulefile for any Intel Developer Products tool.

The article is here:

How to increase performance of MPI one-sided communication with Intel MPI?


We have an application whose domain decomposition ends with basically two sequences of actions.

One set of tasks (subset a) calls

call MPI_WIN_LOCK(MPI_LOCK_SHARED, some_rank_from_subset_b, 0, win, ierr)
call MPI_GET(buf, count, datatype, some_rank_from_subset_b, disp, count, datatype, win, ierr)
call MPI_WIN_UNLOCK(some_rank_from_subset_b, win, ierr)

The other tasks (subset b) are already waiting in the MPI_Barrier at the end of the domain decomposition. This performs nicely (the domain decomposition completes within seconds) with MVAPICH on our new Intel Xeon machine and on another machine with IBM BlueGene/Q hardware.
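For context, here is a minimal, self-contained sketch of the pattern described above (the buffer sizes, datatype, and the even/odd split into subsets a and b are illustrative assumptions, not taken from the application):

program onesided_sketch
  use mpi
  implicit none
  integer :: ierr, rank, nprocs, win
  integer(kind=MPI_ADDRESS_KIND) :: winsize, disp
  double precision :: local(100), remote(100)

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)

  local = dble(rank)
  winsize = 100 * 8                      ! bytes exposed by every rank
  call MPI_WIN_CREATE(local, winsize, 8, MPI_INFO_NULL, MPI_COMM_WORLD, win, ierr)

  ! subset a (here: even ranks) reads passively from a rank in subset b
  if (mod(rank, 2) == 0 .and. rank + 1 < nprocs) then
    disp = 0
    call MPI_WIN_LOCK(MPI_LOCK_SHARED, rank + 1, 0, win, ierr)
    call MPI_GET(remote, 100, MPI_DOUBLE_PRECISION, rank + 1, disp, &
                 100, MPI_DOUBLE_PRECISION, win, ierr)
    call MPI_WIN_UNLOCK(rank + 1, win, ierr)
  end if

  ! subset b is simply waiting here, as in the original description
  call MPI_BARRIER(MPI_COMM_WORLD, ierr)

  call MPI_WIN_FREE(win, ierr)
  call MPI_FINALIZE(ierr)
end program onesided_sketch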

Intel MPI suddenly exited


I installed Intel MPI on Windows 7 x64 and executed "mpiexec -n 4 program.exe". It seemed to run fine for about 30 hours and was using the expected resources. However, the process suddenly exited with about 10 hours remaining in the computation, with the following error stack:


[mpiexec@Simulation-PC] ..\hydra\pm\pmiserv_cb.c (773): connection to proxy 0 at host Simulation-PC failed


MPI_COMM_SPAWN crash with Intel MPI

I have two Fortran MPI programs (driver.f90 and hello.f90, both attached here).

driver.f90 contains a call to MPI_COMM_SPAWN that launches hello.x.

When I run it using the command "mpirun -np 2 ./driver.x", it crashes (output below this message). I noticed that spawned task 0 has a different parent communicator than the other tasks; I imagine that is the cause of the segmentation fault. It seems a very simple MPI program, and both OpenMPI and MPICH run it fine. Does anybody know what the problem might be?

I'm using impi/ and ifort 15.0.3.
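For reference, a minimal parent/child pair matching this description might look like the following (a sketch; the attached driver.f90 and hello.f90 may differ):

! driver.f90 (parent): every rank participates in the collective spawn
program driver
  use mpi
  implicit none
  integer :: ierr, intercomm
  integer :: errcodes(2)

  call MPI_INIT(ierr)
  call MPI_COMM_SPAWN('./hello.x', MPI_ARGV_NULL, 2, MPI_INFO_NULL, 0, &
                      MPI_COMM_WORLD, intercomm, errcodes, ierr)
  call MPI_FINALIZE(ierr)
end program driver

! hello.f90 (child): retrieves the inter-communicator to its parent
program hello
  use mpi
  implicit none
  integer :: ierr, rank, parent

  call MPI_INIT(ierr)
  call MPI_COMM_GET_PARENT(parent, ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  print *, 'spawned rank', rank, 'parent comm', parent
  call MPI_FINALIZE(ierr)
end program hello

Run as "mpirun -np 2 ./driver.x" after building hello.x from hello.f90.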



nested mpirun commands


I have an MPI program that calls another MPI program (written by someone else) using a Fortran system call:



call MPI_INIT(ierr)

call system('mpirun -np 2 ./mpi_prog.x')

call MPI_FINALIZE(ierr)


When I run it (e.g. mpirun -np 4 ./driver.x), it crashes inside mpi_prog.x (at a barrier). When I build it with MPICH it works fine, though. Any hints on what might be wrong? (I realize nested mpiruns are completely outside the MPI standard and highly implementation-dependent.)
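For what it's worth, a minimal self-contained sketch of the nested-launch pattern described above (program and command names are illustrative; execute_command_line is the standard Fortran 2008 counterpart of the system extension):

program driver
  use mpi
  implicit none
  integer :: ierr, rank, stat

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

  ! let a single rank launch the inner job rather than all ranks at once
  if (rank == 0) then
    call execute_command_line('mpirun -np 2 ./mpi_prog.x', exitstat=stat)
    print *, 'inner mpirun exit status:', stat
  end if

  call MPI_BARRIER(MPI_COMM_WORLD, ierr)
  call MPI_FINALIZE(ierr)
end program driver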


NOTE: when I do something like:

Error while building NAS benchmarks using Intel MPI

I am trying to build the NAS benchmarks using Intel MPI, and below is the makefile that I am using.








