Intel® Cluster Studio XE

MODULEFILE creation the easy way

If you use Environment Modules (from SourceForge, SGI, Cray, etc.) to set up and control your shell environment variables, we've created a new article on how to quickly and correctly create a modulefile. The technique is fast and produces a correct modulefile for any Intel Developer Products tool.

The article is here:

How to increase MPI one-sided communication performance with Intel MPI?


We have an application whose domain decomposition ends with basically two sequences of actions:

One set of tasks (subset a) calls:

call mpi_win_lock(some_rank_from_subset_b)
call mpi_get(some_rank_from_subset_b)
call mpi_win_unlock(some_rank_from_subset_b)

The others (subset b) are stuck in the MPI_Barrier at the end of the domain decomposition. This performs nicely (the domain decomposition completes within seconds) with MVAPICH on our new Intel Xeon machine and on another machine with IBM Blue Gene/Q hardware.

Intel MPI suddenly exited


I installed Intel MPI on Windows 7 x64 and executed "> mpiexec -n 4 program.exe". It seemed to be running fine for about 30 hours and was using the expected resources. However, the process suddenly exited with about 10 hours remaining in the computation, with the following error stack:


[mpiexec@Simulation-PC] ..\hydra\pm\pmiserv_cb.c (773): connection to proxy 0 at host Simulation-PC failed


I have two Fortran MPI programs (driver.f90 and hello.f90, both attached here).

driver.f90 contains a call to MPI_COMM_SPAWN, which launches hello.x.

When I run it using the command "mpirun -np 2 ./driver.x", it crashes (output below this message). I noticed that spawned task 0 has a different parent communicator than the other tasks; I imagine that is the cause of the segmentation fault. It seems like a very simple MPI program, and both OpenMPI and MPICH run it fine. Does anybody know what the problem might be?

I'm using impi/ and ifort 15.0.3.
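For context, the spawn pattern described above looks roughly like the following. This is a minimal sketch, not the attached source: the process counts and variable names are assumptions, and error handling is omitted.

program driver
  use mpi
  implicit none
  integer :: ierr, intercomm
  integer :: errcodes(2)

  call MPI_Init(ierr)
  ! All ranks of MPI_COMM_WORLD participate in the spawn; rank 0 (the
  ! "root" argument) supplies the command. Two copies of hello.x start.
  call MPI_Comm_spawn('./hello.x', MPI_ARGV_NULL, 2, MPI_INFO_NULL, &
                      0, MPI_COMM_WORLD, intercomm, errcodes, ierr)
  call MPI_Finalize(ierr)
end program driver

Each spawned rank would retrieve the parent intercommunicator with MPI_Comm_get_parent; per the MPI standard that communicator should be the same on every spawned task, which is why the asymmetry described above looks like a bug rather than intended behavior.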



nested mpirun commands


I have an MPI program that calls another MPI program (written by someone else) using a Fortran system call:



call MPI_INIT(..)

call system('mpirun -np 2 ./mpi_prog.x')

call MPI_FINALIZE(...)


When I run it (e.g., mpirun -np 4 ./driver.x), it crashes inside mpi_prog.x (at a barrier). When I build it with MPICH, it works fine. Any hints on what might be wrong? (I realize nested mpiruns are completely outside the MPI standard and highly implementation-dependent.)


NOTE: when I do something like:

Intel® Parallel Studio XE 2015 Update 4 Cluster Edition Readme

The Intel® Parallel Studio XE 2015 Update 4 Cluster Edition for Windows* combines all Intel® Parallel Studio XE and Intel® Cluster Tools into a single package. This multi-component software toolkit contains the core libraries and tools to efficiently develop, optimize, run, and distribute parallel applications for clusters with Intel processors.  This package is for cluster users who develop on and build for IA-32 and Intel® 64 architectures on Windows*. It contains:

  • Developers
  • Linux*
  • Microsoft Windows* (XP, Vista, 7)
  • Microsoft Windows* 8.x
  • C/C++
  • Fortran
  • Intel® Parallel Studio XE Cluster Edition
  • Message Passing Interface
  • Cluster Computing
Error while building NAS benchmarks using Intel MPI

I am trying to build the NAS benchmarks using Intel MPI; below is the makefile that I am using.









Problem with Intel MPI on >1023 processes

I have been testing code using Intel MPI (version 4.1.3, build 20140226) and the Intel compiler (version 15.0.1, build 20141023). When we attempt to run on 1024 or more total processes, we receive the following error:

MPI startup(): ofa fabric is not available and fallback fabric is not enabled

With fewer than 1024 processes the error does not occur, and I also do not see it at 1024 processes using OpenMPI and GCC.
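The error message refers to Intel MPI's fabric-selection controls. As a hedged illustration only (the variable names are from the Intel MPI 4.x reference manual; whether they resolve this particular failure at >1023 processes is an assumption), the relevant settings look like:

```shell
# Permit Intel MPI to fall back to another fabric when the selected one
# (here, ofa) cannot be initialized
export I_MPI_FALLBACK=1
# Alternatively, select the fabrics explicitly, e.g. shared memory within
# a node and DAPL between nodes
export I_MPI_FABRICS=shm:dapl
```

Either setting changes which transport Intel MPI uses at startup, which is exactly the decision the "fallback fabric is not enabled" message is complaining about.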
