Intel® Cluster Ready

Problems reading HDF5 files with IntelMPI?


Is anyone aware of problems with PHDF5 and Intel MPI? A test code that reads an HDF5 file in parallel has trouble scaling when I run it with Intel MPI, but no trouble if I run it with, for example, POE.

I'm using Intel compilers 13.0.1, Intel MPI, and HDF5 1.8.10.

The code just reads an 800×800×800 HDF5 file, and the times I get for
reading it are:

128 procs  - 0.7262E+01
1024 procs - 0.9815E+01
1280 procs - 0.9930E+01
1600 procs - ???????  (it gets stalled...)

Intel MPI issue with the usage of Slurm

To whom it may concern,

Hello. We use Slurm to manage our cluster, but we have hit a new issue with Intel MPI and Slurm. When a node reboots, Intel MPI jobs fail on that node; manually restarting the Slurm daemon fixes it. I also tried adding "service slurm restart" to /etc/rc.local, which runs at the end of boot, but the issue is still there.
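If the daemon comes up before the network or the authentication service is ready, restarting it from rc.local may run too early. A minimal sketch of a systemd drop-in that orders the daemon after networking, assuming the compute-node daemon is named slurmd and munge is used for authentication (both assumptions to verify on your cluster):

```ini
# Hypothetical drop-in: /etc/systemd/system/slurmd.service.d/order.conf
# Assumes a systemd-based node with a "slurmd" unit and munge auth.
[Unit]
# Start slurmd only once the network is actually up and munge is running,
# so the node re-registers cleanly with the controller after a reboot.
After=network-online.target munge.service
Wants=network-online.target
```

After adding the file, reload systemd (`systemctl daemon-reload`) and re-enable the unit so the ordering takes effect on the next boot.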

[6] Assertion failed in file ../../segment.c



We have compiled our parallel code with the latest Intel software stack. We make heavy use of passive-target one-sided RMA PUT/GET operations with derived datatypes. We are now experiencing a problem where our application sometimes fails with either a segmentation fault or the following error message:



[6] Assertion failed in file ../../segment.c at line 669: cur_elmp->curcount >= 0

[6] internal ABORT - process 6


Intel Inspector shows a problem inside the Intel MPI library:

-perhost not working with IntelMPI v.

The -perhost option does not work as expected with IntelMPI v., though it does work with IntelMPI v.


$ qsub -I -lnodes=2:ppn=16:compute,walltime=0:15:00
qsub: waiting for job to start
qsub: job ready

$ mpirun -n 2 -perhost 1 uname -n

Intel MPI


I have two questions about Intel MPI on Microsoft Windows 7 64-bit.

The first one concerns the Intel MPI version. If I execute

C:\Program Files (x86)\Intel\MPI-RT\\em64t\bin\smpd -version

I get "3.1". If I execute

C:\Program Files (x86)\Intel\MPI-RT\\em64t\bin\smpd -version

I get "4.1.3"

This is a problem for us, because our product is compiled with Intel MPI and needs smpd version 4.1.3.

cpuinfo output from system call different


I'm using Intel MPI 5.0, and I'm making a system call inside my Fortran program; it returns different values depending on the environment variable I_MPI_PIN_DOMAIN. Why is that? How do I make it give consistent output?

Sample Fortran (Intel Fortran 13.1) program that can reproduce this:

        program tester
        implicit none
c       Query the Packages(sockets) line from Intel MPI's cpuinfo utility
        call system("cpuinfo | grep 'Packages(sockets)' | "//
     &              "tr -d ' ' | cut -d ':' -f 2")
        end program tester



$ mpirun -genv I_MPI_PIN_DOMAIN node -np 1 ./a.out
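One plausible explanation (an assumption, not confirmed here) is that a process launched via system() inherits the CPU affinity mask of the calling MPI rank, so pinning set through I_MPI_PIN_DOMAIN narrows what utilities spawned by the rank can see. A minimal sketch demonstrating the inheritance on Linux:

```python
# Sketch: a child process inherits its parent's CPU affinity (Linux).
# If cpuinfo derives any of its output from the affinity mask (an
# assumption), a rank pinned via I_MPI_PIN_DOMAIN would pass that
# narrowed mask on to the cpuinfo it launches.
import os
import subprocess

# Affinity mask of this process: the set of CPUs it may run on.
parent_mask = sorted(os.sched_getaffinity(0))

# A child spawned from this process reports the same inherited mask.
child_mask = subprocess.run(
    ["python3", "-c", "import os; print(sorted(os.sched_getaffinity(0)))"],
    capture_output=True, text=True, check=True,
).stdout.strip()

print("parent:", parent_mask)
print("child :", child_mask)
```

If that is the cause, running the query once outside any pinned MPI process (or widening the mask before the system() call) should give consistent output.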


Intel MPI Runtime For Windows installer hang

We are seeing a couple of computers (all running Windows 7) that hang during installation. We ran the installer with the log argument, and this is what it shows. We see this with a couple of different versions (4.0 and 4.1). Any ideas? There aren't any windows waiting for input. This was done with an admin user, and we also tried running the command prompt as Administrator; same issue.


How do I configure Visual Studio for Intel MPI?

I'm using Visual Studio 2013 and IncrediBuild 5.5 to build apps using Intel Parallel Studio. I've recently downloaded Intel MPI and found that it essentially has a compiler wrapper (mpiicpc) that I need to use to compile apps (and, correspondingly, run them with mpiexec).

My question is this: how do I configure Visual Studio 2013 so that mpiicpc is used instead of icpc for compilation?
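One common alternative to swapping compilers (the wrapper mainly adds include/library paths for command-line builds) is to keep the IDE's compiler and point the project at the Intel MPI headers and import library. A hedged sketch; the property names and paths below are assumptions that vary by Intel MPI version, so verify them against your installation:

```
Hypothetical Visual Studio project settings (verify paths locally):
C/C++  -> General -> Additional Include Directories : $(I_MPI_ROOT)\intel64\include
Linker -> General -> Additional Library Directories : $(I_MPI_ROOT)\intel64\lib\release
Linker -> Input   -> Additional Dependencies        : impi.lib
Debugging -> Command           : mpiexec.exe
Debugging -> Command Arguments : -n 4 "$(TargetPath)"
```

With this approach the project still builds with the IDE's toolchain, and mpiexec launches the resulting binary for debugging.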


