Intel® Clusters and HPC Technology

Problem with IntelIB-Basic, Intel MPI and I_MPI_FABRICS=tmi


We have a small cluster (head node + 4 nodes with 16 cores) using Intel InfiniBand hardware. The cluster runs CentOS 6.6 (with the CentOS 6.5 kernel).

On this cluster Intel Parallel Studio XE 2015 is installed. I_MPI_FABRICS is set by default to "tmi" only.
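One hedged way to narrow this down is to override the fabric selection for a single test run; shm:tcp is a pure software fallback that avoids the TMI provider entirely (the binary name below is a placeholder):

```shell
# Override Intel MPI's default fabric for one test run.
# shm:tcp = shared memory intra-node, TCP inter-node; no TMI involved.
export I_MPI_FABRICS=shm:tcp
echo "I_MPI_FABRICS=$I_MPI_FABRICS"
# mpirun -np 16 ./your_app   # placeholder binary; submit through torque+maui as usual
```

If the job then runs, the problem is specific to the tmi provider rather than to the job setup.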

When I start a job (using torque+maui) on several nodes, for example this one:

mpiexec starts multiple processes with the same rank 0

Hello everybody

I am using the new Intel Composer XE 2015, which comes with a version of Intel MPI. I used Open MPI before, compiled with g++, and my programs ran fine. With the new Intel compiler I can compile and run the application, but if I execute, for example, mpiexec -n 12 test_program, the program is executed 12 times, always with rank 0: the processes do not communicate and each assumes it is on its own. So I assume something is wrong with the mpiexec script, or maybe with a message-passing daemon. Thanks in advance for your answers.
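A common cause of this symptom is a mismatch between the MPI the binary was built against and the mpiexec used to launch it — easy to hit when Open MPI and Intel MPI are both installed. A hedged quick check that the compiler wrapper and the launcher resolve to the same installation:

```shell
# Check which MPI installation the compiler wrapper and launcher come from.
# If the two directories differ, the binary and the launcher are mismatched
# and every process will initialize as a standalone rank 0.
MPICC_PATH=$(command -v mpicc || echo "not-found")
MPIEXEC_PATH=$(command -v mpiexec || echo "not-found")
echo "mpicc:   $MPICC_PATH"
echo "mpiexec: $MPIEXEC_PATH"
```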


IMB 4.0 and data-check

The data-check compile-time option seems poorly debugged.

To reproduce:

1. Compile IMB-MPI1 with data-check enabled (-DCHECK)

2. Create a msg_lengths file (for L in `seq 0 100`; do echo $L >> msg_len.txt; done)

3. Run with your favorite MPI implementation using two processes, in the simplest possible way, with the following arguments to IMB-MPI1: 

   -msglen msg_len.txt -iter 1 Exchange

and terrible things happen.

For example, with Open MPI and the command line:
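The reproduction steps above can be sketched as one script; the build and run lines are commented out since they need the IMB 4.0 sources and an MPI installation, and the source layout shown is an assumption:

```shell
# Step 2 from above: one message length per line, 0..100.
for L in $(seq 0 100); do echo $L; done > msg_len.txt
wc -l msg_len.txt

# Step 1 (assumed source layout): build IMB-MPI1 with data checking enabled.
# (cd imb/src && make -f make_mpich CPPFLAGS=-DCHECK IMB-MPI1)

# Step 3: two processes, minimal arguments.
# mpirun -np 2 ./IMB-MPI1 -msglen msg_len.txt -iter 1 Exchange
```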

Linking with mpiicc (impi 5.0.1)

I have been trying to configure (and compile) the PETSc library with impi 5.0.1 (and ifort Version Build 20140723) using mpiicc script for C compilation. However, the configure process fails with the error "...compiler mpiicc is broken! It is returning a zero error when the linking failed..."

I think that there might be an issue with the following code snippet located at the end of the mpiicc script:
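The quoted snippet did not survive here. As a hedged illustration of what PETSc's configure is complaining about, the check below links a program with an unresolved symbol and inspects the exit status — a healthy compiler driver must propagate a nonzero status when the link step fails (plain cc stands in for mpiicc):

```shell
# A translation unit that compiles but cannot link: missing_symbol is never defined.
cat > broken_link.c <<'EOF'
extern int missing_symbol(void);
int main(void) { return missing_symbol(); }
EOF

# A correct driver returns the linker's failure as a nonzero exit status;
# mpiicc reportedly returned zero here, which is what breaks configure.
if cc broken_link.c -o broken_link 2>/dev/null; then
    echo "driver returned zero despite link failure"
else
    echo "driver correctly returned nonzero on link failure"
fi
```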

mpiexec hangs after program exit

Our product includes GUI and engine parts. Intel MPI 4.1 is used in the engine code. The GUI calls the engine through mpiexec. Everything works fine on Windows 7 and Windows Server 2008. When we run the product on Windows 8 and Windows Server 2012, we hit a problem. 

The code to start the engine looks like CreateProcess( NULL, "mpiexec -n 2 engine", NULL, ........) . After the engine exits, mpiexec still hangs in memory and cannot exit, which causes the GUI to hang. If we run the engine from the command line, mpiexec exits after the engine exits. 

Tuning Intel MPI for Phi

Does setting


change other MPI environment variables, particularly any that would tune MPI for the MIC system architecture?  

As a side question, has anyone written a Tuning and Tweaking guide for IMPI for Phi?  For example, what I_MPI variables could one use to help tune an app targeting 480 ranks across 8 Phis?
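No such guide is cited in the thread; as a hedged starting point, these are Intel MPI environment variables that existed in the impi 4.1/5.0 era for Phi runs — the names and values should be verified against the Intel MPI reference manual for your version:

```shell
# Enable launching ranks on Xeon Phi coprocessors (Intel MPI 4.1+).
export I_MPI_MIC=1
# Shared memory within a card/host, DAPL between them.
export I_MPI_FABRICS=shm:dapl
# Print pinning and fabric-selection details at startup, useful when tuning.
export I_MPI_DEBUG=5
env | grep '^I_MPI_'
```

With 480 ranks across 8 Phis, the debug output is the quickest way to confirm how ranks were actually pinned and which fabric each pair of ranks ended up using.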



IFORT constant numbers


I have frequently heard that in C++ HPC Xeon Phi applications it is beneficial to declare variables as const where possible. However, I cannot ascertain whether this is possible in Fortran. Is there a way to do this type of optimization with ifort?



Fatal error in MPI_Init: Other MPI error, error stack: MPIR_Init_thread(264): Initialization failed


I am running Intel MPI for the Intel mp_linpack benchmark (xhpl_em64t).


1. I sourced the environment script from /opt/intel/impi/bin64/

2. I did "mpdboot -f hostfile"

$ cat hostfile
node1
node2

3. I did "mpirun -f hostfile -ppn 1 -np 2 ./xhpl_em64t"

After step 3, errors occurred. Below is the error message with I_MPI_DEBUG=50
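For what it's worth, MPD hostfiles take one bare hostname per line (no spaces), and the sequence above can be sketched as below; the sourced script name is an assumption from typical impi layouts, and the launch lines are commented out since they need the cluster:

```shell
# MPD hostfiles: one bare hostname per line. A space, as in "node 1",
# would be read as hostname "node" plus a stray token.
cat > hostfile <<'EOF'
node1
node2
EOF
cat hostfile

# Assumed environment script name; adjust to your impi install:
# source /opt/intel/impi/bin64/mpivars.sh
# mpdboot -f hostfile -n 2
# mpirun -f hostfile -ppn 1 -np 2 ./xhpl_em64t
```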
