Intel® Cluster Studio XE

MPI Library Runtime Environment 4.0


I am working through a remote desktop on Cornell University servers, and I do not have an internet connection on that desktop. I am using Visual Studio 2008 with Intel Visual Fortran Composer XE 2011, which supposedly has MPI Library Runtime Environment 4.0 already installed.

I can't find the files msmpi.lib or impi.lib, or the include path. However, I did find a folder with other files such as mpichi2mpi.dll, impi.dll, impimt.dll, mpiexec.exe, wmpiexec.exe, etc. The package ID listed in the support file is w_mpi_rt_p_4.0.1.007.

How to run a hybrid MPI/OpenMP application

Dear Sir or Madam,

I am using Intel MPI, OpenMP, and Intel Composer XE 2015 to build a hybrid MPI/OpenMP application. I want to run the executable file of my application on 3 SMP computers with 3 MPI processes, where each MPI process consists of 16 OpenMP threads. Our PC cluster has 3 SMP nodes connected by InfiniBand, and each node has 16 cores.
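A run like this is usually launched with one rank per node and the thread count pinned per rank. A minimal sketch with Intel MPI's Hydra launcher; the host names (node1, node2, node3) and the executable name (./hybrid_app) are placeholders:

```shell
export OMP_NUM_THREADS=16     # 16 OpenMP threads per MPI rank
export I_MPI_PIN_DOMAIN=omp   # pin each rank to a core domain sized to OMP_NUM_THREADS

# 3 ranks total, 1 rank per node, each rank spawning 16 threads
mpiexec.hydra -n 3 -ppn 1 -hosts node1,node2,node3 ./hybrid_app
```

With `-ppn 1`, each of the 3 nodes gets exactly one rank, so each rank's 16 threads can use all 16 cores of its node without oversubscription.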

Error message: control_cb (./pm/pmiserv/pmiserv_cb.c:1151): assert (!closed) failed

Hello, I get the following error message when I run my Fortran code on an HPC cluster at my university:

[mpiexec@node0653] control_cb (./pm/pmiserv/pmiserv_cb.c:1151): assert
(!closed) failed

I have attached my code. It compiles successfully in debug mode without any errors. I have also removed the stack size limit on my machine by running "ulimit -s unlimited" on the command line.

Problem on MPI: About Non-Blocking Collective operations


The structure of my code is,

         MPI_Allgatherv();  //Replaced by MPI_Iallgatherv();

The collective operations in part 2 are the bottleneck of this program.
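For reference, swapping MPI_Allgatherv for MPI_Iallgatherv only helps if independent work is placed between starting the collective and the matching wait. A minimal sketch of that overlap pattern; the buffer sizes and the compute step are illustrative, not taken from the poster's code (it requires an MPI runtime to build and run):

```c
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank contributes 4 doubles (illustrative size). */
    int sendcount = 4;
    double *sendbuf = malloc(sendcount * sizeof(double));
    for (int i = 0; i < sendcount; ++i) sendbuf[i] = (double)rank;

    int *recvcounts = malloc(size * sizeof(int));
    int *displs     = malloc(size * sizeof(int));
    for (int i = 0; i < size; ++i) { recvcounts[i] = sendcount; displs[i] = sendcount * i; }
    double *recvbuf = malloc(sendcount * size * sizeof(double));

    /* Start the non-blocking collective (MPI-3). */
    MPI_Request req;
    MPI_Iallgatherv(sendbuf, sendcount, MPI_DOUBLE,
                    recvbuf, recvcounts, displs, MPI_DOUBLE,
                    MPI_COMM_WORLD, &req);

    /* ... overlap: do computation here that does NOT touch sendbuf/recvbuf ... */

    /* recvbuf is only valid after the wait completes. */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    free(sendbuf); free(recvbuf); free(recvcounts); free(displs);
    MPI_Finalize();
    return 0;
}
```

If there is no independent computation to put in the overlap window, the non-blocking version will not be faster than the blocking one.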

Improving MPI Communication between the Intel® Xeon® Host and Intel® Xeon Phi™

MPI symmetric mode is widely used on systems equipped with Intel® Xeon Phi™ coprocessors. On a system where one or more coprocessors are installed on an Intel® Xeon® host, the Transmission Control Protocol (TCP) is used by default for MPI messages sent between the host and the coprocessors, or between coprocessors on the same host. For some performance-critical applications, this MPI communication may not be fast enough.
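One common way to move this intra-node traffic off TCP is to select a DAPL fabric over SCIF via Intel MPI environment variables. A sketch under those assumptions; the provider name (ofa-v2-scif0) comes from the system's /etc/dat.conf and may differ, and ./app and the mic0 host name are placeholders:

```shell
export I_MPI_FABRICS=shm:dapl          # shared memory intra-rank, DAPL between host and coprocessor
export I_MPI_DAPL_PROVIDER=ofa-v2-scif0  # DAPL provider over SCIF; check /etc/dat.conf for yours

# One rank on the host, one on the coprocessor (symmetric mode)
mpiexec.hydra -n 1 -host localhost ./app : -n 1 -host mic0 ./app.mic
```

Whether this helps depends on message sizes and the installed DAPL providers, so it is worth benchmarking against the TCP default.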
