Intel® Cluster Studio

Run Intel MPI without mpirun/mpiexec

Hi,

I am wondering whether Intel MPI supports running an MPI program without mpirun/mpiexec on the command line.

I know that the MPI-2 standard supports the “dynamic process” feature, i.e., dynamically generating/spawning processes from an existing MPI process.

What I am trying to do here is: 1) first, launch a singleton MPI process without mpirun/mpiexec on the command line; 2) then, use MPI_Comm_spawn to spawn a set of processes on different host machines. A rough sketch of what I mean is below.
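
A minimal sketch of the parent process in C; the worker binary ./worker and the host name node02 are placeholders, and whether the spawn can actually reach a remote host presumably depends on the process manager being available there:

/* parent.c -- started directly as ./parent, with no mpirun/mpiexec.
   MPI_Init on such a process performs a "singleton init", after which
   MPI_Comm_spawn can start additional processes. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm intercomm;
    MPI_Info info;
    int errcodes[4];

    MPI_Init(&argc, &argv);            /* singleton init: world size is 1 */

    /* Ask the runtime to place the children on another machine;
       "node02" is a placeholder host name. */
    MPI_Info_create(&info);
    MPI_Info_set(info, "host", "node02");

    /* Spawn 4 copies of ./worker (placeholder binary) and get back an
       intercommunicator connecting the parent to the children. */
    MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 4, info,
                   0, MPI_COMM_SELF, &intercomm, errcodes);

    MPI_Info_free(&info);
    MPI_Comm_disconnect(&intercomm);
    MPI_Finalize();
    return 0;
}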

What/where is the DAPL provider libdaplomcm.so.2?

The DAPL providers ucm and scm are frequently mentioned, but what is libdaplomcm.so.2?

Could someone point me to a description of the use case for the DAPL provider libdaplomcm.so.2?

I am currently using the Intel MPI Library 4.1 for Linux with Mellanox OFED 2.1; shm:dapl and shm:ofa both seem to work, but with shm:dapl I get warning messages about not being able to find libdaplomcm.so.2. Mellanox DAPL does not have this file, and it does not appear in DAPL 2.0.41 either: http://www.openfabrics.org/downloads/dapl/
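
In case it is relevant, this is roughly how I check which providers are registered and pin Intel MPI to one explicitly; the provider name ofa-v2-mlx4_0-1u is just an example of a typical Mellanox dat.conf entry, so substitute one actually listed on your node:

# List the DAPL providers registered on this node
cat /etc/dat.conf

# Pin the fabric and provider instead of letting Intel MPI probe
export I_MPI_FABRICS=shm:dapl
export I_MPI_DAPL_PROVIDER=ofa-v2-mlx4_0-1u   # example name from dat.conf
export I_MPI_DEBUG=2                          # reports the selected provider
mpirun -n 2 ./test.x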

Memory Leak detected by Inspector XE in MPI internal buffer

I would like to find out whether the Intel MPI Library can be configured to alter the threshold for the creation of internal buffers, so that I can verify the source of a memory leak reported by Inspector XE.

Please refer to my post in Intel's Inspector XE forum, which includes a simple Fortran program that demonstrates the issue:

http://software.intel.com/en-us/forums/topic/508656
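
For what it is worth, the only related knobs I have found so far are the eager-protocol thresholds; whether these control the buffers that Inspector XE is flagging is exactly what I am trying to confirm, so treat that as an assumption:

# Messages at or below the threshold (bytes) go through internal
# eager buffers; shrinking it should change when they are allocated.
export I_MPI_EAGER_THRESHOLD=1024            # inter-node threshold
export I_MPI_INTRANODE_EAGER_THRESHOLD=1024  # intra-node (shm) threshold
mpirun -n 2 ./leak_test.x                    # placeholder for the Fortran reproducer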

Segfault in DAPL with Mellanox OFED 2.1

Hi,

We have been having problems with the Intel MPI Library crashing since we updated to the latest Mellanox OFED 2.1. For example, the test program supplied with Intel MPI (test/test.f90) crashes with a segfault. I compiled it using

mpif90 -debug all /apps/intel-mpi/4.1.1.036/test/test.f90 -o test.x

and managed to get a backtrace from the crash using idbc.
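
Roughly, the way I captured it was to enable core dumps and load the core file into idbc; the core file name below is a placeholder for whatever your system produces:

ulimit -c unlimited        # allow core files to be written
mpirun -n 2 ./test.x       # crashes, leaving a core file behind
idbc ./test.x core.12345   # placeholder core name; 'where' then prints the stack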
