Intel® Clusters and HPC Technology

MPI 4.1.0.024

Hi

I have Intel MPI v4.1.0.024 installed. A "Hello World" test program works when I run it on one node with 12 CPUs, but when I run it on two nodes (24 CPUs) it does not, and I get an error message.
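
The test program itself is not included in the post; a minimal MPI "Hello World" of the kind described, printing rank, world size, and hostname, would look roughly like the sketch below.

/* Illustrative sketch only; the poster's actual test program is not shown. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(host, &len);

    printf("Hello world from rank %d of %d on %s\n", rank, size, host);

    MPI_Finalize();
    return 0;
}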

Slowdown in p2p MPI calls

Dear MPI users,

I'm using Intel MPI cs-2011. My code (OpenMP + MPI) performs some MPI send and receive calls at each time step, after a kernel computation. The MPI calls are used for ghost-cell exchange (a few kB).

I've noticed a significant slowdown during the computation. I suspect the problem lies in some low-level MPI setting, because the problem disappears when I use OpenMPI. I'm using InfiniBand and 12 cores on one node, so only intranode communication is involved.
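
The exchange itself is not shown in the post; a typical per-time-step ghost-cell exchange of this kind, sketched below with assumed buffer names, neighbour ranks, and ghost-layer size, posts non-blocking receives and sends and then waits for completion before the next kernel computation.

/* Hypothetical sketch of a per-time-step ghost-cell exchange; buffer names,
 * neighbour ranks, and sizes are assumptions, since the actual code is not shown. */
#include <mpi.h>

void exchange_ghosts(double *send_left, double *send_right,
                     double *recv_left, double *recv_right,
                     int nghost, int left, int right, MPI_Comm comm)
{
    MPI_Request req[4];

    /* Post non-blocking receives first, then sends, and wait for all four
     * operations to complete before continuing the computation. */
    MPI_Irecv(recv_left,  nghost, MPI_DOUBLE, left,  0, comm, &req[0]);
    MPI_Irecv(recv_right, nghost, MPI_DOUBLE, right, 1, comm, &req[1]);
    MPI_Isend(send_right, nghost, MPI_DOUBLE, right, 0, comm, &req[2]);
    MPI_Isend(send_left,  nghost, MPI_DOUBLE, left,  1, comm, &req[3]);

    MPI_Waitall(4, req, MPI_STATUSES_IGNORE);
}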

Using MPI in parallel OpenMP regions

Hi all,

I am trying to call MPI from within OpenMP regions, but I cannot get it working properly. My program compiles fine with mpiicc (4.1.1.036) and icc (13.1.2 20130514), and I checked that it is linked against the thread-safe libraries (libmpi_mt.so appears when I run ldd).

But when I try to run it (2 Ivy Bridge nodes x 2 MPI tasks x 12 OpenMP threads), I get a SIGSEGV without any backtrace:

/opt/softs/intel/impi/4.1.1.036/intel64/bin/mpirun -np 4 -ppn 2 ./mpitest.x

APPLICATION TERMINATED WITH THE EXIT STRING: Segmentation fault (signal 11)
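
The test program is not shown; a minimal sketch of the pattern being attempted, assuming each OpenMP thread issues its own MPI calls, might look like the following (the rank pairing and message tags are purely illustrative).

/* Illustrative sketch only; the poster's mpitest.x is not shown. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, size;

    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (provided < MPI_THREAD_MULTIPLE && rank == 0)
        fprintf(stderr, "warning: provided thread level %d < MPI_THREAD_MULTIPLE\n",
                provided);

    #pragma omp parallel
    {
        /* Each thread exchanges one integer with the same thread id on a
         * peer rank, using the thread id as the tag (assumes an even
         * number of ranks). */
        int tid   = omp_get_thread_num();
        int peer  = (rank + size / 2) % size;
        int token = rank * 1000 + tid, peer_token;

        MPI_Sendrecv(&token, 1, MPI_INT, peer, tid,
                     &peer_token, 1, MPI_INT, peer, tid,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}

If MPI_Init_thread reports a provided level below MPI_THREAD_MULTIPLE, concurrent MPI calls from different threads are not guaranteed to be safe, even when the thread-safe library is linked.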

Hybrid OpenMP/MPI doesn't work with the Intel compiler

Greetings,

We provide the full Intel cluster compiler suite on our cluster, which uses Torque/Moab. Last week one of my users complained that his hybrid OpenMP/MPI code wasn't running properly: the OpenMP portion was running fine, but MPI wasn't splitting the job up across nodes. So I dug into it a bit. Sure enough, the job launches on $X nodes, but each node gets the full range of work instead of a share of it.
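
The user's application is not available, but a small stand-alone diagnostic along these lines (a sketch, not part of the actual code) can show whether ranks are really being spread across nodes and whether each process sees the whole job.

/* Hypothetical diagnostic; not the user's code, which is not shown. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(host, &len);

    /* If the launcher is splitting the job correctly, 'size' equals the
     * total number of ranks requested and each hostname appears only for
     * the ranks placed on that node. */
    printf("rank %d of %d on %s, %d OpenMP threads\n",
           rank, size, host, omp_get_max_threads());

    MPI_Finalize();
    return 0;
}

If each node is effectively launching its own independent copy of the job, the printed ranks, world sizes, and hostnames will not add up to a single consistent job.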

Issue migrating to Intel MPI

I manage a legacy code that has been built with the Intel compiler and the MPI/Pro library for years, but over the last couple of years we have been trying to convert from MPI/Pro to Intel MPI. To date, we have tried to migrate three times using three different versions of Intel MPI, and every time we have hit a different roadblock. I am trying again, have hit yet another roadblock, and have run out of ideas on how to resolve it. The code appears to compile fine, but when I run it I get the following runtime error:

mpivars.sh

Hi,

I'm having some trouble running mpirun on my computer. The command mpirun -V returns the following:

/phys/sfw/intel/composer_xe_2013-09/composer_xe_2013_sp1.0.080/mpirt/bin/intel64/mpirun: 96: .: Can't open /phys/sfw/intel/composer_xe_2013-09/composer_xe_2013_sp1.0.080/mpirt/bin/intel64/mpivars.sh

Intermittently Cannot Connect To Local MPD

We are intermittently seeing this error message when running an MPI job with the latest MPI Run-Time Library V4:

/usr/diags/mpi/impi/4.1.1.036/bin64/mpiexec -genv LD_LIBRARY_PATH /usr/diags/mpi/impi/4.1.1.036/lib64 -machinefile /tmp/mymachlist.103060.run -n 32 /usr/diags/mpi/intel/intel/bin/olconft.intel RUNTIME=2
mpdroot: cannot connect to local mpd at: /tmp/mpd2.console_root
probable cause: no mpd daemon on this machine
possible cause: unix socket /tmp/mpd2.console_root has been removed
mpiexec_A00A6D99 (__init__ 1524): forked process failed; status=255

Intel Cluster Studio: How many people can run MPI jobs simultaneously on a cluster with a 2-seat floating license?

I know that if I buy Intel Cluster Studio with a two-user floating license,
only two people will be able to compile their MPI codes simultaneously.

What I want to know is:
1) How many users can execute their MPI jobs simultaneously?
2) If I have codes compiled with the Intel C compiler from the same Cluster Studio, how many people can execute them simultaneously?

Thanks in advance.
