Intel® Clusters and HPC Technology

MPI_Finalize Error Present with mpiicpc.

I have been having trouble with the Intel-compiled version of a scientific software stack.

The stack uses both OpenMP and MPI. When I started working on the code, it had been compiled with gcc and a gcc-compiled OpenMPI. Before any MPI code is added, the software compiles with icpc and runs without error.

The versions I am working with are Intel compiler 14.0.2, Intel MKL 11.1.2, and Intel MPI 4.1.3. I have tried raising the I_MPI_DEBUG level to get more informative messages, but what I always end up with is the MPI_Finalize error mentioned in the title.
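For context, a minimal hybrid MPI + OpenMP skeleton of the kind described here (an illustrative sketch only, not the actual stack) initializes MPI with MPI_Init_thread, checks the threading level it was given, and ends with a matching MPI_Finalize on every rank:

/* Minimal hybrid MPI + OpenMP skeleton (illustrative only, not the poster's
 * stack); build with mpiicc or mpiicpc plus the compiler's OpenMP flag. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided = 0, rank = 0, size = 0;

    /* Hybrid codes should request a threading level rather than call MPI_Init. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    if (provided < MPI_THREAD_FUNNELED) {
        fprintf(stderr, "MPI library does not provide MPI_THREAD_FUNNELED\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    #pragma omp parallel
    {
        /* Under FUNNELED, only the master thread may make MPI calls. */
        #pragma omp master
        printf("rank %d of %d running %d OpenMP threads\n",
               rank, size, omp_get_num_threads());
    }

    /* Every rank must reach MPI_Finalize with no outstanding communication. */
    MPI_Finalize();
    return 0;
}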

trivial code fails sometimes under SGE: HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:70): assert (!(pollfds[i].rev

A trivial ring-passing .f90 program fails to start about 50% of the time on our cluster (SGE 6.2u5). The same problem occurs with large codes.

The error message is the HYDT_dmxu_poll_wait_for_event assertion quoted in the title above.
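For context, the "trivial ring-passing program" referred to above is typically along the following lines; this is only an illustrative C sketch (the original was an .f90 program), not the poster's code:

/* Sketch of the kind of trivial ring-passing test described above; the
 * original was an .f90 program, this C version is only illustrative.
 * Intended to be run with at least two ranks. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, token, next, prev;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    next = (rank + 1) % size;
    prev = (rank - 1 + size) % size;

    if (rank == 0) {
        token = 42;                               /* rank 0 starts the ring */
        MPI_Send(&token, 1, MPI_INT, next, 0, MPI_COMM_WORLD);
        MPI_Recv(&token, 1, MPI_INT, prev, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("token travelled around a ring of %d ranks\n", size);
    } else {
        MPI_Recv(&token, 1, MPI_INT, prev, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Send(&token, 1, MPI_INT, next, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}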

MPI performance/settings issue

Hi,

I am using Intel MPI 4.1.3 with two different process managers, MPD and Hydra, and I see very different behavior.

When I use MPD (mpiexec -perhost 32 -nolocal -n 384 -env I_MPI_FABRICS shm:dapl ./wrf.exe), all 32 cores on each node run at 100%.

When I use Hydra (mpirun -perhost 32 -nolocal -n 384 -env I_MPI_FABRICS shm:dapl ./wrf.exe), only 26 cores run at 100%; the rest sit at about 0% CPU, and performance drops by roughly a factor of two.
What is the explanation, and how can I fix this?
 
Regards,

SK
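One generic way to compare what MPD and Hydra actually do with the ranks (a diagnostic sketch, not an Intel-provided tool) is to have each rank print its own CPU affinity mask, for example with sched_getaffinity on Linux:

/* Diagnostic sketch: each rank prints the CPU affinity mask it was given,
 * so the placement produced by MPD and by Hydra can be compared directly.
 * Linux-specific (sched_getaffinity); illustrative only. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, cpu;
    char host[256];
    cpu_set_t mask;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    gethostname(host, sizeof(host));

    CPU_ZERO(&mask);
    if (sched_getaffinity(0, sizeof(mask), &mask) == 0) {
        printf("rank %d on %s is allowed on CPUs:", rank, host);
        for (cpu = 0; cpu < CPU_SETSIZE; cpu++)
            if (CPU_ISSET(cpu, &mask))
                printf(" %d", cpu);
        printf("\n");
    }

    MPI_Finalize();
    return 0;
}

If the two launchers produce different masks, for instance several ranks sharing the same cores under Hydra, that would account for the idle cores; running with a higher I_MPI_DEBUG level should also print the pinning map.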

Bug with -compile_info and -link_info options

Hi

When I run mpiicc -compile_info I get:

icc -I/softs/intel//impi/5.0.1.035/intel64/include -L/softs/intel//impi/5.0.1.035/intel64/lib/release -L/softs/intel//impi/5.0.1.035/intel64/lib -Xlinker --enable-new-dtags -Xlinker -rpath -Xlinker /softs/intel//impi/5.0.1.035/intel64/lib/release -Xlinker -rpath -Xlinker /softs/intel//impi/5.0.1.035/intel64/lib -Xlinker -rpath -Xlinker /opt/intel/mpi-rt/5.0/intel64/lib/release -Xlinker -rpath -Xlinker /opt/intel/mpi-rt/5.0/intel64/lib -lmpifort -lmpi -lmpigi -ldl -lrt -lpthread

hybrid application on the Xeon Phi

I would like to run a hybrid application (CP2K) on the Xeon Phi. The application is MPI + OpenMP, and I set up the environment in the following manner:

$ export OMP_NUM_THREADS=15
$ export I_MPI_PIN_PROCESSOR_LIST=$(seq -s "," 1 $OMP_NUM_THREADS 240)
$ echo $I_MPI_PIN_PROCESSOR_LIST
1,16,31,46,61,76,91,106,121,136,151,166,181,196,211,226
$ mpirun -n $(expr 240 / $OMP_NUM_THREADS)

However, the application runs awfully slowly. When I run "top", it shows only the 16 MPI processes and none of their threads, and reports the Phi as 6.2% user busy (16 / 240 * 100).
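As a quick check (an illustrative sketch, not part of CP2K), each rank can report which logical CPU each of its OpenMP threads is actually running on; on the coprocessor this shows immediately whether the 15 threads per rank land on distinct cores or pile up on one:

/* Sketch: every OpenMP thread of every MPI rank reports the logical CPU it
 * is currently running on (Linux sched_getcpu); illustrative only, not CP2K. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char **argv)
{
    int provided, rank;

    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    #pragma omp parallel
    {
        /* sched_getcpu() returns the CPU the calling thread is on right now. */
        printf("rank %d thread %d of %d on cpu %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads(), sched_getcpu());
    }

    MPI_Finalize();
    return 0;
}

Note also that top lists threads only in its per-thread view (press H, or start it as top -H); the default per-process view will show just the 16 MPI ranks even when their threads are busy.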

mpitune -V ERROR

I installed the Intel Parallel Studio Cluster 15.

The following command, "impi_5.0.1/intel64/bin/tune/mpitune -V", reports an error:

There is nothing like /p/pdsd/Intel_MPI/Software/Python/python-2.7.2-linux-intel64-rhel5.7/ in our environment. Is it a setup error?

tune/mpitune  -V

ERROR:root:code for hash md5 was not found.

Traceback (most recent call last):

  File "/p/pdsd/Intel_MPI/Software/Python/python-2.7.2-linux-intel64-rhel5.7/lib/python2.7/hashlib.py", line 139, in <module>

Tracing MPI calls with Intel MPI

Hi,

Is there a way to trace MPI calls with Intel MPI, meaning that each time an MPI function is called, I would like to see the function name, its parameters, and the calling process?

There is a -trace option, but it seems tied to the Trace Analyzer, which only appears able to report the cumulative time spent in each function.

I've been reading the documentation for a while now, and this does not seem to be supported, which strikes me as rather strange...

 

Regards,
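One generic approach, independent of the Trace Analyzer and not specific to Intel MPI, is the standard PMPI profiling interface: redefine the MPI functions of interest, print whatever you need, and forward to the PMPI_ entry points. A minimal sketch for MPI_Recv:

/* Sketch of a PMPI-based tracer: redefine the MPI calls of interest, log the
 * arguments, then forward to the PMPI_ entry points. Link this file into the
 * application, or build it as a shared library and preload it. */
#include <mpi.h>
#include <stdio.h>

int MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source,
             int tag, MPI_Comm comm, MPI_Status *status)
{
    int rank = -1;
    PMPI_Comm_rank(comm, &rank);                  /* identify the calling rank */
    fprintf(stderr, "[rank %d] MPI_Recv(count=%d, source=%d, tag=%d)\n",
            rank, count, source, tag);
    return PMPI_Recv(buf, count, datatype, source, tag, comm, status);
}

Every standard-conforming MPI library, Intel MPI included, exposes the PMPI_ names for exactly this purpose, so the same pattern can be applied to any MPI function you want to watch.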

Can shared memory work between processes running w/wo mpiexec?

I have GUI.exe (with a GUI) and engine.exe (without one). I use shared memory for inter-process communication between these two executables. Everything worked fine before I introduced mpiexec. Since I started launching engine.exe with mpiexec, the two processes can no longer talk to each other through shared memory. It seems the shared memory is "shielded" by mpiexec and is not visible to the outside world, since mpiexec itself also uses shared memory for communication in parallel computations.

By the way, both executables run on the same PC.

Is there any way to overcome this problem?
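For what it's worth, the usual pattern for sharing memory between two otherwise unrelated Windows processes is a named file mapping, as in the sketch below; the object name is invented for the example, and this is not the poster's code. One possibility worth checking: if mpiexec launches engine.exe under a different user account or session, the Local\ object namespace is not the one GUI.exe sees, and the open will fail even though the code is correct.

/* Sketch (Windows): share memory between two otherwise unrelated processes
 * by opening a NAMED file mapping instead of inheriting handles. Run one
 * copy with the argument "create" (the GUI.exe side), then start the other
 * side normally or under mpiexec and see whether it can open the mapping. */
#include <windows.h>
#include <stdio.h>
#include <string.h>

#define SHM_NAME "Local\\EngineSharedMem"    /* hypothetical object name */
#define SHM_SIZE 4096

int main(int argc, char **argv)
{
    HANDLE hMap;
    char  *view;
    int    creator = (argc > 1 && strcmp(argv[1], "create") == 0);

    if (creator)
        hMap = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                                  0, SHM_SIZE, SHM_NAME);
    else
        hMap = OpenFileMappingA(FILE_MAP_ALL_ACCESS, FALSE, SHM_NAME);

    if (hMap == NULL) {
        fprintf(stderr, "%s failed: error %lu\n",
                creator ? "CreateFileMapping" : "OpenFileMapping", GetLastError());
        return 1;
    }

    view = (char *)MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, SHM_SIZE);
    if (view == NULL) {
        CloseHandle(hMap);
        return 1;
    }

    if (creator) {
        strcpy(view, "hello from the GUI side");
        Sleep(30000);                 /* keep the mapping alive for the test */
    } else {
        printf("engine side read: \"%s\"\n", view);
    }

    UnmapViewOfFile(view);
    CloseHandle(hMap);
    return 0;
}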
