Intel® Cluster Studio XE

DAPL works but OFA does not

Dear Intel colleagues,

I have just set up a new diskless cluster. Running IMB PingPong with -genv I_MPI_FABRICS shm:dapl shows promising performance, but with -genv I_MPI_FABRICS shm:ofa it has never worked. I have provided the full system environment and execution traces below. Your help would be greatly appreciated.
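
For context, a minimal way to reproduce the comparison on two nodes (assuming the IMB-MPI1 benchmark binary is on the PATH; -ppn and I_MPI_DEBUG are standard Intel MPI options):

$ mpirun -genv I_MPI_DEBUG 5 -genv I_MPI_FABRICS shm:dapl -n 2 -ppn 1 IMB-MPI1 Pingpong
$ mpirun -genv I_MPI_DEBUG 5 -genv I_MPI_FABRICS shm:ofa -n 2 -ppn 1 IMB-MPI1 Pingpong

With I_MPI_DEBUG set to 5, Intel MPI prints which fabric each rank actually selected at startup, which confirms whether the OFA path is even being chosen before it fails.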

Problems reading HDF5 files with IntelMPI?

Hi,

is anyone aware of trouble with PHDF5 and IntelMPI? A test code that
reads an HDF5 file in parallel has trouble scaling when I run it with
IntelMPI, but no trouble if I run it with, for example, POE.

I'm using Intel compilers 13.0.1, IntelMPI 4.1.3.049, and HDF5 1.8.10.

The code just reads an 800x800x800 HDF5 file, and the times I get for
reading it are:

128 procs  - 0.7262E+01
1024 procs - 0.9815E+01
1280 procs - 0.9930E+01
1600 procs - ???????  (it gets stalled...)
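
For reference, a minimal sketch of the kind of parallel read involved (hypothetical file and dataset names; assumes a 3D dataset of doubles, read collectively, and that the process count divides the first dimension):

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
#include <hdf5.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Open the file through the MPI-IO driver. */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
    hid_t file = H5Fopen("cube.h5", H5F_ACC_RDONLY, fapl); /* hypothetical file name */
    hid_t dset = H5Dopen2(file, "/data", H5P_DEFAULT);     /* hypothetical dataset name */

    /* Each rank reads a slab of planes along the first dimension
       (assumes nprocs divides 800 evenly; a real code handles the remainder). */
    hsize_t dims[3]  = {800, 800, 800};
    hsize_t count[3] = {dims[0] / nprocs, dims[1], dims[2]};
    hsize_t start[3] = {rank * count[0], 0, 0};

    hid_t filespace = H5Dget_space(dset);
    H5Sselect_hyperslab(filespace, H5S_SELECT_SET, start, NULL, count, NULL);
    hid_t memspace = H5Screate_simple(3, count, NULL);

    /* Collective transfer: the code path that differs most across MPI stacks. */
    hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
    H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);

    double *buf = malloc(count[0] * count[1] * count[2] * sizeof *buf);
    double t0 = MPI_Wtime();
    H5Dread(dset, H5T_NATIVE_DOUBLE, memspace, filespace, dxpl, buf);
    double t1 = MPI_Wtime();
    if (rank == 0) printf("read time: %.4E s\n", t1 - t0);

    free(buf);
    H5Pclose(dxpl); H5Sclose(memspace); H5Sclose(filespace);
    H5Dclose(dset); H5Fclose(file); H5Pclose(fapl);
    MPI_Finalize();
    return 0;
}

One useful experiment at the hanging process count is to switch H5FD_MPIO_COLLECTIVE to H5FD_MPIO_INDEPENDENT: if the independent read completes, the stall is in the MPI collective I/O layer rather than in HDF5 itself.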

Intel MPI issue with Slurm

To whom it may concern,

Hello. We are using Slurm to manage our cluster. However, we have run into a new issue with Intel MPI and Slurm. When a node reboots, Intel MPI jobs fail on that node, but manually restarting the Slurm daemon fixes it. I also tried adding "service slurm restart" to /etc/rc.local, which runs at the end of booting, but the issue is still there.
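
For reference, the rc.local workaround being attempted would look roughly like the sketch below (assumptions: SysV-style init with a service named "slurm", and that the failure comes from slurmd starting before the network or munge is ready; the sleep duration is a guess):

# /etc/rc.local (sketch)
sleep 30                # hypothetical delay so the network and munge are up first
service munge restart   # assumption: munge must be running for slurmd to authenticate
service slurm restart

If the delayed restart still fails, comparing the environment of a boot-time slurmd with a manually restarted one (e.g. via /proc/<pid>/environ) may show what the manual restart actually fixes.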

[6] Assertion failed in file ../../segment.c

Hi,

We have compiled our parallel code using the latest Intel software stack. We use a lot of passive-target RMA one-sided PUT/GET operations along with derived datatypes. We are now experiencing a problem where our application sometimes fails with either a segmentation fault or the following error message:

[6] Assertion failed in file ../../segment.c at line 669: cur_elmp->curcount >= 0

[6] internal ABORT - process 6

Intel Inspector shows a problem inside the Intel MPI library:
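
For context, a minimal sketch of the access pattern described (hypothetical window size, buffer sizes, and datatype; a passive-target lock/unlock epoch issuing a PUT with a strided derived datatype):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Expose a window of 1024 doubles on every rank. */
    const int n = 1024;
    double *base;
    MPI_Win win;
    MPI_Alloc_mem(n * sizeof *base, MPI_INFO_NULL, &base);
    for (int i = 0; i < n; i++) base[i] = rank;
    MPI_Win_create(base, n * sizeof *base, sizeof *base,
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    /* Strided derived datatype: 16 blocks of 4 doubles, stride 8. */
    MPI_Datatype strided;
    MPI_Type_vector(16, 4, 8, MPI_DOUBLE, &strided);
    MPI_Type_commit(&strided);

    /* Passive-target epoch: PUT into the right-hand neighbour. */
    int target = (rank + 1) % nprocs;
    double buf[16 * 4];
    for (int i = 0; i < 16 * 4; i++) buf[i] = 100.0 * rank + i;

    MPI_Win_lock(MPI_LOCK_SHARED, target, 0, win);
    MPI_Put(buf, 16 * 4, MPI_DOUBLE, target, 0, 1, strided, win);
    MPI_Win_unlock(target, win);

    MPI_Barrier(MPI_COMM_WORLD);
    if (rank == 0) printf("passive RMA PUT with derived datatype done\n");

    MPI_Type_free(&strided);
    MPI_Win_free(&win);
    MPI_Free_mem(base);
    MPI_Finalize();
    return 0;
}

segment.c belongs to the derived-datatype (dataloop) processing that Intel MPI inherits from MPICH, which is consistent with the failure involving derived datatypes; a reproducer along these lines, scaled up to the real datatype, is the kind of test case support typically asks for.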

-perhost not working with IntelMPI v.5.0.1.035

The -perhost option does not work as expected with IntelMPI v.5.0.1.035, though it does work with IntelMPI v.4.1.0.024:

$ qsub -I -lnodes=2:ppn=16:compute,walltime=0:15:00
qsub: waiting for job 5731.hpc-class.its.iastate.edu to start
qsub: job 5731.hpc-class.its.iastate.edu ready

$ mpirun -n 2 -perhost 1 uname -n
hpc-class-40.its.iastate.edu
hpc-class-40.its.iastate.edu
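
With -perhost 1 the two ranks should land on different nodes, so the expected output is two distinct hostnames (the second name below is hypothetical):

$ mpirun -n 2 -perhost 1 uname -n
hpc-class-40.its.iastate.edu
hpc-class-41.its.iastate.edu

A likely explanation: under a job scheduler such as Torque, Intel MPI 5.x defaults to respecting the scheduler's process placement and silently ignores -perhost; later 5.x updates expose I_MPI_JOB_RESPECT_PROCESS_PLACEMENT=off to restore the old behaviour, which is worth trying if the release in use supports it.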
