Intel® Clusters and HPC Technology

MPI: collecting data from processes into an array

Hi,

I am trying to use MPI to split computations on an array across n processes.

So say each process has computed the values for its range, e.g.:



process 1: Array[1 to a1]

process 2: Array[a1+1 to a2]

process 3: Array[a2+1 to a3]

process 4: Array[a3+1 to a4]



What is the best way to send and receive array data like this?



Also, can the same approach be applied if the data is processed in contiguous memory locations?
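
A common way to collect variable-sized chunks like these is MPI_Gatherv, which lets every rank contribute a different element count and places each chunk at a chosen displacement in the root's array. A minimal Fortran sketch, with the chunk size, element type, and program name assumed for illustration:

    program gather_chunks
        use mpi
        implicit none
        integer :: ierr, rank, nprocs, i, local_n
        integer, allocatable :: counts(:), displs(:)
        real,    allocatable :: chunk(:), full(:)

        call MPI_Init(ierr)
        call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
        call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

        local_n = 4                  ! chunk size owned by this rank (uniform in this sketch)
        allocate(chunk(local_n))
        chunk = real(rank)           ! stand-in for the values computed by this rank

        allocate(counts(nprocs), displs(nprocs))
        counts = local_n             ! unequal sizes would be exchanged first, e.g. with MPI_Allgather
        displs = [( sum(counts(1:i-1)), i = 1, nprocs )]   ! offset of each chunk in full

        allocate(full(sum(counts)))  ! only rank 0 actually uses this buffer
        call MPI_Gatherv(chunk, local_n, MPI_REAL, full, counts, displs, &
                         MPI_REAL, 0, MPI_COMM_WORLD, ierr)

        call MPI_Finalize(ierr)
    end program gather_chunks

Since MPI buffers are plain contiguous memory, the same call applies whenever each rank's data occupies one contiguous block; non-contiguous layouts would need a derived datatype instead.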

mpiifort and 'end-of-file during read'

Hi,

I'm trying to run an MPI code compiled with mpiifort. At the beginning of the code, it opens and reads an ASCII file (the file is read by all processes):

file_name = 'Donnees_Simulation.dat'
Open(11, File = file_name)        ! every rank opens the same input file
Read(11,*); Read(11,*)            ! skip the first two records

Unfortunately, I get an 'end-of-file during read' error, where line 79 corresponds to the first Read applied to the input file.
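
One way to narrow this down is to check the open and the first read explicitly, so that a file which is missing or empty on some node fails with a clear message instead of an end-of-file condition; a minimal sketch, with the ios variable added for illustration:

    character(len=64) :: file_name
    integer :: ios

    file_name = 'Donnees_Simulation.dat'
    Open(11, File = file_name, Status = 'old', Iostat = ios)   ! 'old' fails if the file is absent
    if (ios /= 0) stop 'cannot open input file'
    Read(11, *, Iostat = ios)
    if (ios /= 0) stop 'end of file on first read: input file is empty'
    Read(11, *)

An end-of-file on the very first read often means the file was opened but is empty; without Status = 'old', a rank running in a different working directory will silently create a new empty file and then fail in exactly this way.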

Running MPI over a heterogeneous InfiniBand network

Hello,

I have an InfiniBand network setup where we are testing the performance of FDR HCAs. The sender has two FDR HCAs, and each of the two receivers has one. The idea is to run parallel sends from the sender over each HCA and receive them at the receivers. We tried MVAPICH, but its developers stated clearly that they don't support such a network.

I was wondering if Intel MPI supports such a network, and whether we can do something like:

mpirun -n 2 -hosts Sender,Receiver1 -env MV2_IBA_HCA=mlx4_0 ./exec : -n 2 -hosts Sender,Receiver2 -env MV2_IBA_HCA=mlx4_1 ./exec
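
For reference, MV2_IBA_HCA is an MVAPICH2 variable, so with Intel MPI the adapter would be selected through Intel MPI's own controls instead. A hedged sketch over the OFA fabric, assuming the two HCAs appear as mlx4_0 and mlx4_1:

    mpirun -n 2 -hosts Sender,Receiver1 -env I_MPI_FABRICS shm:ofa \
           -env I_MPI_OFA_ADAPTER_NAME mlx4_0 ./exec : \
           -n 2 -hosts Sender,Receiver2 -env I_MPI_FABRICS shm:ofa \
           -env I_MPI_OFA_ADAPTER_NAME mlx4_1 ./exec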

Placing MPI ranks on specific cores of Xeon Phi

Hi. Is it possible to place MPI ranks on specific cores of a Xeon Phi (1-60) during native-mode computation? All I understand is that the scheduler assigns ranks in round-robin fashion across all the nodes, starting from the first available core. Is it possible to override this?
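
Intel MPI's pinning controls can normally override the default round-robin placement. A hedged sketch with an explicit core list; the core numbers and binary name are illustrative:

    export I_MPI_PIN=1                         # enable process pinning
    export I_MPI_PIN_PROCESSOR_LIST=1,5,9,13   # bind ranks 0..3 to these logical processors
    mpirun -n 4 -host mic0 ./hello.mic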

Using Intel MPI over RDMA over Converged Ethernet

Hello All,

I have been trying to run IMB over 40GigE RoCE using Intel MPI and Open MPI. I could run it with Open MPI by following this FAQ:

https://www.open-mpi.org/faq/?category=openfabrics#ompi-over-roce.

But I have been having issues running IMB with Intel MPI, and I did not find many resources online. I have been trying to run it as follows.
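
A hedged sketch of one common way to attempt this with Intel MPI, assuming the DAPL fabric with a RoCE provider name taken from the local /etc/dat.conf (the provider and host names are illustrative):

    mpirun -n 2 -hosts node1,node2 \
           -env I_MPI_FABRICS shm:dapl \
           -env I_MPI_DAPL_PROVIDER ofa-v2-mlx4_0-1 ./IMB-MPI1 PingPong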

Large-scale MPI error: HYDT_bscu_wait_for_completion

I am running a job on a 4,000-node cluster with InfiniBand. At small scales of 8 to 64 nodes, mpirun works well; at medium scales of 256 to 512 nodes, mpiexec.hydra has to be used; but when it goes up to 1024 nodes, I get errors (see attached). My job script is like this:

module load intel-compilers/12.1.0
module load intelmpi/4.0.3.008
#mpirun -np 64 -perhost 1 -hostfile $PBS_NODEFILE ./paraEllip3d input.txt
mpiexec.hydra -np 1000 -perhost 1 -hostfile $PBS_NODEFILE ./paraEllip3d input.txt

The errors from 1024 nodes are attached.
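
If the installed Intel MPI version supports it, Hydra's hierarchical launch can reduce the flat ssh fan-out that tends to break at large node counts; a hedged sketch, with the branch-count value purely illustrative:

    export I_MPI_HYDRA_BRANCH_COUNT=32   # launch through a tree of proxies rather than one flat fan-out
    mpiexec.hydra -np 1000 -perhost 1 -hostfile $PBS_NODEFILE ./paraEllip3d input.txt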

sshconnectivity

 

Hello,

I used to connect to the coprocessor like this:

       # ssh mic0

It didn't need a password before, but after I ran the command below, ssh started asking for a password:

         # ./sshconnectivity.exp machines.LINUX

 

What should I do to restore the passwordless login I had before? All I did was run sshconnectivity.exp.

Please help,
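
A hedged recovery sketch using standard ssh key handling, assuming a key pair already exists at the default path on the host:

    # re-append the host's public key to the card's authorized_keys (asks for the password once)
    ssh mic0 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys' < ~/.ssh/id_rsa.pub
    # verify that passwordless login works again
    ssh mic0 hostname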

shm:dapl mode crashes when using MLNX OFED 2.1-1.0.0

Hi,

A simple MPI hello-world program crashes when using the shm:dapl mode with the MLNX OFED 2.1-1.0.0 IB stack. shm:ofa works fine. The shm:dapl mode used to work fine with MLNX OFED 1.5.3, but the latest el6.5 kernel requires version 2.1-1.0.0.

Using intelmpi/4.1.3:

I_MPI_FABRICS=shm:dapl srun -pdebug -n2 -N2 ~/mpi/intelhellog
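
One thing worth checking after an OFED upgrade is whether the DAPL provider names in /etc/dat.conf changed; a hedged sketch that pins the provider explicitly, with the provider name illustrative and to be taken from the new /etc/dat.conf:

    I_MPI_FABRICS=shm:dapl I_MPI_DAPL_PROVIDER=ofa-v2-mlx4_0-1u \
        srun -pdebug -n2 -N2 ~/mpi/intelhellog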
