Intel® Cluster Ready

MPI_Sendrecv problem

Dear all,

I have a problem with MPI_Sendrecv, probably because I do not understand it.

In my program I create a Cartesian topology:

   periods = .FALSE.    ! non-periodic in every dimension
   CALL MPI_CART_CREATE (MPI_COMM_WORLD,ndims,dims,periods,.TRUE.,COMM_CART,MPI%iErr)

and I find all the neighbors:
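A common way to find the neighbors in a Cartesian communicator is MPI_CART_SHIFT, after which a single MPI_SENDRECV call exchanges data with a pair of neighbors. Below is a minimal 1-D sketch of that pattern; the names (left, right, sendbuf, recvbuf) are assumed for illustration and this is not the code from the original post:

   PROGRAM cart_sketch
      USE MPI
      IMPLICIT NONE
      INTEGER, PARAMETER :: ndims = 1
      INTEGER :: dims(ndims), rank, left, right, iErr
      INTEGER :: COMM_CART, status(MPI_STATUS_SIZE)
      LOGICAL :: periods(ndims)
      DOUBLE PRECISION :: sendbuf, recvbuf

      CALL MPI_INIT(iErr)
      CALL MPI_COMM_SIZE(MPI_COMM_WORLD, dims(1), iErr)
      periods = .FALSE.
      CALL MPI_CART_CREATE(MPI_COMM_WORLD, ndims, dims, periods, .TRUE., COMM_CART, iErr)
      CALL MPI_COMM_RANK(COMM_CART, rank, iErr)

      ! Neighbor ranks along dimension 0; at the non-periodic ends MPI_CART_SHIFT
      ! returns MPI_PROC_NULL, which MPI_SENDRECV accepts and simply skips.
      CALL MPI_CART_SHIFT(COMM_CART, 0, 1, left, right, iErr)

      ! Send to the right neighbor and receive from the left one in a single call.
      sendbuf = DBLE(rank)
      recvbuf = -1.0D0
      CALL MPI_SENDRECV(sendbuf, 1, MPI_DOUBLE_PRECISION, right, 0, &
                        recvbuf, 1, MPI_DOUBLE_PRECISION, left,  0, &
                        COMM_CART, status, iErr)

      CALL MPI_FINALIZE(iErr)
   END PROGRAM cart_sketch

Because the topology is non-periodic, the ranks at the ends of the grid get MPI_PROC_NULL as a neighbor and that side of the exchange is skipped automatically, so no boundary special-casing is needed.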

simple MPI code generating deadlock

Hello everyone,

I hope this is the appropriate forum for this question. I have recently started learning MPI and cannot figure out why the following code generates a deadlock, which occurs in subroutine try_comm. I compiled and ran as follows:

mpiifort global.f90 try.f90 new.f90 -o new.out

mpirun -n 2 ./new.out

my output:
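As for the deadlock itself: the usual cause of a hang in a paired exchange like try_comm is that both ranks call blocking MPI_SEND first, and each then waits for a receive the other never posts. The sketch below shows that pattern and the MPI_SENDRECV form that avoids it; the names and buffer size are hypothetical and not taken from the poster's try.f90:

   PROGRAM deadlock_sketch
      USE MPI
      IMPLICIT NONE
      INTEGER, PARAMETER :: n = 100000
      INTEGER :: rank, other, iErr, status(MPI_STATUS_SIZE)
      DOUBLE PRECISION :: sendbuf(n), recvbuf(n)

      CALL MPI_INIT(iErr)
      CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, iErr)
      other   = 1 - rank            ! intended for exactly 2 ranks (mpirun -n 2)
      sendbuf = DBLE(rank)

      ! Deadlock-prone version: once the message no longer fits the eager buffer,
      ! both MPI_SEND calls block and neither rank ever reaches its MPI_RECV.
      ! CALL MPI_SEND(sendbuf, n, MPI_DOUBLE_PRECISION, other, 0, MPI_COMM_WORLD, iErr)
      ! CALL MPI_RECV(recvbuf, n, MPI_DOUBLE_PRECISION, other, 0, MPI_COMM_WORLD, status, iErr)

      ! Safe version: MPI_SENDRECV pairs the send and the receive internally.
      CALL MPI_SENDRECV(sendbuf, n, MPI_DOUBLE_PRECISION, other, 0, &
                        recvbuf, n, MPI_DOUBLE_PRECISION, other, 0, &
                        MPI_COMM_WORLD, status, iErr)

      CALL MPI_FINALIZE(iErr)
   END PROGRAM deadlock_sketch

Whether the send-first ordering actually deadlocks depends on whether the message fits the MPI library's eager buffer, which is why this kind of bug often shows up only for larger messages or on a different machine.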

MPI_Finalize Error Present with mpiicpc.

I have been having trouble with the Intel-compiled version of a scientific software stack.

The stack uses both OpenMP and MPI. When I started working on the code, it had been compiled with gcc and a gcc-compiled OpenMPI. Prior to adding any MPI code, the software compiles with icpc and runs without error.

The versions I am working with are: Intel compiler 14.0.2, Intel MKL 11.1.2, and Intel MPI 4.1.3. I have tried turning up the I_MPI_DEBUG debug level to get more informative messages, but what I always end up with is:

trivial code fails sometimes under SGE: HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:70): assert (!(pollfds[i].rev

A trivial ring-passing .f90 program fails to start 50% of the time on our cluster (SGE 6.2u5). The same problem occurs with large codes.
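For context, a ring pass of this kind is only a handful of lines; the following is a minimal sketch of such a program (illustrative only, not the poster's code):

   PROGRAM ring_sketch
      USE MPI
      IMPLICIT NONE
      INTEGER :: rank, nproc, left, right, token, incoming, iErr
      INTEGER :: status(MPI_STATUS_SIZE)

      CALL MPI_INIT(iErr)
      CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank,  iErr)
      CALL MPI_COMM_SIZE(MPI_COMM_WORLD, nproc, iErr)
      left  = MOD(rank - 1 + nproc, nproc)
      right = MOD(rank + 1, nproc)

      ! Pass a token one step around the ring: send to the right, receive from the left.
      token = rank
      CALL MPI_SENDRECV(token,    1, MPI_INTEGER, right, 0, &
                        incoming, 1, MPI_INTEGER, left,  0, &
                        MPI_COMM_WORLD, status, iErr)

      CALL MPI_FINALIZE(iErr)
   END PROGRAM ring_sketch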

The error message:
