Intel® Clusters and HPC Technology

Intel MPI fatal error

Hi,

We compiled a code that performs atomistic simulations.
The code fails with the following error.
I would be thankful if you could help me fix this problem.

Thank you in advance,
Fouad

[0:node44][../../dapl_module_poll.c:3972] Intel MPI fatal error: OpenIB-cma DTO operation posted for [2:node58] completed with error. status=0x1. cookie=0x40002

Assertion failed in file ../../dapl_module_poll.c at line 3973: 0

internal ABORT - process 0

How to bind MPI processes to cores from mpirun arguments

Dear Intel,

I use "sched_setaffinity" in the code to pin MPI process to core. But can only do so if I have access to source code.Of course, I can pin it after the code is running, but sometimes this is not a good solution since pin will need to be done on after process been created but before it starts to execute the compute kernel.
So, a very simple question, isthere an option in mpirun (or mpiexec) such that I can pin the MPI process to core? For example, something like this:

mpirun -nc 2 -pincore 0 6 -np 10 .....
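
For reference, a minimal sketch of the in-source pinning described above, using sched_setaffinity. The rank-to-core mapping (cores 0 and 6) is only an illustrative assumption borrowed from the example command, not an actual mpirun feature.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Illustrative assumption: even ranks go to core 0, odd ranks to core 6,
       mimicking the "-pincore 0 6" idea from the question. */
    int core = (rank % 2) ? 6 : 0;

    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(core, &mask);

    /* Pin the calling process (pid 0 = self) before the compute kernel runs. */
    if (sched_setaffinity(0, sizeof(mask), &mask) != 0)
        perror("sched_setaffinity");
    else
        printf("rank %d pinned to core %d\n", rank, core);

    /* ... compute kernel ... */

    MPI_Finalize();
    return 0;
}

If I recall correctly, Intel MPI also offers built-in pinning controls (for example the I_MPI_PIN and I_MPI_PIN_PROCESSOR_LIST environment variables), which may be the mpirun-level equivalent being asked about here.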

Errno: 10055 - insufficient buffer space - queue full when using MPI_Send

Hi,

I have done some testing and found that I get error 10055 (http://www.sockets.com/err_lst1.htm#WSAENOBUFS), which occurs when I am doing a synchronised send. I am using boost.mpi and have found that this happens both when using send and when using isend followed by mpi_wait.

My understanding is that with a synchronised send the buffer can be reused once the send has handshaked, but that does not appear to be the case.

I am using standard tcp on ethernet as my backbone.
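
To make the pattern concrete, here is a minimal plain-C sketch of the isend-then-wait usage described above (two ranks, an arbitrary message size and tag, no boost.mpi wrapper); once MPI_Wait returns, the send buffer should be safe to reuse.

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int N = 1 << 20;                      /* arbitrary message size */
    double *buf = malloc(N * sizeof(double));

    if (rank == 0) {
        MPI_Request req;
        /* Non-blocking send followed by a wait; after MPI_Wait completes,
           the buffer may be reused or freed. */
        MPI_Isend(buf, N, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    } else if (rank == 1) {
        MPI_Recv(buf, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}

Run it with at least two ranks (e.g. mpirun -np 2 ./a.out).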

MPI problem with shm fabric

Hi,

I get the error shown below when trying to run the "hello world" test program with Intel MPI 4.0.1. It does not occur when running on a single node, only across nodes. If I set I_MPI_FABRICS=tcp, it works fine, but with I_MPI_FABRICS=shm:tcp it also fails. Any ideas? (A minimal reconstruction of the test program is included after the error output.)

Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(527).................: Initialization failed
MPID_Init(171)........................: channel initialization failed
MPIDI_CH3_Init(70)....................:
MPID_nem_init_ckpt(665)...............:
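
For reference, the "hello world" test program mentioned above is essentially the standard MPI example; a minimal reconstruction (not the poster's exact source) is:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(name, &len);

    /* Print the node name so intra-node and cross-node runs are easy to tell apart. */
    printf("Hello world from rank %d of %d on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}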

Intel MPI error

Hello, we have a small cluster running the Rocks Cluster Distribution.

The Intel Cluster Toolkit is installed on a shared filesystem, the paths are OK, and passwordless SSH access is working.

I selected 3 nodes (1 head node and 2 compute nodes) and put three lines into the file mach: headnode, node1, node2.

On the head node I run:

mpirun -r ssh -machinefile mach -np 3 ./test.mpi

Problem with MPI

Hello, my name is Evgeny. I am interested in your MPI library. I use a Linux system. After installing the library, I have a problem with various MPI commands, such as mpdboot. In the terminal I always see the same error:

mpdboot_e.ustimenko (handle_mpd_output 420): from mpd on e.ustimenko, invalid port info:no_port

Where can I specify the port information, and what should it be? Thanks a lot.

Using Hydra with Intel MPI

I'm having trouble using the Hydra process manager with Intel MPI. I'm using the version of Intel MPI distributed with Intel Fortran Composer XE 2011. When I run anything with mpiexec.hydra, I get the following error message:

bash: /opt/intel/composerxe-2011.0.084/mpirt/bin/intel64/pmi_proxy: No such file or directory

Any advice on how to get Hydra working would be greatly appreciated.

Thanks,
Paul
