Intel® Clusters and HPC Technology

mpd error

Hello,

I get the following error on my cluster when I submit jobs:

mpiexec_node050: cannot connect to local mpd (/tmp/mpd2.console_sudharshan); possible causes:
1. no mpd is running on this host
2. an mpd is running but was started without a "console" (-n option)
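When mpiexec cannot find the console socket, a usual first step is to check whether an mpd ring is actually up and, if not, start one that owns a console (a minimal sketch; the ring size and host file name are placeholders):

    mpdtrace                        # lists the hosts in the current mpd ring; fails if no mpd is reachable
    mpd --daemon                    # start a single mpd (with a console) on the local host, or
    mpdboot -n 4 -f ~/mpd.hosts     # bring up a ring across the hosts listed in mpd.hosts
    mpdtrace                        # confirm the ring is up before running mpiexec again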

building MPICH2 on IA64 with Intel compilers: atomic primitive availability...no

Hi all:

I'm trying to build MPICH2 on an SGI Altix (256-processor IA-64 platform) using the Intel compilers (C/C++ and Fortran). If I run configure with gcc, everything works fine; configure completes successfully. However, if I run it specifying the Intel compilers, it fails at the atomic primitive availability check, reporting "atomic primitive availability... no".
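For reference, a configure invocation along these lines selects the Intel compilers explicitly (a sketch; the install prefix is just an example, and on older MPICH2 releases the Fortran 90 compiler variable is F90 rather than FC):

    ./configure CC=icc CXX=icpc F77=ifort FC=ifort \
                --prefix=/opt/mpich2-intel 2>&1 | tee configure-intel.log
    # configure-intel.log (together with config.log) captures the lines around the
    # "atomic primitive availability" check, which shows why that test fails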

Do you intend to make ITAC support OpenMPI?

Hello,

We used an MPI library that was MPICH2-like, so ITAC worked well with it. That library has now switched to an OpenMPI-like implementation, and ITAC is no longer usable with it.

Would it be possible to implement ITAC support for OpenMPI?

(For the MKL library there is a BLACS build for each MPI implementation, so I assume something similar could be done with ITAC.)

Thank you,

Intel MPI Performance

We are users of a commercial reservoir simulation model that uses a relatively old version of Intel MPI; mpirun -V reports (Open MPI) 1.2.3.

We are trying to migrate to a new Harpertown cluster and are finding the runtime environment very unreliable: simulation jobs die at random times with some system-related problem we have yet to identify. We are beginning to suspect that the older version of Intel MPI may be part of the problem, since the simulation jobs do run on the new cluster with the generic MPI supplied with it (they are just 2X to 100X slower).
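One sanity check before blaming the MPI itself: "(Open MPI) 1.2.3" is Open MPI's version banner rather than Intel MPI's, so it is worth confirming which MPI library the simulator actually loads (a sketch; simulator.exe is a placeholder for the real binary):

    which mpirun                        # which installation's mpirun is first in PATH
    mpirun -V                           # implementation name and version string
    ldd ./simulator.exe | grep -i mpi   # the MPI shared library the simulator is linked against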

mpdboot error: invalid port info: Permission denied.

Hi,

Intel MPI ("intel-tc_impi_em64t-7.2.1p-008") is installed on an HPC cluster with Mellanox InfiniBand (MT47396 InfiniScale-III, Mellanox Technologies).

I'm facing an mpdboot problem:

Initially I tried to launch mpd on all 100+ nodes, and it failed. To debug, I started with only 2 nodes.
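For the two-node test, running mpdboot with its debug and verbose switches usually shows which host and port the "invalid port info: Permission denied" message comes from (a minimal sketch; the host file name is a placeholder):

    mpdallexit                            # clean up any half-started ring first
    mpdboot -n 2 -f ./mpd.hosts -d -v     # -d prints debug output, -v is verbose
    mpdtrace -l                           # if the boot succeeds, list ring members with host and port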

error: more than one instance of overloaded function "MessageHandler::debug" matches the argument list:

So I am still trying to compile a CFD code using the Intel compiler v11.1 and Intel MPI 3.2.0.011 on a new cluster. I have previously compiled this code successfully on a RedHat 5 cluster using GCC 4.1.2 and OpenMPI.

I am getting the error:

/opt/intel/impi/3.2.0.011/include/mpi.h(35): catastrophic error: #error directive: A wrong version of mpi.h file was included

I am trying to compile a CFD code using the Intel compiler v11.1 and Intel MPI 3.2.0.011 on a new cluster. I have previously compiled this code successfully on a RedHat 5 cluster using GCC 4.1.2 and OpenMPI.

I believe I have set up the environment properly, but I am getting the error:
/opt/intel/impi/3.2.0.011/include/mpi.h(35): catastrophic error: #error directive: A wrong version of mpi.h file was included. Check include path.

The terminal output from the compile is included at the bottom of this post.
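That #error usually means the compiler picked up an mpi.h that does not match the Intel MPI environment in use, for example another MPI's headers or the wrong-architecture Intel MPI include directory appearing first in the search path. A quick way to see which include paths the wrapper really passes to the compiler (a sketch; source.c is a placeholder, and mpiicc is Intel MPI's wrapper for the Intel C compiler):

    which mpiicc              # confirm Intel MPI's wrapper, not another MPI's mpicc, is first in PATH
    mpiicc -show source.c     # print the underlying icc command line, including -I paths, without compiling
    echo $CPATH $INCLUDE      # stale environment variables pointing at another MPI's include
                              # directory can also pull in the wrong mpi.h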
