Intel® Clusters and HPC Technology

mpdboot error: invalid port info: Permission denied.

Hi,

The Intel MPI package "intel-tc_impi_em64t-7.2.1p-008" is installed on an HPC cluster with a Mellanox InfiniBand fabric (MT47396 InfiniScale-III, Mellanox Technologies).

I'm facing an mpdboot problem here:

Initially I tried to launch mpd on all 100+ nodes, and it failed. To debug, I started with only 2 nodes:

error: more than one instance of overloaded function "MessageHandler::debug" matches the argument list:

I am trying to compile a CFD code using the Intel Compiler v11.1 and Intel MPI 3.2.0.011 on a new cluster. I have previously compiled this code successfully on a RedHat 5 cluster using GCC 4.1.2 and OpenMPI.

I believe I have set up the environment properly, but I am getting the error:
/opt/intel/impi/3.2.0.011/include/mpi.h(35): catastrophic error: #error directive: A wrong version of mpi.h file was included. Check include path.

The terminal output from the compile is included at the bottom of this post.

Does the Intel Trace Analyzer and Collector support HPMPI on Windows?

I hope this is not too naive a question, but does the Intel Trace Analyzer and Collector support HPMPI on Windows?

My application uses HP-MPI 2.0 on Windows. I have added VT.lib (before the MPI libraries) to the linker libraries and get the following errors when linking the executable:

1>VT.lib(VT_fortran_consts_f.obj) : error LNK2019: unresolved external symbol __imp_MPIPRIV2 referenced in function VTTELLCONSTS

1>VT.lib(VT_fortran_consts_f.obj) : error LNK2019: unresolved external symbol __imp_MPIPRIV1 referenced in function VTTELLCONSTS

mpdboot error

I use Intel MPI 3.2.2.006.
When I try to boot mpds on 2 hosts in debug mode:

mpdboot -v -d -r ssh -n 2 -f ./mpd.conf

the error message shows:

debug: starting
totalnum=2 numhosts=1
there are not enough hosts on which to start all processes

mpd.conf:
node001:8
node002:8

The Python version on both nodes is 2.4.3.

How can I solve it?

Does the Intel MPI Library contain the MPI datatype BOOL?

Hi all.

I'm trying to use the Intel MPI Library (evaluation version) instead of LAM/MPI,
but I couldn't find the datatype BOOL in the MPI namespace of the C++ bindings (MPI::BOOL).
(LAM/MPI does provide that type.)

Other types (char, byte, int, ...) exist in /include64/mpicxx.h, but there is no entry for bool.

How can I use that type?

Thanks.

JH

Looking for equivalent options from Cray MPI to Intel MPI

Cray MPI (based on MPICH2) provides ways to change several internal defaults.
Those options were needed to get a code belonging to one of my users running on an XT.
I know there are issues with the code, but if equivalent options exist in Intel MPI,
they would get us going faster. I have looked through the Intel MPI 4.0 Beta manual
and didn't find any equivalents, but I am asking in case I missed the options
or there are undocumented options that could help:
