Intel® Clusters and HPC Technology



I am preparing my network for the Intel Cluster Toolkit installation. For testing I have two computers.
- I called them master and node1.
- I added the IP addresses of both machines to /etc/hosts on both.
- The ping master and ping node1 commands work on both computers.
- I created a machines.LINUX file with the lines:

My problem is that when I run sshconnectivity.exp on master, only master's ssh settings are changed. No ~/.ssh directory appears on node1.
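For reference, this is roughly how the connectivity script is usually driven. The machines.LINUX contents below are assumed from the host names given above, and the script path is illustrative (run it from wherever the toolkit installer unpacked it):

```shell
# machines.LINUX lists one hostname per line, matching /etc/hosts
# (names taken from the post; adjust for your cluster)
cat > machines.LINUX <<'EOF'
master
node1
EOF

# sshconnectivity.exp takes the machines file as its argument and is
# expected to set up passwordless ssh (~/.ssh) on every listed host,
# so it should prompt for node1's password at least once.
if [ -x ./sshconnectivity.exp ]; then
    ./sshconnectivity.exp machines.LINUX
fi
```

If the script never prompts for node1's password, it likely never reached node1 at all, which would explain the missing ~/.ssh directory there.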

Thank you for your help, and regards

Error message when I_MPI_MPD_TMPDIR is set in impi 4.0.0

We configure Torque on our cluster to set TMPDIR so that it points to a local file system on each node. A number of applications running on the cluster take advantage of TMPDIR. However, because the path is very long, TMPDIR causes mpd to fail. To avoid this problem, we set I_MPI_MPD_TMPDIR to /tmp. This works without a problem with Intel MPI 3.2.2, but with 4.0.0 a spurious error message appears on standard output stating "Can't open file /tmp/mpd2.logfile...". For example, launching the test.c sample program with the mpirun command in a Torque script, we get the following:
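A minimal Torque script sketch of the setup described above; the node count, binary name, and resource line are placeholders, not taken from the original job script:

```shell
#!/bin/sh
#PBS -l nodes=2:ppn=4

# Torque sets TMPDIR to a long per-job path on each node; mpd builds
# its socket and log file names under that path, and very long paths
# can break it.  Point the mpd daemons at /tmp instead.
export I_MPI_MPD_TMPDIR=/tmp

# Run from the directory the job was submitted from
cd "$PBS_O_WORKDIR"
mpirun ./test
```

With 4.0.0 this configuration still runs, but each launch prints the spurious "Can't open file /tmp/mpd2.logfile..." message described above.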

[solved] random problems with MPI + DAPL initialization in RedHat 5.4

Hi, I sometimes have problems executing a program with Intel MPI.
It fails with an error on stderr (or stdout):
problem with execution of   on  wn20:  [Errno 13] Permission denied

What could be the problem?

Here is my ulimit -a output:

mpd error


I get the following error on my cluster when I submit jobs:

mpiexec_node050: cannot connect to local mpd (/tmp/mpd2.console_sudharshan); possible causes:
1. no mpd is running on this host
2. an mpd is running but was started without a "console" (-n option)
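This error means mpiexec could not find an mpd daemon on the launch node. A minimal sketch for bringing up an mpd ring before calling mpiexec, assuming a two-node ring and a hypothetical mpd.hosts file (one hostname per line):

```shell
# Start an mpd ring across the hosts listed in mpd.hosts;
# -n gives the total number of daemons to start.
mpdboot -n 2 -f mpd.hosts

# Verify the ring is up: mpdtrace should print every host in the ring.
mpdtrace
```

Note that mpd writes its console socket under /tmp by default (e.g. /tmp/mpd2.console_<user>), so the ring must be started as the same user that runs mpiexec.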

building MPICH2 on IA64 with Intel compilers: atomic primitive

Hi all:

I'm trying to build MPICH2 on an SGI Altix (a 256-processor IA-64 platform) using the Intel compilers (C/C++ and Fortran). If I run configure with gcc, everything works fine; configure completes successfully. However, if I run it specifying the Intel compilers, it fails with an error about a missing atomic primitive.
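For comparison, this is the usual way to point MPICH2's configure at the Intel compilers; the install prefix is illustrative:

```shell
# Select the Intel C/C++ and Fortran compilers explicitly; MPICH2's
# atomic-primitive detection is compiler-sensitive, so the failure
# described above appears during this configure step.
./configure CC=icc CXX=icpc F77=ifort FC=ifort \
    --prefix=/opt/mpich2-intel
make && make install
```

Posting the exact configure error and the relevant lines of config.log would make it easier to see which atomic primitive test is failing under icc.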

Do you intend to make ITAC support Open MPI?


We used an MPI library that was MPICH2-like, so ITAC worked well with it. But this library has now switched to an Open MPI-like implementation, and ITAC can no longer be used with it.

Would it be possible to implement ITAC support for Open MPI?

(For the MKL library there is a BLACS build for each MPI implementation, so I suppose something similar could be done with ITAC.)

Thank you,

Intel MPI Performance

We are users of a commercial reservoir simulation model that uses a relatively old MPI version; mpirun -V reports (Open MPI) 1.2.3.

We are trying to migrate to a new Harpertown cluster and are finding that the runtime environment has become very unreliable: simulation jobs die at random times with some system-related problem we have yet to identify. We are beginning to suspect that the older MPI version may be part of the problem, since the simulation jobs do run on the new cluster with the generic MPI supplied (they are just 2X to 100X slower).
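One quick sanity check worth doing on the new cluster: confirm which MPI the job environment actually resolves to, since "(Open MPI) 1.2.3" in the version output means the launcher being used is Open MPI, not Intel MPI:

```shell
# Which mpirun is first on PATH for the job environment?
which mpirun

# Its self-reported identity and version; "(Open MPI) 1.2.3" here
# indicates the Open MPI launcher rather than Intel MPI.
mpirun -V
```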
