Intel® Clusters and HPC Technology

ICT shared installation vs. separate installation, and ICT Live CD for dynamic node adding

Hello,

Please tell me when I should install ICT on shared space and when on each node separately. After testing I would like to create a Linux Live CD to add nodes dynamically; which kind of installation would be better for that? Any general advice on adding nodes dynamically would also be very welcome :)

Thank you for help and regards.

sshconnectivity

Hello,

I am preparing the network for an Intel Cluster Toolkit installation. For testing I have two computers.
- I named them master and node1.
- I added the IP addresses of both machines to /etc/hosts.
- The ping master and ping node1 commands work on both computers.
- I created a machines.LINUX file with the lines:
master
node1

My problem is that when I run sshconnectivity.exp on master, only the master's ssh settings are changed. No ~/.ssh directory appears on node1.
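
For reference, here is roughly what my setup looks like; the IP addresses are only examples, and the sshconnectivity.exp invocation is how I understand it from the toolkit's installation notes:

# /etc/hosts on both machines (addresses are examples)
192.168.0.1   master
192.168.0.2   node1

# machines.LINUX, one host name per line
master
node1

# run the connectivity script on master from the directory that contains it
./sshconnectivity.exp machines.LINUX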

Thank you for help and regards

Error message when I_MPI_MPD_TMPDIR is set in impi 4.0.0

We configure Torque on our cluster to set TMPDIR so that it points into a local file system on each node. A number of applications running on the cluster take advantage of TMPDIR. However, because the path is very long, TMPDIR causes mpd to fail. To avoid this problem, we set I_MPI_MPD_TMPDIR to /tmp. This works without a problem for Intel MPI 3.2.2, but with 4.0.0 a spurious error message appears on standard output stating "Can't open file /tmp/mpd2.logfile...". For example, launching the test.c sample program with the mpirun command in a Torque script, we get the following.
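
For illustration, a stripped-down sketch of the kind of Torque script we use follows; the node counts, walltime, and executable name are only placeholders:

#!/bin/bash
#PBS -N impi_test
#PBS -l nodes=2:ppn=4
#PBS -l walltime=00:10:00

cd $PBS_O_WORKDIR

# keep mpd's working files on a short local path instead of the long TMPDIR
export I_MPI_MPD_TMPDIR=/tmp

# Intel MPI's mpirun picks up the host list from the Torque environment
mpirun -np 8 ./test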

[solved] random problems with MPI + DAPL initialization in RedHat 5.4

Hi, I sometimes have problems with the execution of a program with Intel MPI.
It fails with an error on stderr (or stdout):
problem with execution of   on  wn20:  [Errno 13] Permission denied

What could the problem be?

Here is my ulimit -a:

mpd error

Hello,

I get the following error on my cluster when I submit jobs:

mpiexec_node050: cannot connect to local mpd (/tmp/mpd2.console_sudharshan); possible causes:
1. no mpd is running on this host
2. an mpd is running but was started without a "console" (-n option)
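
In case it helps with the diagnosis, this is roughly how I would start an mpd ring by hand before running mpiexec (the node count, host file name, and application name are placeholders); I am not sure whether the batch system is supposed to do this step for me:

# mpd.hosts lists one compute node name per line, e.g. node050
mpdboot -n 4 -f mpd.hosts -r ssh

# verify that the ring is up on all nodes
mpdtrace

mpiexec -np 16 ./my_app

# shut the ring down when the job is done
mpdallexit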

building MPICH2 on IA64 with Intel compilers: atomic primitive availability...no

Hi all:

I'm trying to build mpich2 on an SGI Altix (a 256-processor IA-64 platform) using the Intel compilers (C/C++ and Fortran). If I run configure with gcc, everything works fine and configure completes successfully. However, if I run it specifying the Intel compilers, the check for atomic primitive availability returns "no" and configure fails.
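
For reference, this is roughly how I invoke configure when I specify the Intel compilers; the install prefix and log file name are just examples:

./configure CC=icc CXX=icpc F77=ifort FC=ifort \
    --prefix=/opt/mpich2-intel 2>&1 | tee configure-intel.log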
