Intel® Clusters and HPC Technology

Using new unified installer for Cluster Tools 3.2.2 - Slow

I downloaded the new cluster toolkit 3.2.2.

It's a small 128-node cluster with a master node.

The install script installs the RPMs onto the master node.
Then it takes quite a while to 'Analyze node configuration'.
Watching its progress: it has taken 46 minutes to process 23 nodes.
I'm guessing the installer is attempting to discern the network topology, or whether any compute nodes share the install directory.
Though, when prompted by the installer, I selected a 'non-shared' install, i.e. /opt/intel is local on each compute node.

Running mpi w/o mpiexec/mpirun

I can compile and run an MPI application using mpiexec without any difficulty.
What I want to do is run an MPI application outside the terminal, i.e. without using mpiexec or mpirun.
Is this doable? If so, how can I set up the mpd daemon and the other necessary pieces at the source level?
Thanks. 

==========
hello.c
==========
#include <mpi.h>
#include <stdio.h>

static int run()
{
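For what it's worth, a rough sketch of the MPD-based launch sequence used by Intel MPI of that era: once an mpd ring is up, starting the binary directly lets MPI_Init attach as a one-process "singleton" job, with no mpiexec involved. The host binary name below is illustrative:

```shell
mpd &        # start a single mpd on this host (use mpdboot for a ring of hosts)
mpdtrace     # confirm the daemon is up
./hello      # run the binary directly: MPI_Init starts a 1-process singleton job
mpdallexit   # shut the mpd ring down when done
```

This only gives a single-process run; launching multiple ranks still requires a process manager such as mpiexec.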

Problems installing Cluster Toolkit under Windows Server 2008

Hi all,

during the installation of the Cluster Toolkit for Windows HPC Server 2008 I got some "noncritical errors":

An error occured in Merge Module SUBST.
Installation will be continued but NOT all of the following files will be configured appropriately:

C:\Program Files (x86)\Intel\ICTCE\3.2.1.015\Compiler\bin\ifortvars.bat

etc.

After installing the Cluster Toolkit I checked the build environment, with the following output:

.nfsXXXX files generated at execution with MPI

Hello

When I run my software with Intel MPI 2.0 (sorry, I can't update MPI for the moment), I see numerous temporary files named .nfsXXXXXXXX, which worry my customers. The MPI process needs 4 files as input and generates 4 files as output; all of these files are read and written on a remote disk over the network, and I use 4 CPUs.
Can you tell me more about these files? Is there a relationship between the number of .nfsXXXX files and the input/output files?

Intel Fortran debugger and MPICH

Hi,
I want to debug a parallel Fortran program that I'm trying to run on a Linux-type cluster using the Intel Fortran compiler (v.9.1.045), the Intel debugger (v.9.1-28), and MPICH2 1.2. Building the executable with the -g option is straightforward, but when I try to invoke the debugger it crashes with this message:
$ idb -parallel mpiexec -machinefile machines -n 4 ./stagyympi
Intel Debugger for applications running on IA-32, Version 9.1-28, Build 20070305
execve failed: No such file or directory
Error: could not start debuggee

How to distribute data to different computing nodes using MPI

Dear all, I have started using MPI for a simple data decomposition of a 2-D domain. Assuming I am using 2 computing nodes, each with 8 processors, I want message passing only between the two nodes, while within each node all processors can access their shared memory.
After calling MPI_Comm_rank and receiving ranks 0-15, how can I tell which node a processor belongs to? Do processors with ranks 0 to 7 belong to computing node 1 and ranks 8 to 15 to computing node 2?
