I've installed the cluster toolkit on a cluster of RedHat EL 4.5 quad-core Opteron machines connected with Infiniband and gigE. The sock and rdma devices work fine, but shm and ssm don't. Should they?
I am using the Intel Cluster Toolkit to run a cluster. However, I am getting errors during the installation of the Expect software.
The installer does not recognize the operating system, which is Red Hat Enterprise Linux Server 5.0.
Is there any alternative to Expect for establishing SSH connectivity between the machines?
This is my first post to the group.
I am not really a software developer; I'm more of a Network Architect who recently started working in an HPC-type field. I have become very interested in this area and hope to give back to the community as I learn more myself.
I hope someone will be able to provide more information or links to sites regarding my questions. If you don't have any answers, maybe you could suggest how I might find them out myself...
Running on a shared-memory machine, with the current directory set to j:\myprog - where J: is mapped to c:\users\john\documents - I discovered that all the MPI processes report the current directory as c:\windows\system32. After some hair loss, I tried switching to the canonical directory - c:\users\john\documents\myprog - and it works. It also works if I use -wdir to specify the canonical directory, but not if I use the mapped drive with -wdir.
Is this a bug?
I've installed MPI 3.2 on a C2Q 9550 with openSUSE 11.1. I've done it before on openSUSE 11.0 and never had any problems.
How do I send a structure with a pointer variable as one of its members?
For example, I have to send the following structure:
struct sample *l, *r;
Can you please tell me how to send a structure like the one above in MPI?
To whom it may concern:
I have tried to trace socket communication in a distributed application
using the Intel Trace Collector (ITC) 7.2.
But I am finding it hard because I can't consult any example (or guide)
sources (or projects) for socket communication.
So, if someone has an example, usage notes, or additional information (besides the
Reference Guide) on tracing socket communication with ITC, please send me the files.
I would greatly appreciate your prompt reply.
Can an MPI v3.2 single-user seat license be used in an HPC environment?
Can MPI v3.2 be installed on a login node and shared via an NFS server?
Is this legal?
I am relatively new to MPI programming.
I am wondering how I can start up each process manually within the same MPI communicator world space.
In addition, when only a single process fails, how can this be detected and the process relaunched automatically, without crashing the other processes and the host?
I hope somebody can advise on these two issues. Thanks.
I'd like to know if there are tools (aside from PAPI) that can be used to obtain hardware counter information such as floating-point operations, cache misses, and cache accesses for an application on a Nehalem processor.