Intel® Clusters and HPC Technology

Questions on Intel MPI and Intel Cluster MKL

I installed Intel MPI 2.0 and Intel Cluster MKL and wanted to run the ScaLAPACK tests. However, when I try to compile the ScaLAPACK tests in the cmkl directory, I get the following message:

(cd source/TESTING; make ROOTdir=/opt/intel/cmkl/8.0/tests/scalapack LIBdir= MPIdir= arch=ia32 mpi=intelmpi20 comp=intel opt=O2 dynamic= arch=ia32 dynamic=no exe run)
make[1]: Entering directory `/opt/intel/cmkl/8.0/tests/scalapack/source/TESTING'

Strange result on MPI::Gather benchmark

Hi,

I am running some basic benchmarks on a small (educational) cluster. The nodes are P4 2.4 GHz machines with 512 MB RAM and National Semiconductor 83820 Gigabit Ethernet cards (MTU 1500 for this test).

I benchmarked on 2, 4, 6, 8, 10, and 12 nodes. With 2 nodes I get this strange result:
http://aspirine.li/mesure2.pdf
The x-axis is the packet size in bytes; the y-axis is the time in seconds.

The MTU effects are clearly visible, but I can't explain the two distinct curves.
This is what I have done:
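
For reference, a minimal sketch of such a gather timing loop (a reconstruction, not the code from the post; it uses the C API rather than the C++ MPI::Gather binding, and the message sizes are illustrative):

/* Time MPI_Gather at the root for a range of message sizes. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    int rank, nprocs;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* double the message size each iteration, 1 byte .. 1 MB */
    for (int size = 1; size <= 1 << 20; size *= 2) {
        char *sendbuf = malloc(size);
        char *recvbuf = (rank == 0) ? malloc((size_t)size * nprocs) : NULL;
        memset(sendbuf, 0, size);

        MPI_Barrier(MPI_COMM_WORLD);        /* line the ranks up before timing */
        double t0 = MPI_Wtime();
        MPI_Gather(sendbuf, size, MPI_BYTE,
                   recvbuf, size, MPI_BYTE, 0, MPI_COMM_WORLD);
        double t1 = MPI_Wtime();

        if (rank == 0)
            printf("%d bytes: %g s\n", size, t1 - t0);

        free(sendbuf);
        free(recvbuf);
    }

    MPI_Finalize();
    return 0;
}

Averaging over many repetitions per message size, rather than timing a single call, usually smooths out the kind of run-to-run noise that can produce multiple apparent curves.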

Single Developer license question

We are a small group (3 developers) interested in using Intel MPI. Could you please explain exactly what a "Single Developer" license means? Is it tied to one login ID, or can it simply only be used by one person at a time? We don't build MPI applications very often, so if 3 developers can share one Intel MPI development environment, that would work for us.

survey on the usage of parallel programming systems

Hi,

Have you ever wanted to know what parallel programming systems and languages the rest of the world is using? Or what platforms parallel applications are developed for? Or which organizations are performance-hungry enough to take on the burden of developing parallel applications?

Me too. That is why, as part of my PhD research, I have set up a short survey on parallel programming. You can find it here:

http://www.plm.eecs.uni-kassel.de/parasurvey/

Problem with mpdboot

Hi everyone,

I'm trying to get the Intel MPI library to work on a 16-node cluster. I'm following the instructions in the "Getting_Started.pdf" file under the "Setting up MPD Daemons" section. I'm at the point where I am supposed to start the MPD daemons with mpdboot, using the command:

mpdboot -v -n 16 -r ssh -f .mpd.hosts
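
For reference, the .mpd.hosts file here is just a plain-text list of node hostnames, one per line, e.g. (hypothetical names, continued through the 16th node):

node01
node02
node03
node04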

The boot starts out fine, but then I get an error message saying that the syntax of the mpdboot.py file is incorrect:

ITC problems


Clay,

I'm using IA-32 (a 16-node dual-Xeon cluster). No, I don't think the problem is with the program: it is the 'hello world' example, and it runs with both MPICH2 and Intel MPI 1 without ITC. I also tried other, more complex cases, and they fail only when I use ITC. I guess my problem must be with the installation of ITC 6, although I followed the instructions in the user guide. Is there any simple check I can run to test the installation? Should I try v5?

Thank you
Paolo
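
For reference, a generic MPI 'hello world' of the kind described above (a sketch, not Paolo's actual source):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, nprocs;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    printf("hello from rank %d of %d\n", rank, nprocs);

    MPI_Finalize();
    return 0;
}

Building such a program once without and once with the ITC link flags (-lVT and friends) and comparing the two runs is one simple way to separate an ITC installation or linking problem from a problem in the program itself.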

Itanium2 - Rocks problem

Hi,

I am trying to run an atmospheric code on Itanium 2 (2 x 4-way SMP) with Rocks. The code is mixed F77/F90 and uses domain decomposition for parallelisation. It contains only point-to-point MPI communications.

I have used the following MPI implementations, all with the same results:

a. Intel MPI 1.0 (IFC 8.0 & IFC 9.0)
b. Intel MPI 2.0 (beta) (IFC 9.0)
c. mpich-1.2.7 (IFC 8.0 & IFC 9.0)
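
For illustration, the point-to-point pattern such a domain-decomposed code typically relies on is a halo (ghost-cell) exchange; a minimal 1-D sketch in C (generic, not the atmospheric code itself):

#include <mpi.h>

#define N 1024  /* interior points per rank; illustrative */

int main(int argc, char **argv)
{
    int rank, nprocs;
    double u[N + 2];                    /* one ghost cell on each side */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    for (int i = 0; i < N + 2; i++)
        u[i] = rank;

    /* neighbours; MPI_PROC_NULL turns boundary exchanges into no-ops */
    int left  = (rank > 0)          ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < nprocs - 1) ? rank + 1 : MPI_PROC_NULL;

    /* paired send/receive avoids the deadlock that a naive ordering of
       blocking MPI_Send/MPI_Recv calls can cause */
    MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left, 0,
                 &u[N + 1], 1, MPI_DOUBLE, right, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&u[N], 1, MPI_DOUBLE, right, 1,
                 &u[0], 1, MPI_DOUBLE, left, 1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    MPI_Finalize();
    return 0;
}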

problems with Intel Trace Collector


Hi all,
I've installed Intel Trace Collector 6 on Red Hat Linux, and I use Intel Fortran 9 and MPICH2.
After compiling the sample program in intel/mpi/1.0.2/test with

mpif90 test.f90 -c

and linking with

mpif90 test.o -L${VT_ROOT}/lib -lVT -ldwarf -lelf -lnsl -lm -lpthread -o ftest

I get the following error message:

aborting job:
Fatal error in MPI_Comm_dup: Invalid communicator, error stack:
MPI_Comm_dup(171): MPI_Comm_dup(comm=0x5b, new_comm=0xbfffc250) failed
