Intel® Clusters and HPC Technology


Is the Berkeley Lab Checkpoint/Restart (BLCR) library supported in Intel MPI? With open-source MPI libraries it is often a compile-time choice (e.g., Open MPI, LAM, and MVAPICH).


We have an evaluation ifort (version 11.0) installation with which I am attempting to compile and run an MPI application. I am running on a single two-processor Intel Xeon machine with Mandriva Linux 2008.1, on which we have installed OpenMPI (openmpi-1.2.8-1mdv2008.1). The application in question has previously run successfully on this machine when compiled with ifc, gfortran, and lf95. After compiling it with ifort, however, the MPI error handler emits error messages when I attempt to run the application:
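One thing worth checking in a setup like this is which back-end compiler the Open MPI wrappers actually invoke. A sketch, assuming the standard Open MPI wrapper environment overrides (`OMPI_F77`/`OMPI_FC`); the application file name is a placeholder:

```shell
# Point Open MPI's Fortran wrappers at ifort instead of the default compiler
export OMPI_F77=ifort   # used by mpif77
export OMPI_FC=ifort    # used by mpif90

# Show which compiler and flags the wrapper will use
mpif90 --showme

# Rebuild the application (file name is a placeholder)
mpif90 -O2 -o myapp myapp.f90
```

Note that the `mpi` Fortran module shipped with a gfortran-built Open MPI cannot be used from ifort; if the distribution package was built with another compiler, rebuilding Open MPI itself with ifort may be necessary.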

Infiniband-Intel MPI Performance MM5

Dear colleagues,

We are running MM5 on an InfiniBand DDR cluster, using the latest Intel MPI and Intel Fortran; our mm5.mpp binary was compiled with the configuration suggested on this website.

This is the way we launch:
[c2@ Run]$ time /intel/impi/ -genv I_MPI_PIN_PROCS 0-7 -np 32 -env I_MPI_DEVICE rdma ./mm5.mpp

link error when compiling the Intel Optimized MP LINPACK Benchmark for Clusters

I have compiled mp_linpack on a Dell M600 with Mellanox InfiniBand cards.
The version of intel compiler I have is 10.1.018 and the version of MKL is
I use the mp_linpack package shipped under the MKL benchmarks directory, and I modified its arch file, Make.em64t.

The important environment variables are:
MPdir = /usr/mpi/intel/mvapich-1.1.0
MPinc = -I$(MPdir)/include
MPlib = $(MPdir)/lib/libmpich.a

Intel MPI 3.2.1

I want to build HPL (High Performance LINPACK), and I have Intel MPI 3.2.1. While filling in Make.UNKNOWN, I did not know what to put for the CC and LINKER entries. Before version 3.2.0 of Intel MPI they were mpiicc and mpiifort, respectively, but they do not seem to be provided in the new version. Could someone tell me the answer?
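For reference, the CC and LINKER entries in an HPL `Make.<arch>` file point at the MPI compiler wrappers. A minimal sketch, assuming the Intel MPI wrappers are on the PATH (they normally live in the Intel MPI `bin64` directory; exact paths vary by installation):

```
# Fragment of an HPL Make.<arch> file (illustrative, not a complete file)
CC        = mpiicc      # Intel MPI wrapper around icc
LINKER    = mpiifort    # Intel MPI wrapper around ifort
CCFLAGS   = $(HPL_DEFS) -O2
LINKFLAGS = $(CCFLAGS)
```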

thanks a lot!

mpdallexit: cannot connect to local mpd

I get the following problem when I execute "mpirun -r ssh -f mpd.hosts -n 2 ./testcpp":

mpiexec_cluster-master (mpiexec 841): no msg recvd from mpd when expecting ack of request. Please examine the /tmp/mpd2.logfile_user log file on each node of the ring.
mpdallexit: cannot connect to local mpd (/tmp/mpd2.console_user_090519.111321_4345); possible causes:
1. no mpd is running on this host
2. an mpd is running but was started without a "console" (-n option)
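Both causes listed in the error can usually be checked from the shell before launching the job. A sketch, assuming Intel MPI's MPD-based process manager and the same mpd.hosts file used above:

```shell
# Check whether an mpd ring is already running and reachable
mpdtrace

# If not, start a ring across the hosts in mpd.hosts, using ssh
mpdboot -n 2 -f mpd.hosts -r ssh

# Launch the job against the running ring
mpiexec -n 2 ./testcpp

# Tear the ring down when finished
mpdallexit
```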

problem with mpich on windows HPC cluster


I have a cluster with Windows HPC Server 2007 Service Pack 1.
I cannot run MPI communication calls on the system.

When I run the following simple code on 2 nodes, using one core on each:
use mpi
implicit real*8(a-h,o-z)

call MPI_Init ( ierr )
call MPI_Comm_rank ( MPI_COMM_WORLD, my_id, ierr )
! num_procs must be set before it is printed
call MPI_Comm_size ( MPI_COMM_WORLD, num_procs, ierr )

print *, my_id, num_procs

call MPI_Barrier ( MPI_COMM_WORLD, ierr )

print *, my_id, num_procs

call MPI_Finalize ( ierr )
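On a Windows HPC cluster, how the job is launched matters as much as the code itself. A sketch of a two-node, one-core-per-node launch with the MS-MPI `mpiexec -hosts` syntax (node names and the executable name are placeholders):

```shell
mpiexec -hosts 2 node1 1 node2 1 mytest.exe
```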

ifort 11 ipo COMMON padding breaks NPB MPI_Bcast

We verified that ifort 11.0 and 11.1 option -ipo (also implied by -fast) breaks the MPI_Bcast call in a NAS Parallel benchmark. Even though the data types in the labeled COMMON which is set up as a Bcast buffer are all 32-bit, -ipo pads some to 64-bit boundaries. This breaks the legacy assumption that COMMON padding doesn't occur except as needed to move items to a boundary which is a multiple of their size.

Subscribe to Intel® Clusters and HPC Technology