Intel® Clusters and HPC Technology

[UPDATED]: Maximum MPI Buffer Dimension

Hi,

Is there a maximum size for an MPI buffer? I have a buffer-size problem in my MPI code when trying to MPI_Pack large arrays. The offending instruction is the first pack call:

CALL MPI_PACK( VAR(GIB,LFMG)%R,LVB,MPI_DOUBLE_PRECISION,BUF,LBUFB,ISZ,MPI_COMM_WORLD,IE )

where the double-precision array R has LVB = 6331625 elements, BUF is dimensioned 354571000, and LBUFB = 354571000*8 = 2836568000 (since I have to send six other arrays with the same dimension as R).

The error output is the following:
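Whatever the exact message, the numbers above point to a likely culprit: 2836568000 does not fit in a default 4-byte Fortran INTEGER (HUGE(1) = 2147483647), so a size or position argument of that magnitude passed to MPI_PACK wraps to a negative value. A minimal sketch of the arithmetic, assuming default integer kinds (kind 8 as the 64-bit integer is compiler-dependent but holds for ifort and gfortran):

  PROGRAM pack_size_check
    USE mpi
    IMPLICIT NONE
    INTEGER :: ierr, nbytes
    INTEGER, PARAMETER :: lvb = 6331625   ! elements per array, from the post

    CALL MPI_INIT(ierr)
    ! Ask MPI how many bytes one packed array of LVB doubles needs.
    CALL MPI_PACK_SIZE(lvb, MPI_DOUBLE_PRECISION, MPI_COMM_WORLD, nbytes, ierr)
    ! A default INTEGER argument tops out at HUGE(1); the post's total
    ! of 2836568000 bytes exceeds it and cannot be passed to MPI_PACK.
    PRINT *, 'bytes per packed array:', nbytes
    PRINT *, 'HUGE(1)               :', HUGE(1)
    PRINT *, 'requested buffer size :', 2836568000_8
    CALL MPI_FINALIZE(ierr)
  END PROGRAM pack_size_check

Splitting the transfer into several MPI_PACK/MPI_SEND rounds of under 2 GiB each, or sending the seven arrays individually, keeps every count argument inside the default INTEGER range.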

MPI_Recv blocks for a long time

Hello,

I get into trouble when using MPI_Recv in my programs. My program starts 3 subprocesses and binds them to CPUs 1-3, respectively. Each subprocess first disables interrupts, then sends a message to the other processes and receives from them; this is repeated a billion times.

I expect MPI_Recv to return within a fixed time, and I do not want to use MPI_Irecv instead. To achieve that, I disabled interrupts and cancelled ticks on CPUs 1-3, moved the other processes from CPUs 1-3 to CPU 0, and bound interrupts to CPU 0.
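For reference, a stripped-down analogue of that exchange, timing each MPI_Recv with MPI_WTIME so the variation can be measured; the ring pattern and repeat count are placeholders, not the poster's actual setup:

  PROGRAM recv_timing
    USE mpi
    IMPLICIT NONE
    INTEGER, PARAMETER :: reps = 100000   ! placeholder; the post repeats a billion times
    INTEGER :: ierr, rank, nprocs, i, src, dst, msg
    INTEGER :: status(MPI_STATUS_SIZE)
    DOUBLE PRECISION :: t0, tmax

    CALL MPI_INIT(ierr)
    CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
    CALL MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
    dst = MOD(rank + 1, nprocs)           ! next rank in a ring
    src = MOD(rank - 1 + nprocs, nprocs)  ! previous rank
    tmax = 0.0D0
    DO i = 1, reps
      ! A one-integer message is sent eagerly by common MPI
      ! implementations, so this simple ring does not deadlock in practice.
      CALL MPI_SEND(rank, 1, MPI_INTEGER, dst, 0, MPI_COMM_WORLD, ierr)
      t0 = MPI_WTIME()
      CALL MPI_RECV(msg, 1, MPI_INTEGER, src, 0, MPI_COMM_WORLD, status, ierr)
      tmax = MAX(tmax, MPI_WTIME() - t0)  ! track the worst-case wait
    END DO
    PRINT *, 'rank', rank, 'worst MPI_RECV wait (s):', tmax
    CALL MPI_FINALIZE(ierr)
  END PROGRAM recv_timing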

Run MPI job on LSF for Windows

When I run an MPI job on Linux using LSF, I just use bsub to submit the following script file:

#!/bin/bash
#BSUB -n 8
#BSUB -R "OSNAME==Linux && ( SPEED>=2500 ) && ( OSREL==EE60 || OSREL==EE58 || OSREL==EE63 ) &&
SFIARCH==OPT64 && mem>=32000"
#BSUB -q lnx64
#BSUB -W 1:40
cd my_working_directory
mpirun mympi

The system starts 8 mympi jobs, and I don't need to specify machine names on the mpirun command line. How can I do the same with LSF on Windows?

Intel MPI gives wrong number of physical cores on Core i7 Q820?

Hi,

I have begun learning MPI on my Dell 4500 with a Core i7 Q820 processor (4 physical and 8 logical cores).

When I run a simple Fortran program to get the rank and size, I get 0 and 1 instead of 0 and 3 (see attached code).

What is wrong?

Best regards

Anders S
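Nothing in MPI reports physical cores here: MPI_COMM_SIZE returns the number of processes launched, so ranks 0..3 appear only when four processes are started (size 4, not 3). Getting size 1 usually means the executable ran as a single process, e.g. launched directly or with an mpiexec that does not match the MPI library it was linked against. A minimal test, assuming the Intel tools:

  PROGRAM hello_mpi
    USE mpi
    IMPLICIT NONE
    INTEGER :: ierr, rank, nprocs

    CALL MPI_INIT(ierr)
    CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)   ! this process's id, 0..size-1
    CALL MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr) ! processes started by mpiexec
    PRINT *, 'rank', rank, 'of', nprocs
    CALL MPI_FINALIZE(ierr)
  END PROGRAM hello_mpi

Compiled with mpiifort and launched via "mpiexec -n 4 hello.exe" this should print four lines; run without mpiexec it prints a single "rank 0 of 1".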

Fault Tolerance Question

Hello there,

I am trying to do some experiments with fault tolerance in MPI with FORTRAN, but I'm having trouble. I am calling the routine

  CALL MPI_COMM_SET_ERRHANDLER(MPI_COMM_WORLD, MPI_ERRORS_RETURN, ierr)

which seems to work, more or less. After calling, for instance, MPI_SENDRECV, the STATUS variable does not report any error; i.e., STATUS(MPI_ERROR) is always zero. The ierr integer may be nonzero, though, and that's what I've been trying to catch instead.
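That behaviour is consistent with the standard: the MPI_ERROR field of a status is only required to be filled in by the multiple-completion calls (MPI_WAITALL, MPI_TESTSOME, and friends), so for a single MPI_SENDRECV the returned ierr is the right thing to test. A small sketch that provokes an error on purpose (the invalid destination rank is the only contrivance):

  PROGRAM errors_return_demo
    USE mpi
    IMPLICIT NONE
    INTEGER :: ierr, ierr2, rank, errclass, buf, msglen
    CHARACTER(LEN=MPI_MAX_ERROR_STRING) :: msg

    CALL MPI_INIT(ierr)
    ! Return error codes instead of aborting the job.
    CALL MPI_COMM_SET_ERRHANDLER(MPI_COMM_WORLD, MPI_ERRORS_RETURN, ierr)
    CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

    buf = rank
    ! Deliberately trigger an error: rank -2 is invalid.
    CALL MPI_SEND(buf, 1, MPI_INTEGER, -2, 0, MPI_COMM_WORLD, ierr)
    IF (ierr /= MPI_SUCCESS) THEN
      CALL MPI_ERROR_CLASS(ierr, errclass, ierr2)
      CALL MPI_ERROR_STRING(ierr, msg, msglen, ierr2)
      PRINT *, 'caught error class', errclass, ': ', msg(1:msglen)
    END IF

    CALL MPI_FINALIZE(ierr)
  END PROGRAM errors_return_demo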

MODULEFILE creation the easy way

If you use Environment Modules (from SourceForge, SGI, Cray, etc.) to set up and control your shell environment variables, we've created a new article on how to quickly and correctly create a modulefile. The technique is fast and produces a correct modulefile for any Intel Developer Products tool.

The article is here:  https://software.intel.com/en-us/articles/using-environment-modules-with-the-intel-compiler

How to increase MPI one-sided performance with Intel MPI?

Hello,

we have an application whose domain decomposition ends with basically two sequences of actions:

one set of tasks (subset a) calls

call mpi_win_lock(some_rank_from_subset_b)
call mpi_get(some_rank_from_subset_b)
call mpi_win_unlock(some_rank_from_subset_b)

while the others (subset b) are stuck in the MPI_Barrier at the end of the domain decomposition. This performs nicely (the domain decomposition passes within seconds) with MVAPICH on our new Intel Xeon machine and on another machine with IBM BlueGene/Q hardware.
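Written out with the real RMA signatures (the window, buffer sizes, and target rank below are placeholders, not the application's actual code), the subset-a sequence looks roughly like this:

  PROGRAM rma_sketch
    USE mpi
    IMPLICIT NONE
    INTEGER :: ierr, rank, nprocs, win, target
    INTEGER(KIND=MPI_ADDRESS_KIND) :: winsize, disp
    DOUBLE PRECISION :: base(100), local(100)

    CALL MPI_INIT(ierr)
    CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
    CALL MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)

    base = DBLE(rank)          ! data the window exposes
    winsize = 100 * 8          ! window size in bytes
    CALL MPI_WIN_CREATE(base, winsize, 8, MPI_INFO_NULL, MPI_COMM_WORLD, win, ierr)

    IF (rank == 0 .AND. nprocs > 1) THEN
      target = 1               ! stand-in for some_rank_from_subset_b
      disp = 0
      ! Passive-target epoch: the get is only guaranteed complete
      ! once MPI_WIN_UNLOCK returns.
      CALL MPI_WIN_LOCK(MPI_LOCK_SHARED, target, 0, win, ierr)
      CALL MPI_GET(local, 100, MPI_DOUBLE_PRECISION, target, disp, &
                   100, MPI_DOUBLE_PRECISION, win, ierr)
      CALL MPI_WIN_UNLOCK(target, win, ierr)
      PRINT *, 'got', local(1), 'from rank', target
    END IF

    ! Everyone else waits here, like subset b in the question.
    CALL MPI_BARRIER(MPI_COMM_WORLD, ierr)
    CALL MPI_WIN_FREE(win, ierr)
    CALL MPI_FINALIZE(ierr)
  END PROGRAM rma_sketch

If this pattern crawls under Intel MPI while MVAPICH completes in seconds, one knob worth trying (an assumption, not a verified fix) is Intel MPI's asynchronous progress support, e.g. I_MPI_ASYNC_PROGRESS=1 in recent releases, since a passive-target get can otherwise stall until the target rank, sitting in MPI_Barrier, re-enters the library and makes progress.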

Intel MPI suddenly exited

Hello,

I installed Intel MPI on Windows 7 x64 and executed "mpiexec -n 4 program.exe". It seemed to run fine for about 30 hours and was using the expected resources. However, the process suddenly exited with about 10 hours remaining in the computation, with the following error stack:

"

[mpiexec@Simulation-PC] ..\hydra\pm\pmiserv_cb.c (773): connection to proxy 0 at host Simulation-PC failed
