Intel® Cluster Ready

Intel® Parallel Studio XE 2016 Beta program has started!

The Intel® Parallel Studio XE 2016 Beta program is now available!

In this beta test, you will have early access to Intel® Parallel Studio XE 2016 products and the opportunity to provide feedback to help make our products better. Registration is easy through the pre-Beta survey site.

This suite of products brings together exciting new technologies along with improvements to Intel’s existing software development tools.

Problem with Intel Trace Collector

I'm trying to use the Intel Trace Collector for the first time on a cluster machine (with Intel Cluster Studio XE 2013 and ITAC 8.1.2.033).

I built my program in the standard production mode, and the bash script submitted to the PBS scheduler contained the following commands:

#PBS -l select=14:ncpus=16:mem=120gb:mpiprocs=16

module load intel/cs-xe-2013

source [path]/cs-xe-2013/none/itac/8.1.2.033/bin/itacvars.sh

mpirun -trace [path]/my_program [arguments]

Deadlock with MPI_Win_fence going from Intel MPI 4.1.3.049 to 5.0.3.048

We encountered a problem when migrating a code from Intel MPI 4.1.3.049 to 5.0.3.048. The code in question is a complex simulation that first reads the global input state from disk into several parts in memory and then accesses this memory in a hard-to-predict fashion to create a new decomposition. We use active target RMA for this (on machines that support it, such as BG/Q, we also use passive target), since a rank might need data from a part held by another rank to form its halo.
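For context, here is a minimal sketch of the active target (MPI_WIN_FENCE) pattern described above; the window layout, array names, sizes, and the choice of neighbor are illustrative assumptions, not the original code:

! Minimal active-target RMA sketch: every rank exposes its local part of the
! global state in a window and fetches a remote piece it needs for its halo.
! All names and sizes here are illustrative, not taken from the original code.
program fence_sketch
  use mpi
  implicit none
  integer :: ierr, rank, nranks, win, target_rank
  integer, parameter :: nlocal = 1000
  double precision, allocatable :: local_part(:), halo(:)
  integer(kind=MPI_ADDRESS_KIND) :: winsize, disp

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nranks, ierr)

  allocate(local_part(nlocal), halo(nlocal))
  local_part = rank                      ! dummy data owned by this rank

  ! Expose the locally owned part of the global state.
  winsize = nlocal * 8_MPI_ADDRESS_KIND  ! size in bytes (double precision)
  call MPI_WIN_CREATE(local_part, winsize, 8, MPI_INFO_NULL, &
                      MPI_COMM_WORLD, win, ierr)

  ! Active target synchronization: fence, remote gets, fence.
  call MPI_WIN_FENCE(0, win, ierr)
  target_rank = mod(rank + 1, nranks)    ! e.g. fetch halo data from the next rank
  disp = 0
  call MPI_GET(halo, nlocal, MPI_DOUBLE_PRECISION, target_rank, &
               disp, nlocal, MPI_DOUBLE_PRECISION, win, ierr)
  call MPI_WIN_FENCE(0, win, ierr)       ! completes the gets on all ranks

  call MPI_WIN_FREE(win, ierr)
  call MPI_FINALIZE(ierr)
end program fence_sketch

Both fences are collective over the window's communicator, so if any rank fails to reach a fence, the other ranks block there.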

[UPDATED]: Maximum MPI Buffer Dimension

Hi,

Is there a maximum MPI buffer size? I have a buffer dimension problem in my MPI code when trying to MPI_Pack large arrays. The offending instruction is the first pack call:

CALL MPI_PACK( VAR(GIB,LFMG)%R,LVB,MPI_DOUBLE_PRECISION,BUF,LBUFB,ISZ,MPI_COMM_WORLD,IE )

where the double precision array R has LVB=6331625 elements, BUF = 354571000, and LBUF = BUF*8 = 2836568000 (since I also have to send 6 other arrays with the same dimension as R).
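Note that LBUF = 2836568000 is larger than 2147483647, the largest value a default 32-bit INTEGER can hold, so the byte count overflows before MPI ever sees it. Under that assumption (it may not be the actual cause here), the sketch below packs and sends each large array in its own smaller buffer; all names, sizes, and the two-rank transfer are illustrative:

! Sketch: instead of one pack buffer larger than 2 GB (whose byte count
! overflows a default 32-bit INTEGER), pack and send each large array in its
! own smaller buffer. Names and sizes are illustrative; run with >= 2 ranks.
program pack_in_pieces
  use mpi
  implicit none
  integer, parameter :: nelem = 6331625        ! elements per array (example)
  integer, parameter :: narrays = 7            ! arrays to transfer (example)
  double precision, allocatable :: r(:)
  character, allocatable :: buf(:)
  integer :: bufsize, position, i, rank, ierr

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

  allocate(r(nelem))
  r = 1.0d0

  bufsize = nelem * 8 + 1024                   ! ~50 MB, well below 2**31 - 1
  allocate(buf(bufsize))

  if (rank == 0) then
     do i = 1, narrays
        position = 0
        call MPI_PACK(r, nelem, MPI_DOUBLE_PRECISION, &
                      buf, bufsize, position, MPI_COMM_WORLD, ierr)
        call MPI_SEND(buf, position, MPI_PACKED, 1, i, MPI_COMM_WORLD, ierr)
     end do
  else if (rank == 1) then
     do i = 1, narrays
        call MPI_RECV(buf, bufsize, MPI_PACKED, 0, i, MPI_COMM_WORLD, &
                      MPI_STATUS_IGNORE, ierr)
        position = 0
        call MPI_UNPACK(buf, bufsize, position, r, nelem, &
                        MPI_DOUBLE_PRECISION, MPI_COMM_WORLD, ierr)
     end do
  end if

  call MPI_FINALIZE(ierr)
end program pack_in_pieces

Keeping each pack buffer under 2 GB also keeps the count arguments of MPI_SEND and MPI_RECV within the default INTEGER range.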

The error output is the following:

MPI_Recv block a long time

Hello:
    I have run into trouble using MPI_Recv in my program.
    My program starts 3 subprocesses and binds them to CPUs 1-3 respectively. In each subprocess, it first disables interrupts, then sends messages to the other processes and receives from them. This is repeated a billion times.
    I expect MPI_Recv to return within a fixed time, and I do not want to use MPI_Irecv instead.
    To achieve that, I disabled interrupts and cancelled ticks on CPUs 1-3, moved the other processes from CPUs 1-3 to CPU 0, and bound interrupts to CPU 0.
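Here is a minimal sketch of the kind of exchange loop described above, with a timer around each blocking receive to measure how long MPI_Recv actually blocks; the iteration count, message size, and exchange ordering are illustrative assumptions, and the CPU pinning and interrupt handling happen outside MPI and are not shown:

! Sketch: every rank exchanges a small message with every other rank and
! times how long each blocking MPI_Recv takes. Iteration count, message
! size, and the exchange pattern are illustrative only.
program recv_timing
  use mpi
  implicit none
  integer, parameter :: niter = 100000
  integer :: ierr, rank, nranks, it, p
  integer :: sendbuf, recvbuf
  double precision :: t0, tmax

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nranks, ierr)

  sendbuf = rank
  tmax = 0.0d0

  do it = 1, niter
     do p = 0, nranks - 1
        if (p == rank) cycle
        if (rank < p) then
           ! Lower rank sends first, then receives, to avoid deadlock.
           call MPI_SEND(sendbuf, 1, MPI_INTEGER, p, 0, MPI_COMM_WORLD, ierr)
           t0 = MPI_WTIME()
           call MPI_RECV(recvbuf, 1, MPI_INTEGER, p, 0, MPI_COMM_WORLD, &
                         MPI_STATUS_IGNORE, ierr)
           tmax = max(tmax, MPI_WTIME() - t0)
        else
           t0 = MPI_WTIME()
           call MPI_RECV(recvbuf, 1, MPI_INTEGER, p, 0, MPI_COMM_WORLD, &
                         MPI_STATUS_IGNORE, ierr)
           tmax = max(tmax, MPI_WTIME() - t0)
           call MPI_SEND(sendbuf, 1, MPI_INTEGER, p, 0, MPI_COMM_WORLD, ierr)
        end if
     end do
  end do

  print *, 'rank', rank, ': longest time blocked in MPI_Recv =', tmax, 's'
  call MPI_FINALIZE(ierr)
end program recv_timing

MPI_Recv can only return once the matching send has arrived, so the measured time includes any delay on the sending side, not just local overhead.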

Run MPI job on LSF for Windows

When I run an MPI job on Linux using LSF, I just use bsub to submit the following script file:

#!/bin/bash
#BSUB -n 8
#BSUB -R "OSNAME==Linux && ( SPEED>=2500 ) && ( OSREL==EE60 || OSREL==EE58 || OSREL==EE63 ) &&
SFIARCH==OPT64 && mem>=32000"
#BSUB -q lnx64
#BSUB -W 1:40
cd my_working_directory
mpirun  mympi

The system will start 8 mympi jobs.  I don't need to specify machine names in the mpirun command line. 

Fault Tolerance Question

Hello there,

I am trying to do some experiments with fault tolerance in MPI from Fortran, but I'm having trouble. I am calling the routine

  CALL MPI_COMM_SET_ERRHANDLER(MPI_COMM_WORLD, MPI_ERRORS_RETURN, ierr)

which seems to work more or less. After calling, for instance, MPI_SENDRECV, the variable STATUS does not report any error, i.e. STATUS(MPI_ERROR) is always zero. The ierr integer may be nonzero though, and that's what I've been trying to catch instead.
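That behavior matches the standard: the MPI_ERROR field of a status is only guaranteed to be set by calls that return multiple statuses (MPI_WAITALL, MPI_WAITSOME, and so on), so checking ierr is the right approach for single-completion calls. Here is a minimal sketch of that pattern; the ring exchange itself is an illustrative assumption, not the code from the post:

! Sketch: with MPI_ERRORS_RETURN installed on MPI_COMM_WORLD, MPI reports
! errors through the ierr argument instead of aborting the job. The ring
! exchange below is only there to have a call whose result can be checked.
program errhandler_sketch
  use mpi
  implicit none
  integer :: ierr, ierr2, rank, nranks, next, prev, msglen
  integer :: status(MPI_STATUS_SIZE)
  integer :: sendbuf, recvbuf
  character(len=MPI_MAX_ERROR_STRING) :: msg

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nranks, ierr)

  ! Report errors through ierr instead of aborting (the default handler
  ! MPI_ERRORS_ARE_FATAL would kill the job before we could react).
  call MPI_COMM_SET_ERRHANDLER(MPI_COMM_WORLD, MPI_ERRORS_RETURN, ierr)

  next = mod(rank + 1, nranks)
  prev = mod(rank - 1 + nranks, nranks)
  sendbuf = rank

  call MPI_SENDRECV(sendbuf, 1, MPI_INTEGER, next, 0, &
                    recvbuf, 1, MPI_INTEGER, prev, 0, &
                    MPI_COMM_WORLD, status, ierr)

  ! Check ierr, not STATUS(MPI_ERROR): single-completion calls are not
  ! required to fill in the MPI_ERROR field of the status.
  if (ierr /= MPI_SUCCESS) then
     call MPI_ERROR_STRING(ierr, msg, msglen, ierr2)
     print *, 'rank', rank, ': MPI_SENDRECV failed: ', msg(1:msglen)
  end if

  call MPI_FINALIZE(ierr)
end program errhandler_sketch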
