Message Passing Interface

Deadlock with MPI_Win_fence going from Intel MPI 4.1.3.049 to 5.0.3.048

We encountered a problem when migrating a code from Intel MPI 4.1.3.049 to 5.0.3.048. The code in question is a complex simulation that first reads the global input state from disk into several parts in memory and then accesses this memory in a hard-to-predict fashion to create a new decomposition. We use active-target RMA for this (on machines that support it, like BG/Q, we also use passive target), since a rank might need data from a part that lives on another rank to form its halo.
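
For context, a minimal active-target sketch of this pattern (illustrative only, not the code in question): every rank exposes its part of the state in an RMA window, and halo data is fetched with MPI_GET between two collective MPI_WIN_FENCE calls. Each fence must be reached by all ranks, so a rank that skips a fence, or an implementation that mishandles one, stalls the whole job; that is one way a fence deadlock arises.

PROGRAM fence_halo
  USE mpi
  IMPLICIT NONE
  INTEGER, PARAMETER :: N = 1024
  DOUBLE PRECISION :: PART(N)   ! this rank's piece of the global state
  DOUBLE PRECISION :: HALO(N)   ! data fetched from the neighbouring rank
  INTEGER :: WIN, RANK, NPROC, IE
  INTEGER(KIND=MPI_ADDRESS_KIND) :: WINSIZE, DISP

  CALL MPI_INIT(IE)
  CALL MPI_COMM_RANK(MPI_COMM_WORLD, RANK, IE)
  CALL MPI_COMM_SIZE(MPI_COMM_WORLD, NPROC, IE)

  PART = DBLE(RANK)
  WINSIZE = INT(N, MPI_ADDRESS_KIND) * 8
  CALL MPI_WIN_CREATE(PART, WINSIZE, 8, MPI_INFO_NULL, MPI_COMM_WORLD, &
                      WIN, IE)

  CALL MPI_WIN_FENCE(0, WIN, IE)   ! collective: every rank must arrive
  DISP = 0
  ! Fetch the neighbour's part; stands in for the irregular halo reads.
  CALL MPI_GET(HALO, N, MPI_DOUBLE_PRECISION, MOD(RANK+1, NPROC), DISP, &
               N, MPI_DOUBLE_PRECISION, WIN, IE)
  CALL MPI_WIN_FENCE(0, WIN, IE)   ! epoch closes; HALO is now usable

  CALL MPI_WIN_FREE(WIN, IE)
  CALL MPI_FINALIZE(IE)
END PROGRAM fence_halo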

[UPDATED]: Maximum MPI Buffer Dimension

Hi,

Is there a maximum dimension for an MPI buffer? I have a buffer-size problem in my MPI code when trying to MPI_Pack large arrays. The offending instruction is the first pack call:

CALL MPI_PACK( VAR(GIB,LFMG)%R,LVB,MPI_DOUBLE_PRECISION,BUF,LBUFB,ISZ,MPI_COMM_WORLD,IE )

where the double-precision array R has LVB = 6331625 elements, BUF = 354571000, and LBUFB = BUF*8 = 2836568000 (since I have to send six other arrays with the same dimension as R).

The error output is the following:
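
Note that 2836568000 exceeds HUGE(0) = 2147483647, the largest value a default 4-byte INTEGER can hold, so a byte count that size overflows before MPI_PACK ever sees it. A minimal sketch of one workaround, assuming that diagnosis (names are illustrative, not the original code): pack each array into its own buffer sized via MPI_PACK_SIZE, so no single count approaches the limit.

PROGRAM pack_one_array
  USE mpi
  IMPLICIT NONE
  INTEGER, PARAMETER :: LVB = 6331625
  DOUBLE PRECISION :: R(LVB)
  CHARACTER, ALLOCATABLE :: SENDBUF(:)
  INTEGER :: NBYTES, POS, IE

  CALL MPI_INIT(IE)
  R = 0.0D0
  ! Let MPI compute the packed size of one array: about 50 MB here,
  ! comfortably below HUGE(0).
  CALL MPI_PACK_SIZE(LVB, MPI_DOUBLE_PRECISION, MPI_COMM_WORLD, NBYTES, IE)
  ALLOCATE(SENDBUF(NBYTES))
  POS = 0
  CALL MPI_PACK(R, LVB, MPI_DOUBLE_PRECISION, SENDBUF, NBYTES, POS, &
                MPI_COMM_WORLD, IE)
  ! Send SENDBUF with count POS and type MPI_PACKED, then repeat for the
  ! remaining six arrays instead of packing all seven into one buffer.
  CALL MPI_FINALIZE(IE)
END PROGRAM pack_one_array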

MPI_Recv blocks for a long time

Hello,
    I have run into trouble when using MPI_Recv in my programs.
    My program starts 3 subprocesses and binds them to CPUs 1-3 respectively. In each subprocess, it first disables interrupts, then sends messages to the other processes and receives from them. This repeats a billion times.
    I expect MPI_Recv to return within a fixed time, and I do not want to use MPI_Irecv instead.
    To achieve that, I disabled interrupts and cancelled ticks on CPUs 1-3, moved all other processes from CPUs 1-3 to CPU 0, and bound interrupts to CPU 0.
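
A minimal sketch of such an exchange loop (illustrative names, not the poster's code): each rank trades one integer with its ring neighbours per iteration. MPI_SENDRECV is used here because two plain MPI_SEND calls can mutually block; the receive half still waits for the matching message, so any delay on the sending CPU shows up directly as receive latency.

PROGRAM ring_exchange
  USE mpi
  IMPLICIT NONE
  INTEGER :: RANK, NPROC, MSG, NEXT, PREV, ITER, IE
  INTEGER :: STATUS(MPI_STATUS_SIZE)

  CALL MPI_INIT(IE)
  CALL MPI_COMM_RANK(MPI_COMM_WORLD, RANK, IE)
  CALL MPI_COMM_SIZE(MPI_COMM_WORLD, NPROC, IE)
  NEXT = MOD(RANK + 1, NPROC)
  PREV = MOD(RANK - 1 + NPROC, NPROC)

  DO ITER = 1, 1000000   ! scaled-down stand-in for the billion repeats
     ! Combined send+receive: the receive only completes once the
     ! matching message from PREV has actually arrived.
     CALL MPI_SENDRECV(RANK, 1, MPI_INTEGER, NEXT, 0, &
                       MSG,  1, MPI_INTEGER, PREV, 0, &
                       MPI_COMM_WORLD, STATUS, IE)
  END DO

  CALL MPI_FINALIZE(IE)
END PROGRAM ring_exchange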

Run MPI job on LSF for Windows

When I run an MPI job on Linux using LSF, I just use bsub to submit the following script file:

#!/bin/bash
#BSUB -n 8
#BSUB -R "OSNAME==Linux && ( SPEED>=2500 ) && ( OSREL==EE60 || OSREL==EE58 || OSREL==EE63 ) &&
SFIARCH==OPT64 && mem>=32000"
#BSUB -q lnx64
#BSUB -W 1:40
cd my_working_directory
mpirun  mympi

The system will start 8 mympi jobs.  I don't need to specify machine names in the mpirun command line. 

Fault Tolerance Question

Hello there,

I am trying to do some experiments with fault tolerance in MPI with Fortran, but I'm having trouble. I am calling the routine

  CALL MPI_COMM_SET_ERRHANDLER(MPI_COMM_WORLD, MPI_ERRORS_RETURN, ierr)

which seems to work, more or less. After calling, for instance, MPI_SENDRECV, the STATUS variable does not report any error, i.e. STATUS(MPI_ERROR) is always zero. The ierr integer may be nonzero, though, and that's what I've been trying to catch instead.
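
A minimal sketch of that checking pattern (illustrative names, not the poster's code). One detail from the MPI standard is relevant here: the MPI_ERROR field of a status is only required to be set by the multi-completion calls (MPI_WAITSOME, MPI_TESTALL, and friends), so for a single MPI_SENDRECV the returned ierr is indeed the value to test.

PROGRAM errreturn
  USE mpi
  IMPLICIT NONE
  INTEGER :: RANK, IE, IE2, ERRLEN, SBUF, RBUF
  INTEGER :: STATUS(MPI_STATUS_SIZE)
  CHARACTER(LEN=MPI_MAX_ERROR_STRING) :: EMSG

  CALL MPI_INIT(IE)
  CALL MPI_COMM_RANK(MPI_COMM_WORLD, RANK, IE)
  ! Errors now come back in the IERR argument instead of aborting.
  CALL MPI_COMM_SET_ERRHANDLER(MPI_COMM_WORLD, MPI_ERRORS_RETURN, IE)

  SBUF = RANK
  CALL MPI_SENDRECV(SBUF, 1, MPI_INTEGER, RANK, 0, &
                    RBUF, 1, MPI_INTEGER, RANK, 0, &
                    MPI_COMM_WORLD, STATUS, IE)
  IF (IE /= MPI_SUCCESS) THEN
     CALL MPI_ERROR_STRING(IE, EMSG, ERRLEN, IE2)
     PRINT *, 'MPI_SENDRECV failed: ', EMSG(1:ERRLEN)
  END IF

  CALL MPI_FINALIZE(IE)
END PROGRAM errreturn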

MODULEFILE creation the easy way

If you use Environment Modules (from SourceForge, SGI, Cray, etc.) to set up and control your shell environment variables, we've created a new article on how to quickly and correctly create a modulefile. The technique is fast and produces a correct modulefile for any Intel Developer Products tool.

The article is here:  https://software.intel.com/en-us/articles/using-environment-modules-with-the-intel-compiler

Android: Multi-threaded Resumable Downloading Explained, with Source Code

This project implements functionality similar to that of download tools such as Xunlei: multi-threaded, resumable downloading.
The main techniques involved are:
1. Android's mechanism for communication between the main thread and worker threads.
2. Multi-threaded programming and thread management.
3. Android network programming.
4. Designing and implementing a design pattern yourself: the listener pattern.
5. Activity, Service, and database programming.
6. The Android file system.
7. Caching.

Blog post links:
Android: Multi-threaded Resumable Downloading Explained, with Source Code (Part 1)
Android: Multi-threaded Resumable Downloading Explained, with Source Code (Part 2)
Android: Multi-threaded Resumable Downloading Explained, with Source Code (Part 4)
