Message Passing Interface

Can MPI be used to parallelize a QuickWin application?


I have a Fortran QuickWin application called GEMix in which I want to parallelize the computations in a single subroutine that makes no calls to QuickWin. The code is compiled and built (to GEMix.exe) in Visual Studio 2010 + Fortran Visual Cluster Studio 2016.

When I then open a command window and try to execute GEMix.exe via wmpiexec, I get the error message "Can't load dynamic library".

Is it impossible in principle to apply MPI to a QuickWin application, or can the error be resolved with a library reference, e.g. to the QuickWin library?

Best regards

Anders S
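
For context, the usual pattern for MPI-parallelizing a single compute routine while the rest of the program stays serial is sketched below. This is a generic illustration in C rather than the poster's Fortran/QuickWin code; the routine, the work loop, and all names are made up. Every rank runs the executable, the loop is split by rank, and the partial results are combined with a reduction, with only rank 0 going on to drive the user interface.

/* Generic sketch (not GEMix): MPI confined to one compute routine.
 * Build (Intel MPI): mpiicc compute_sketch.c -o compute_sketch
 * Run:               mpiexec -n 4 ./compute_sketch
 */
#include <mpi.h>
#include <stdio.h>

#define N 1000000

static double compute_partial(int rank, int size)
{
    double sum = 0.0;
    int i;
    /* each rank handles a strided share of the iterations */
    for (i = rank; i < N; i += size)
        sum += (double)i * 0.5;   /* placeholder for the real computation */
    return sum;
}

int main(int argc, char **argv)
{
    int rank, size;
    double local, total;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    local = compute_partial(rank, size);
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)                /* only rank 0 would go on to drive the GUI */
        printf("total = %f\n", total);

    MPI_Finalize();
    return 0;
}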

How to install Intel MPI from Parallel Studio XE 2016 Cluster Edition for Linux


I have Intel Parallel Studio XE 2016 Cluster Edition for Linux, but I cannot install all of the cluster-related tools from it. There is no option to choose the cluster tools. I downloaded the whole install package to my computer and then installed it. I just see:

ITAC Lustre integration

I am using itac- on a Linux cluster to profile rank 0 and rank 15 of a small MPI application. The code was compiled with the Intel compiler suite (2015.3.187) and impi-. The compile flags include "-g -tcollect" and I started the MPI job with "mpirun -trace". The application took a VERY long time to complete, but it did finish. Now ITAC is writing the trace data (~2328778.00 MB) to our Lustre file system. The problem is that I'm only getting about 50 MB/s to this file system, which is capable of much higher write speeds. Does ITAC have any internal awareness of Lustre like

Installing wrappers for using Intel MPI with the PGI compilers

OS:                  Linux RH 6.6
Intel MPI version:   impi/
PGI:                 pgi/15.1

I am trying to follow the instructions provided in the binding kit tarball, in the following file:


Installing bindings for F77, C, and C++ was straightforward because there are no name conflicts.

However, I see a problem with installing the F90 bindings, particularly with the module files. Following the instructions in the document mentioned above creates the following .mod files:

[root@sfe01 f90]# ll *.mod
-rw-r--r-- 1 root root 128646 Oct 15 19:25 mpi_base.mod

MPI_Comm_dup may hang with Intel MPI 4.1


The attached program simple_repro.c reproduces what I believe is a bug in the Intel MPI implementation version 4.1.

In short, it spawns <num_threads> threads in each of the 2 processes, such that thread i on rank 0 communicates with thread i on rank 1 over their own private communicator. The only difference between the two processes is that the threads on rank 0 are coordinated with a semaphore, so they cannot all be active at the same time. Threads on rank 1 run freely.
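
The attached simple_repro.c is the authoritative reproducer; the sketch below only illustrates the setup described above, with the thread count, the throttle limit, the message traffic, and the point at which MPI_Comm_dup is called all being assumptions. It duplicates MPI_COMM_WORLD once per thread pair, gates the rank-0 threads with a semaphore, and lets the rank-1 threads run freely.

/* Minimal sketch of the described setup (not the attached reproducer).
 * Build: mpiicc -pthread repro_sketch.c -o repro_sketch
 * Run:   mpirun -n 2 ./repro_sketch
 */
#include <mpi.h>
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define NUM_THREADS  8   /* stands in for <num_threads> */
#define ACTIVE_LIMIT 2   /* how many rank-0 threads may be active at once (assumed) */

static MPI_Comm comms[NUM_THREADS];  /* one private communicator per thread pair */
static sem_t    gate;                /* throttles the threads on rank 0 only */
static int      rank;

static void *worker(void *arg)
{
    int i = (int)(long)arg;
    int msg = i, peer = 1 - rank;

    if (rank == 0) sem_wait(&gate);   /* rank-0 threads can't all be active at once */

    /* thread i on rank 0 talks to thread i on rank 1 over comms[i] */
    MPI_Sendrecv_replace(&msg, 1, MPI_INT, peer, 0, peer, 0,
                         comms[i], MPI_STATUS_IGNORE);

    if (rank == 0) sem_post(&gate);
    return NULL;
}

int main(int argc, char **argv)
{
    int provided, i;
    pthread_t tid[NUM_THREADS];

    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) MPI_Abort(MPI_COMM_WORLD, 1);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* MPI_Comm_dup is collective: create the per-thread communicators up front */
    for (i = 0; i < NUM_THREADS; i++)
        MPI_Comm_dup(MPI_COMM_WORLD, &comms[i]);

    sem_init(&gate, 0, ACTIVE_LIMIT);
    for (i = 0; i < NUM_THREADS; i++)
        pthread_create(&tid[i], NULL, worker, (void *)(long)i);
    for (i = 0; i < NUM_THREADS; i++)
        pthread_join(tid[i], NULL);

    sem_destroy(&gate);
    for (i = 0; i < NUM_THREADS; i++)
        MPI_Comm_free(&comms[i]);
    MPI_Finalize();
    return 0;
}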

HPC and Magic of OpenMP thread Affinity Management: Compare performance when it is Not Used and Used...

HPC and Magic of OpenMP thread Affinity Management: Compare performance of matrix multiply when Thread Affinity is Not Used and Used...

Two screenshots are attached for your review.
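
For readers who cannot see the attachments, the comparison amounts to timing the same OpenMP matrix multiply twice: once with no affinity settings and once with the threads pinned, for example via KMP_AFFINITY for the Intel OpenMP runtime or the standard OMP_PROC_BIND variable. The sketch below is an assumed stand-in for the benchmark in the screenshots; the matrix size, loop order, and timing are not taken from the attached material.

/* Rough OpenMP matrix-multiply benchmark (assumed, not from the screenshots).
 * Build: icc -qopenmp -O2 matmul_affinity.c -o matmul_affinity
 * Compare, for example:
 *   ./matmul_affinity                        # affinity not used
 *   KMP_AFFINITY=compact ./matmul_affinity   # Intel OpenMP runtime pinning
 *   OMP_PROC_BIND=close  ./matmul_affinity   # standard OpenMP 4.0 pinning
 */
#include <omp.h>
#include <stdio.h>

#define N 1024

static double a[N][N], b[N][N], c[N][N];

int main(void)
{
    int i, j, k;
    double t0, t1;

    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++) {
            a[i][j] = 1.0; b[i][j] = 2.0; c[i][j] = 0.0;
        }

    t0 = omp_get_wtime();
    #pragma omp parallel for private(j, k)
    for (i = 0; i < N; i++)
        for (k = 0; k < N; k++)              /* ikj order for contiguous access */
            for (j = 0; j < N; j++)
                c[i][j] += a[i][k] * b[k][j];
    t1 = omp_get_wtime();

    printf("threads=%d  time=%.3f s  check=%.1f\n",
           omp_get_max_threads(), t1 - t0, c[0][0]);
    return 0;
}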


Any good tools/methods to debug an MPI-based program?

Dear all,

I have an MPI-based Fortran code that runs fine with one or two processes; however, when I launch the program with more processes, for example 4 processes, it crashes with the following message:

forrtl: severe (157): Program Exception - access violation
forrtl: severe (157): Program Exception - access violation

job aborted:
rank: node: exit code[: error message]
0: N01: 123
1: N01: 123
2: n02: 157: process 2 exited without calling finalize
3: n02: 157: process 3 exited without calling finalize
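
One low-tech technique that often helps with this kind of crash is to have every rank print its host and PID at startup and optionally pause, so that a debugger such as gdb can be attached to the failing rank before the access violation occurs. A sketch is below; it is shown in C, but the same pattern works from Fortran, and the MPI_DEBUG_WAIT environment variable name is made up for the example. Intel MPI's -check_mpi option and the I_MPI_DEBUG environment variable are also commonly used for this kind of problem.

/* Sketch: print rank/host/PID and optionally wait for a debugger to attach. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int rank, len;
    char host[MPI_MAX_PROCESSOR_NAME];
    volatile int hold = (getenv("MPI_DEBUG_WAIT") != NULL);  /* name is made up */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_processor_name(host, &len);

    printf("rank %d on %s has pid %d\n", rank, host, (int)getpid());
    fflush(stdout);

    while (hold)        /* attach gdb -p <pid>, then: set var hold = 0, continue */
        sleep(1);

    /* ... the original application code would follow here ... */

    MPI_Finalize();
    return 0;
}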


Mpirun is treating -perhost, -ppn, -grr the same: always round-robin

Our cluster has 2 Haswell sockets per node, each with 12 cores (24 cores/node).

Using: intel/15.1.133, impi/

Irrespective of which of the options mentioned in the subject line is used, the ranks are always placed in round-robin fashion. The commands are run in a batch job that generates a host file containing lines like the following when submitted with:

qsub -l nodes=2:ppn=1 ...


tfe02.% cat hostfile

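A quick way to see which placement the runtime actually chose, independent of the mpirun options, is to have each rank report the host it landed on. A minimal sketch is below; the mpiicc wrapper and the example mpirun line follow the usual Intel MPI conventions, and the rank counts are assumptions based on the 24-core nodes described above.

/* Placement check: each rank reports its host, so the effect of
 * -perhost / -ppn / -grr can be verified directly.
 * Build: mpiicc placement.c -o placement
 * Run:   mpirun -perhost 12 -n 24 ./placement | sort -n
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(host, &len);

    printf("%d of %d on %s\n", rank, size, host);

    MPI_Finalize();
    return 0;
}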