Intel® Cluster Studio

Profiling a complex MPI application: CESM (Community Earth System Model)

Hello. 

CESM is a complex, highly parallel MPI climate model.

I am looking for ways to profile CESM runs. The default profiler provides data for only a few routines. I have tried external profilers such as TAU, HPCToolkit, Allinea MAP, Intel Trace Analyzer and Collector (ITAC), and VTune.

When running CESM across a cluster (8 nodes with 16 processors each), HPCToolkit and Allinea MAP were the most useful for profiling. However, I am keen on finding two metrics for each CESM routine executed. These are:

Performance issues of Intel MPI 5.0.2.044 on Windows 7 SP 1 with 2x18-core CPUs

Dear support team,

I have a question about a performance difference between Windows 7 SP 1 and RHEL 6.5.

The situation is as follows:
The hardware is a Dell Precision Rack 7910; see the link below for the exact specification (click on Components):
http://www.dell.com/support/home/us/en/19/product-support/servicetag/3X8GG42/configuration

MPI_Init_thread or MPI_Init failed in child process

I have two programs, A and B. Both are developed with MPI. A calls B.

If I start A directly and it calls B, everything is OK.

If I start A with mpiexec, e.g. mpiexec -localonly 2 A.exe, and it calls B, then MPI_Init_thread or MPI_Init fails in B.
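The structure of A is roughly the following (simplified sketch; the file and program names are illustrative, not my actual code):

/* A.c -- simplified sketch of the parent program (names are illustrative) */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* A launches B as an ordinary child process. B.exe calls
       MPI_Init_thread (or MPI_Init) at startup, and that call is
       what fails once A itself has been started via mpiexec. */
    system("B.exe");

    MPI_Finalize();
    return 0;
}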

Below is the error message I got.

Need to type "Enter"?

Hi, Everyone,

I am running hybrid MPI/OpenMP jobs on a 3-node InfiniBand Linux cluster. Each node runs one MPI process with 15 OpenMP threads, so my job runs with 3 MPI processes and each MPI process has 15 threads.
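Each MPI process essentially runs code of the following shape (simplified skeleton, not the actual application):

/* Hybrid MPI/OpenMP skeleton: 1 MPI rank per node, 15 OpenMP threads
   per rank (illustration only, not the actual application code) */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank;

    /* FUNNELED is sufficient when only the master thread calls MPI */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    #pragma omp parallel num_threads(15)
    {
        #pragma omp single
        printf("rank %d: %d OpenMP threads\n", rank, omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}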

The hosts.txt file is as follows:

coflowrhc4-5:1
coflowrhc4-6:1
coflowrhc4-7:1

I wrote the following batch file:

/************** batch file******************/

Intel MPI 5.0.3.048 and I_MPI_EXTRA_FILESYSTEM: How to tell it's on?

All,

I hope the Intel MPI experts here can help me out. Intel MPI 5.0.3.048 was recently installed on our cluster, which uses a GPFS filesystem. Looking at the release notes, I saw that "I_MPI_EXTRA_FILESYSTEM_LIST gpfs" is now available. Great! I thought I'd try to see whether I can observe an effect.
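A timed collective MPI-IO write seems like the kind of test that should show a difference on GPFS; a minimal sketch (the path and transfer size are placeholders of my own):

/* Minimal MPI-IO timing test (sketch); the GPFS path is a placeholder */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

#define CHUNK (16 * 1024 * 1024)

int main(int argc, char **argv)
{
    MPI_File fh;
    int rank;
    static char buf[CHUNK];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    memset(buf, rank & 0xff, CHUNK);

    double t0 = MPI_Wtime();
    MPI_File_open(MPI_COMM_WORLD, "/gpfs/scratch/io_test.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_File_write_at_all(fh, (MPI_Offset)rank * CHUNK, buf, CHUNK,
                          MPI_BYTE, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("collective write took %.3f s\n", t1 - t0);

    MPI_Finalize();
    return 0;
}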

Cannot use jemalloc with Intel MPI

Hi,

I've tried to benchmark several memory allocators on Linux (64-bit), such as ptmalloc2, tcmalloc, and jemalloc, with an application linked against Intel MPI (4.1.3.049).

Launching any application linked with jemalloc causes execution to abort with signal 11. The same application works without any issue when it is not linked with Intel MPI.

Is Intel MPI doing its own malloc/free?
How can this issue be overcome?

Thanks,
Eloi

 

Problem when multiple MPI versions are installed

Dear all,

I have a problem launching processes when multiple MPI versions are installed. The processes worked before I installed the latest MPI 5.0.3.048:

C:\Program Files (x86)\Intel\MPI\4.1.3.047>mpiexec -wdir "Z:\test" -mapall -hosts 10 n01 6 n02 6 n03 6 n04 6 n05 6 n06 6 n07 6 n08 6 n09 6 n10 6 Z:\test

However, after I installed MPI 5.0.3.048, the following error is displayed when I launch mpiexec in the 4.1.3.047 environment:

Aborting: unable to connect to N01, smpd version mismatch

 

Intel MPI 3.2.2.006: supported OS and number of cores per machine

Has Intel made a statement as to the last known good version of Red Hat that supports Intel MPI 3.2.2.006? We have a new cluster with 20 cores per node and have observed a Fortran system call failing when more than 15 cores per node are used. This machine is running Red Hat 6.6 x86_64.

Alternatively, are there known conditions under which Intel MPI 3.2.2.006 will fail?

Bug Report: MPI_IRECV invalid tag problem

Hi there,

In MPI 5.0.3, MPI_TAG_UB is set to 1681915906, but internally the upper bound is 2^29 = 536870912, as tested by the attached code.

The same code runs just fine in MPI 4.0.3.
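The check boils down to something like the following (my own C sketch of the idea, not the attached test itself):

/* Sketch: query the advertised MPI_TAG_UB and post a receive with a
   large tag; with Intel MPI 5.0.3 the Irecv reportedly fails with
   "invalid tag" once the tag exceeds 2^29 - 1 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int *tag_ub_ptr, flag, buf;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_TAG_UB, &tag_ub_ptr, &flag);
    if (flag)
        printf("advertised MPI_TAG_UB = %d\n", *tag_ub_ptr);

    /* Tag above 2^29 but below the advertised upper bound */
    int tag = 600000000;
    MPI_Irecv(&buf, 1, MPI_INT, MPI_ANY_SOURCE, tag, MPI_COMM_WORLD, &req);

    MPI_Cancel(&req);
    MPI_Wait(&req, MPI_STATUS_IGNORE);
    MPI_Finalize();
    return 0;
}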

Just to let you guys know about the problem.

Hope to see the fix soon. Thanks.

Xudong

Encl:

2-1. Source Code

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
