Message Passing Interface

Need to type "Enter"?

Hi, Everyone,

I am running hybrid MPI/OpenMP jobs on a 3-node InfiniBand Linux PC cluster. Each node runs one MPI process with 15 OpenMP threads, so the job runs with 3 MPI processes and each MPI process has 15 threads.

The hosts.txt file is as follows:

coflowrhc4-5:1
coflowrhc4-6:1
coflowrhc4-7:1

I wrote the batch file as follows:

/************** batch file ******************/
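As an illustration only, not the poster's actual script, a launcher for a 3-rank by 15-thread hybrid job with Intel MPI might look like the sketch below; OMP_NUM_THREADS, I_MPI_PIN_DOMAIN, and the application name ./my_hybrid_app are assumptions.

#!/bin/bash
# Sketch: 3 MPI ranks (one per host in hosts.txt), 15 OpenMP threads per rank.
export OMP_NUM_THREADS=15      # threads per MPI process
export I_MPI_PIN_DOMAIN=omp    # give each rank's threads their own pinning domain (assumed setting)
mpirun -machinefile hosts.txt -np 3 ./my_hybrid_app   # ./my_hybrid_app is hypothetical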

Intel MPI 5.0.3.048 and I_MPI_EXTRA_FILESYSTEM: How to tell it's on?

All,

I hope the Intel MPI experts here can help me out. Intel MPI 5.0.3.048 was recently installed on our cluster, which uses a GPFS filesystem. Looking at the release notes, I saw that "I_MPI_EXTRA_FILESYSTEM_LIST gpfs" is now available. Great! I thought I'd try it and see whether I can observe an effect.
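As a sketch of one way to check, assuming the I_MPI_EXTRA_FILESYSTEM / I_MPI_EXTRA_FILESYSTEM_LIST variables from the release notes, a hypothetical MPI-IO test binary ./mpiio_test, and that the I_MPI_DEBUG output mentions the file-system support (an assumption to verify):

export I_MPI_EXTRA_FILESYSTEM=on
export I_MPI_EXTRA_FILESYSTEM_LIST=gpfs
export I_MPI_DEBUG=5                 # scan the debug output for file-system related lines
mpirun -np 16 ./mpiio_test           # hypothetical MPI-IO benchmark run on the GPFS mount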

Cannot use jemalloc with IntelMPI

Hi,

I've tried to benchmark several memory allocators on Linux (64-bit), such as ptmalloc2, tcmalloc, and jemalloc, with an application linked against IntelMPI (4.1.3.049).

Launching any application linked with jemalloc causes the execution to abort with signal 11. The same application works without any issue when it is not linked with IntelMPI.

Is IntelMPI doing its own malloc/free?
How can this issue be overcome?

Thanks,
Eloi
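One way to narrow this down, sketched under the assumption that the library paths below match the actual system, is to check whether libmpi exports its own malloc symbols and to try preloading jemalloc instead of linking it in (./app is a hypothetical application name):

# Does libmpi define malloc/free itself? (install path is an assumption)
nm -D /opt/intel/impi/4.1.3.049/intel64/lib/libmpi.so | grep -wE 'malloc|free'
# Preload jemalloc rather than linking against it (library path is an assumption)
LD_PRELOAD=/usr/lib64/libjemalloc.so.1 mpirun -np 4 ./app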


Problem when multiple MPI versions are installed

Dear all,

I have a problem launching processes when multiple MPI versions are installed. The processes worked before I installed the latest MPI 5.0.3.048:

C:\Program Files (x86)\Intel\MPI\4.1.3.047>mpiexec -wdir "Z:\test" -mapall -hosts 10 n01 6 n02 6 n03 6 n04 6 n05 6 n06 6 n07 6 n08 6 n09 6 n10 6 Z:\test

However, after I installed MPI 5.0.3.048, the following error is displayed when I launch mpiexec in the 4.1.3.047 environment:

Aborting: unable to connect to N01, smpd version mismatch
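A sketch of one way to investigate on each node, assuming the smpd service options inherited from the MPICH2-based process manager (-status, -remove, -install; verify with smpd -help) and using <arch> as a placeholder for the architecture folder of the 4.1.3.047 installation:

:: Illustration only; options and paths are assumptions.
cd "C:\Program Files (x86)\Intel\MPI\4.1.3.047\<arch>\bin"
smpd.exe -status
smpd.exe -remove
smpd.exe -install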


Intel MPI 3.2.2.006 and Supported OS and number of cores per machine

Has Intel made a statement as to the last known good version of Red Hat that supports Intel MPI 3.2.2.006? We have a new cluster with 20 cores per node and have observed a Fortran system call failing when more than 15 cores per node are used. This machine is running Red Hat 6.6 x86_64.

Alternatively, are there known conditions under which Intel MPI 3.2.2.006 will fail?

Bug Report: MPI_IRECV invalid tag problem

Hi there,

In MPI 5.0.3, MPI_TAG_UB is set to 1681915906. But internally, the upper bound is 2^29 = 536870912, as shown by the attached code.

The same code runs just fine in MPI 4.0.3.

Just to let you guys know the problem.

Hope to see the fix soon. Thanks.

Xudong

Encl:

2-1. Source Code

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
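As a sketch only, not the original enclosure, a minimal Fortran reproducer for this kind of check can query MPI_TAG_UB and then post an MPI_IRECV with that tag, switching the error handler to MPI_ERRORS_RETURN so an invalid-tag error is reported instead of aborting:

! Sketch: probe whether the advertised MPI_TAG_UB is actually accepted by MPI_IRECV.
program tag_test
  use mpi
  implicit none
  integer :: ierr, err, req, buf(1)
  integer(kind=MPI_ADDRESS_KIND) :: tag_ub
  logical :: flag

  call MPI_INIT(ierr)
  ! Return errors instead of aborting so the invalid-tag case can be printed.
  call MPI_COMM_SET_ERRHANDLER(MPI_COMM_WORLD, MPI_ERRORS_RETURN, ierr)
  call MPI_COMM_GET_ATTR(MPI_COMM_WORLD, MPI_TAG_UB, tag_ub, flag, ierr)

  ! Post a receive using the advertised upper bound as the tag.
  call MPI_IRECV(buf, 1, MPI_INTEGER, MPI_ANY_SOURCE, int(tag_ub), &
                 MPI_COMM_WORLD, req, err)
  print *, 'MPI_TAG_UB =', tag_ub, '  MPI_IRECV ierror =', err

  if (err == MPI_SUCCESS) then
    call MPI_CANCEL(req, ierr)
    call MPI_REQUEST_FREE(req, ierr)
  end if
  call MPI_FINALIZE(ierr)
end program tag_test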

mpivars.sh to build path automatically

It looks like on Linux the Intel MPI runtime hardcodes the installation path in mpivars.sh, e.g.:
I_MPI_ROOT=/opt/intel/impi/4.1.3.049; export I_MPI_ROOT

On Windows, on the other hand, the path is generated dynamically:
SET I_MPI_ROOT=%~dp0..\..

Is there any reason why the path cannot also be generated automatically on Linux?
http://stackoverflow.com/questions/242538/unix-shell-script-find-out-whi...
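As a sketch of the kind of change being asked about, mpivars.sh could compute the root from its own location, assuming it is sourced from bash (so BASH_SOURCE is defined) and that the script sits one directory level below the install root (adjust the .. components to the real layout):

# Sketch: derive I_MPI_ROOT from the location of this script instead of hardcoding it.
I_MPI_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
export I_MPI_ROOT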

License File Activation

I just received my serial number for single-user Intel Cluster Studio 2015 (Linux). I completed product registration and generated a license file. However, the license file generation page didn't show any information or steps on how to apply the license file...

I am currently using an evaluation version of Intel Cluster Studio 2015 on a Linux workstation, so I want to use the above license file. I copied the license file into the /opt/intel/license folder. Do I need to execute any command? Are there any guidelines or info?

Thx.
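For reference, a sketch of the usual manual steps, assuming the common /opt/intel/licenses search directory and the INTEL_LICENSE_FILE variable used by Intel's licensing; the file name your_license.lic is a placeholder:

# Sketch only: file name is a placeholder; directory and variable are common defaults.
sudo mkdir -p /opt/intel/licenses
sudo cp your_license.lic /opt/intel/licenses/
export INTEL_LICENSE_FILE=/opt/intel/licenses   # point the tools at the license directory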
