Intel® Cluster Studio XE

Problem with IntelIB-Basic, Intel MPI and I_MPI_FABRICS=tmi

Hi,

We have a small cluster (a head node plus 4 nodes with 16 cores) using Intel InfiniBand. The cluster runs CentOS 6.6 (with the CentOS 6.5 kernel).

Intel Parallel Studio XE 2015 is installed on this cluster. By default, I_MPI_FABRICS is set to "tmi" only.

When I start a job (using Torque + Maui) on several nodes, for example this one:
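(The job script itself is not included above. As a stand-in, a minimal Torque submission script for an Intel MPI job on this kind of cluster might look like the sketch below; the resource request and the executable name are hypothetical, and the I_MPI_FABRICS override is shown only to make the fabric selection explicit.)

    #!/bin/bash
    #PBS -l nodes=2:ppn=16           # hypothetical resource request
    #PBS -N impi_fabric_test
    cd $PBS_O_WORKDIR
    # "tmi" alone is the cluster-wide default here; "shm:tmi" adds shared memory within a node
    export I_MPI_FABRICS=shm:tmi
    mpirun -np 32 -machinefile $PBS_NODEFILE ./my_mpi_app    # ./my_mpi_app is a placeholder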

Intel® Trace Analyzer and Collector 9.0 Update 2 Readme

The Intel® Trace Analyzer and Collector for Linux* and Windows* is a low-overhead scalable event-tracing library with graphical analysis that reduces the time it takes an application developer to enable maximum performance of cluster applications. This package is for users who develop on and build for Intel® 64 architectures on Linux* and Windows*, as well as customers running on the Intel® Xeon Phi™ coprocessor on Linux*. You must have a valid license to download, install and use this product.

Intel® MPI Library 5.0 Update 2 Readme

    The Intel® MPI Library is a high-performance interconnect-independent multi-fabric library implementation of the industry-standard Message Passing Interface, v3.0 (MPI-3.0) specification. This package is for MPI users who develop on and build for Intel® 64 architectures on Linux* and Windows*, as well as customers running on the Intel® Xeon Phi™ coprocessor on Linux*. You must have a valid license to download, install, and use this product.

    mpiexec starts multiple processes of same rank 0

    Hello everybody

    I am using the new Intel Composer XE 2015, which comes with a version of Intel MPI. Before that I used Open MPI, compiled with g++, and my programs ran fine. With the new Intel compiler I can compile and run the application, but if I execute, for example, mpiexec -n 12 test_program, the program is started 12 times, yet every process reports rank 0; the processes do not communicate and each assumes it is on its own. So I assume something is wrong with the mpiexec script or perhaps with a message-passing daemon. Thanks in advance for your answers.
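    A minimal rank check such as the sketch below (plain MPI calls, nothing specific to the application in question) makes the symptom easy to confirm: when the launcher and the MPI library the binary was linked against do not match, every process typically starts as a singleton and reports rank 0 of 1.

        /* rank_check.c - compile with: mpiicc rank_check.c -o rank_check */
        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            int rank, size;
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);
            printf("rank %d of %d\n", rank, size);
            MPI_Finalize();
            return 0;
        }

    If "mpiexec -n 12 ./rank_check" prints "rank 0 of 1" twelve times, the usual cause is a launcher/library mismatch, e.g. an Open MPI mpiexec still earlier in PATH; "which mpiexec" and "ldd ./test_program" show which launcher and MPI library are actually being used.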

    Best

    IMB 4.0 and data-check

    The data-check compile-time option seems poorly debugged.

    To reproduce:

    1. Compile IMB-MPI1 with data-check enabled (-DCHECK)

    2. Create a message-lengths file (for L in `seq 0 100`; do echo $L >> msg_len.txt; done)

    3. Run it with your favorite MPI implementation using two processes, in the simplest possible way, with the following arguments to IMB-MPI1: 

       -msglen msg_len.txt -iter 1 Exchange

    and terrible things happen.
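    For reference, the reproduction boils down to the commands below. The make invocation is only a sketch: the exact makefile name and the place where -DCHECK is added depend on the local IMB build setup.

        # 1. rebuild IMB-MPI1 with -DCHECK added to CPPFLAGS (set it in the make_* include file used by the build)
        make -f make_ict IMB-MPI1
        # 2. message-length file with lengths 0..100
        for L in `seq 0 100`; do echo $L >> msg_len.txt; done
        # 3. two processes, one iteration of the Exchange benchmark
        mpirun -np 2 ./IMB-MPI1 -msglen msg_len.txt -iter 1 Exchange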

    For example, with Open MPI and the command line:

    Linking with mpiicc (impi 5.0.1)

    Hello,
    I have been trying to configure (and compile) the PETSc library with impi 5.0.1 (and ifort Version 15.0.0.090 Build 20140723), using the mpiicc script for C compilation. However, the configure process fails with the error "...compiler mpiicc is broken! It is returning a zero error when the linking failed..."
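    A quick way to see the behavior PETSc's configure is complaining about is to force a link failure by hand and inspect the wrapper's exit status (conftest.c is just a throwaway name here):

        # reference a symbol that exists nowhere, so the link step must fail
        echo 'extern int no_such_symbol(void); int main(void){ return no_such_symbol(); }' > conftest.c
        mpiicc conftest.c -o conftest
        echo $?     # configure expects a non-zero status after a failed link; the report is that mpiicc returns 0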

    I think that there might be an issue with the following code snippet located at the end of the mpiicc script:

    mpiexec hang after program exit

    Our product includes a GUI part and an engine part. Intel MPI 4.1 is used in the engine code. The GUI calls the engine through mpiexec. Everything works fine on Windows 7 and Windows Server 2008. When we run the product on Windows 8 and Windows Server 2012, we run into a problem. 

    The code that starts the engine looks like CreateProcess( NULL, "mpiexec -n 2 engine", NULL, ... ). After the engine exits, mpiexec still hangs in memory and does not exit, which in turn hangs the GUI. If we run the engine from the command line, mpiexec exits after the engine exits. 
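    For comparison, a minimal launch-and-wait sequence with the Win32 API is sketched below (error handling omitted; the command line is the one from the post). Whether the Windows 8 / Server 2012 hang is related to how the process handles are waited on and closed is not established here; the sketch only gives a complete sequence to diff against.

        /* minimal sketch: launch mpiexec, wait for it, release the handles */
        #include <windows.h>

        void run_engine(void)
        {
            STARTUPINFOA si = { sizeof(si) };
            PROCESS_INFORMATION pi = { 0 };
            char cmd[] = "mpiexec -n 2 engine";   /* CreateProcess may modify this buffer */

            if (CreateProcessA(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi))
            {
                WaitForSingleObject(pi.hProcess, INFINITE);  /* block until mpiexec itself exits */
                CloseHandle(pi.hThread);
                CloseHandle(pi.hProcess);
            }
        }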

    Tuning Intel MPI for Phi

    Does setting

        I_MPI_MIC=enable

    change other MPI environment variables, particularly any that would tune MPI for the MIC system architecture?  

    As a side question, has anyone written a Tuning and Tweaking guide for IMPI for Phi?  For example, what I_MPI variables could one use to help tune an app targeting 480 ranks across 8 Phis?
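    For context, a typical host + coprocessor launch with Intel MPI looks roughly like the lines below; the host file, rank split, and binary names are hypothetical, and none of this comes from an official tuning guide.

        export I_MPI_MIC=enable            # allow ranks to be placed on the coprocessors
        export I_MPI_MIC_POSTFIX=.mic      # on coprocessor hosts, run ./app.mic instead of ./app
        export I_MPI_FABRICS=shm:dapl      # fabric choice is workload- and cluster-dependent
        mpirun -machinefile hosts.txt -n 480 ./app   # e.g. 60 ranks on each of 8 coprocessors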

    Thanks

    Ron
