Intel® Cluster Ready

IMB 4.0 and data-check

The data-check compile-time option seems poorly debugged.

To reproduce:

1. Compile IMB-MPI1 with data-check enabled (-DCHECK)

2. Create a msg_lengths file (for L in `seq 0 100`; do echo $L >> msg_len.txt; done)

3. Run with your favorite MPI implementation using two processes, the simplest possible way, with the following arguments to IMB-MPI1:

   -msglen msg_len.txt -iter 1 Exchange

and terrible things happen.

For example, with Open MPI the failure shows up with a command line along the lines of the sketch below.
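
A sketch of the run, assuming IMB-MPI1 was built with -DCHECK as in step 1 and that Open MPI's mpirun is on the PATH:

    mpirun -np 2 ./IMB-MPI1 -msglen msg_len.txt -iter 1 Exchange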

Linking with mpiicc (impi 5.0.1)

Hello,
I have been trying to configure (and compile) the PETSc library with impi 5.0.1 (and ifort Version 15.0.0.090 Build 20140723), using the mpiicc script for C compilation. However, the configure process fails with the error "...compiler mpiicc is broken! It is returning a zero error when the linking failed..."

I think there might be an issue with the code at the end of the mpiicc script, since the compiler's exit status from a failed link does not seem to be propagated back to configure.
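
A quick way to see the symptom from a shell, assuming a trivial conftest.c and a deliberately broken link line (the library name below is only a placeholder used to force a link failure):

    mpiicc conftest.c -o conftest -lno_such_library
    echo $?    # the PETSc complaint implies this prints 0 even though ld reported an error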

mpiexec hangs after program exit

Our product includes a GUI part and an engine part. Intel MPI 4.1 is used in the engine code, and the GUI launches the engine through mpiexec. Everything works fine on Windows 7 and Windows Server 2008, but when we run the product on Windows 8 and Windows Server 2012 we run into a problem.

The code that starts the engine looks like CreateProcess( NULL, "mpiexec -n 2 engine", NULL, ........ ). After the engine exits, mpiexec stays resident in memory and never exits, which in turn hangs the GUI. If we run the engine from the command line, mpiexec exits after the engine exits.

Check Intel® Xeon Phi™ Coprocessors with Intel® Cluster Checker 2.2

The Intel® Cluster Checker tool evaluates HPC clusters for consistency, functionality, and performance. This includes the capability to evaluate the hardware configuration of Intel® Xeon Phi™ coprocessors.

This module describes how to use Intel® Cluster Checker to evaluate Intel® Xeon Phi™ coprocessors in a cluster.

Assigning physical hard drives to Intel® Xeon Phi™ coprocessors

Intel® Xeon Phi™ coprocessors are able to directly mount and use block devices that are attached to the host system. This article provides basic instructions for formatting and mounting a hard disk drive (HDD) or solid state drive (SSD) natively on the coprocessor. Benefits include direct, dedicated, and persistent storage for individual coprocessors. A large cluster could use these to store additional software packages and data, providing more available memory without impacting network traffic. In addition, this method supports creation of swap partitions.
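
As a very rough sketch of the final mount step only, assuming the drive has already been made visible inside the first coprocessor and that /dev/sda1 and /mnt/disk are placeholder names (the export and configuration steps are what the article itself covers):

    # format the partition; if the coprocessor's busybox lacks mkfs, format it from the host instead
    ssh mic0 "mkfs.ext2 /dev/sda1"

    # create a mount point and mount the partition inside the coprocessor's Linux environment
    ssh mic0 "mkdir -p /mnt/disk && mount /dev/sda1 /mnt/disk"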

Intel® Cluster Checker 2 Custom Tests

Intel® Cluster Checker 2.x includes the ability to extend its capabilities, allowing the user to create additional custom checks with the <generic> test module.

Tuning Intel MPI for Phi

Does setting

    I_MPI_MIC=enable

change other MPI environment variables, particularly any that would tune MPI for the MIC system architecture?
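
One way to check, as a sketch: with a debug level set, Intel MPI prints the I_MPI_* settings it applies at startup, so the same binary can be launched with and without the variable and the two startup logs compared (./a.out is just a placeholder program):

    export I_MPI_DEBUG=5
    mpirun -np 2 ./a.out     # note the I_MPI_* lines printed at startup

    export I_MPI_MIC=enable
    mpirun -np 2 ./a.out     # compare which settings changed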

As a side question, has anyone written a tuning and tweaking guide for Intel MPI on Xeon Phi? For example, which I_MPI variables could one use to help tune an app targeting 480 ranks across 8 Xeon Phi coprocessors?

Thanks

Ron

IFORT constant numbers

Hello,

I have frequently heard that in C++ HPC applications for Xeon Phi it is beneficial to declare variables as const when possible. However, I cannot seem to find out whether the same is possible in Fortran. Is there a way to get this type of optimization with ifort?

Fatal error in MPI_Init: Other MPI error, error stack: MPIR_Init_thread(264): Initialization failed

Hello,

I am running Intel MPI for the Intel mp_linpack benchmark (xhpl_em64t).

Steps:

1. I sourced the mpivars.sh from /opt/intel/impi/bin64/mpivars.sh

2. I did "mpdboot -f hostfile"

       $ cat hostfile
       node 1
       node 2

3. I did "mpirun -f hostfile -ppn 1 -np 2 ./xhpl_em64t"

After step 3, errors occurred. Below is the error message with I_MPI_DEBUG=50:
