Intel® Cluster Ready

Intel MPI Library Troubleshooting Guide

The latest versions of the Intel MPI Library User's Guides have added an expanded Troubleshooting section.  It provides the following information:

  • General Intel MPI Library troubleshooting practices
  • Typical MPI failures with corresponding output messages and behavior when a failure occurs
  • Recommendations on potential root causes and solutions

Direct links to these new sections are provided in the latest Intel MPI Library User's Guides.

How to profile WRF?

Hi,

I'm Choi W. I'm running WRF and I want it to run faster, so I am trying to profile it. What do I need to do?

I want to use the options below. Where do I need to add them in configure.wrf?

---------------------------------------------------------------------------

-profile-functions -profile-loops=all -profile-loops-report=2

---------------------------------------------------------------------------

Is there any other way to do this? I would appreciate any guidance.

Thank you.
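A sketch of one common placement, assuming a typical generated configure.wrf (variable names such as FCOPTIM vary by WRF version and the architecture stanza chosen at configure time, so check your own file):

```make
# configure.wrf is a Makefile fragment. One place WRF collects Fortran
# optimization flags is FCOPTIM; appending the profiling options there
# applies them to every compiled source file. (Sketch only: verify the
# variable name in your generated configure.wrf.)
FCOPTIM = -O3 -profile-functions -profile-loops=all -profile-loops-report=2
```

After a full rebuild with these options, running WRF makes the Intel compiler emit its loop/function profiling output in the working directory, which can then be inspected for hotspots.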

Compiling Intel Linpack MP Benchmark mpiicc error

Hello!  I am new to the Intel Linpack Benchmark.  I downloaded the evaluation version of Intel Parallel Studio XE Cluster Edition, followed the directions, and was able to run the single-system benchmark at about 340 GFlops.  When I tried to compile mp_linpack, I got an error that mpiicc could not be found.  I did find mpiicc (I believe under the /opt/intel/ directory) and added that directory to $PATH, but compiling still reports that mpiicc is not found.  I changed the Makefile to use mpicc and it compiled, but I want to use Intel mpiicc to get the best result.
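A note on the usual cause, with a sketch (the install paths below are placeholders, adjust them to the actual installation): mpiicc is typically not on $PATH until the Intel MPI environment script has been sourced, and since mpiicc is a wrapper around icc, the Intel compiler environment must be set up as well.

```shell
# Placeholder paths: replace <version> with the installed directory name.
# Set up the Intel compiler environment (mpiicc wraps icc, so icc must
# also be resolvable):
source /opt/intel/bin/compilervars.sh intel64
# Set up Intel MPI, which puts mpiicc on PATH:
source /opt/intel/impi/<version>/intel64/bin/mpivars.sh
which mpiicc    # should now resolve
```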

Building netcdf-4.3.3.1 with the Intel MPI Library with parallel support: FAIL: run_par_test.sh

Dear Support

I am trying to build netcdf-4.3.3.1 with parallel support using the Intel MPI Library 4.1 in order to build RegCM-4.4.5.5.

I set the following environment variables before running the configure command:

export CC=mpiicc

export CXX=mpiicpc

export CFLAGS='-O3 -xHost -ip -no-prec-div -static-intel'

export CXXFLAGS='-O3 -xHost -ip -no-prec-div -static-intel'

export F77=mpiifort

export FC=mpiifort
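For context, a sketch of the configure step that follows (paths are placeholders; netcdf-4's parallel tests such as run_par_test.sh also require an HDF5 library built with parallel support, and a serial HDF5 underneath is a common cause of this failure):

```shell
# Placeholder paths: point CPPFLAGS/LDFLAGS at a *parallel* HDF5 build.
CPPFLAGS='-I/path/to/parallel-hdf5/include' \
LDFLAGS='-L/path/to/parallel-hdf5/lib' \
./configure --prefix=/path/to/netcdf --enable-parallel-tests
make && make check    # make check runs the parallel test scripts
```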

How to use the Intel® Cluster Checker v3 SDK with gcc5

When compiling connector extensions with the Intel® Cluster Checker v3 SDK, it is recommended to use an Intel compiler version 15.0 or newer and a gcc/g++ compiler version 4.9.0 or newer, as described in the Intel® Cluster Checker Developer's Guide. This explicitly includes gcc version 5.1.0 and newer.
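A sketch of how the two compilers can be combined (the source file name is hypothetical; -gxx-name is the Intel compiler option for selecting which g++ provides headers and libraries):

```shell
# Hypothetical file names; point the Intel compiler at a specific g++ 5:
icpc -gxx-name=/usr/bin/g++-5 -fPIC -shared -o my_connector.so my_connector.cpp
# If linking against objects built with the pre-gcc5 C++ ABI, gcc 5's
# dual ABI can be pinned to the old one:
icpc -gxx-name=/usr/bin/g++-5 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -shared \
     -o my_connector.so my_connector.cpp
```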

    Bizarre authenticity of host issue when running across multiple nodes with Intel MPI

    I am attempting to run a job across three nodes.  I have configured passwordless ssh, and it definitely works between every pair of nodes (each node can ssh to the other two without a password).  The known_hosts file is correct and all three nodes have identical .ssh directories.  I have also tried adding the keys to ssh-agent, although I'm not sure that was necessary, as I didn't specify a passphrase when generating the id_rsa key (I know this is poor security, but it's temporary for the sake of testing).
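A quick diagnostic sketch (node names are placeholders): confirm that ssh is truly non-interactive from the launch node, then try the smallest possible multi-node run before the real job.

```shell
# Placeholder node names; BatchMode=yes fails instead of prompting,
# which exposes any remaining interactive authentication step:
for node in node1 node2 node3; do
    ssh -o BatchMode=yes "$node" true && echo "$node ok"
done
# Minimal 3-node Intel MPI launch, one rank per node:
mpirun -n 3 -ppn 1 -hosts node1,node2,node3 hostname
```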

    How to use the Intel® Cluster Checker v3 SDK on a cluster using multiple Linux Distributions

    Linux based HPC clusters can use different Linux distributions or different versions of a given Linux distribution for different types of nodes in the HPC cluster.

    When the Linux distribution on which the connector extension was built uses glibc version 2.14 or newer, and the Linux distribution where the connector extension is used (i.e., where clck-analyze is executed) uses a glibc version older than 2.14, clck-analyze cannot load the connector extension's shared library because of a missing symbol.
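To confirm this situation, the glibc versions on both sides can be compared; a diagnostic sketch (the library name in the comment is a placeholder):

```shell
# Print this node's glibc version; run on both the build node and the
# node where clck-analyze executes, then compare the two.
ldd --version | head -n 1

# To list the glibc symbol versions a built extension actually requires
# ("connector.so" is a placeholder for your extension's shared library):
#   objdump -T connector.so | grep -o 'GLIBC_[0-9.]*' | sort -u
```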

    clck-analyze then fails with a missing-symbol error when it tries to load the connector extension's shared library.

    Unable to launch MPI on Windows 2012

    I have been using the Intel MPI Library on a Windows 2008 R1 server for a few years without any issues. However, we recently switched to Windows Server 2012 R2, and now I can't launch MPI. I get the following error when I use this command line:

    mpiexec -n 1 -localroot  ./eDR_IMC.exe

    " Error while connecting to host, No connection could be made because the target machine actively refused it. (10061)
    Connect on sock (host=eDR-IMC2, port=8676) failed, exhaused all end points
    Unable to connect to 'eDR-IMC2:8676',
    sock error: Error = -1"
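For context, port 8676 is the default port of the smpd process manager service that older Intel MPI versions use on Windows; a connection refused on that port usually means the service is not installed or not running on the new server. A sketch (run from an elevated command prompt; exact service commands vary by Intel MPI version, so treat these as assumptions to verify against your installation's documentation):

```
rem Check whether the smpd service is reachable, and (re)install it:
smpd -status
smpd -install
rem Newer Intel MPI versions on Windows use the Hydra service instead:
hydra_service -install
```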

    MPI command question

    Dear all,

    I use MPI to run a program in a given working directory. The program works when I run the following command:

    mpiexec -wdir "Z:\test" -host 1 n01 1 z:\fem

    However, when running the following command:

    mpiexec -wdir "Z:\test" -n 1 z:\fem

    The program displayed the following error:

    forrtl: severe (29): file not found, unit 1, file C:\Windows\system32\parainfo\control.dat

    The file 'control.dat' is located at z:\test\parainfo, and the program automatically looks for 'control.dat' under the relative path 'parainfo'.
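What the error shows is a relative-path lookup resolving against the wrong working directory: the program opens 'parainfo\control.dat' relative to its current directory, and in the failing launch the rank's current directory is evidently C:\Windows\system32 rather than Z:\test. A small POSIX sketch of that resolution rule:

```shell
# Relative paths resolve against the process's current working directory:
mkdir -p /tmp/wdir_demo/parainfo
echo demo > /tmp/wdir_demo/parainfo/control.dat
cd /tmp/wdir_demo && cat parainfo/control.dat     # resolves, prints demo
cd / && cat parainfo/control.dat 2>/dev/null || \
    echo "not found: the working directory no longer contains parainfo/"
```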
