Intel® Cluster Ready

An Overview of the Storage Plugin for Intel® Enterprise Edition for Lustre* Software

In today’s data centers, big data places ever greater demands on storage-system performance at scale. The Lustre* file system was purpose-built to provide sustained, large-scale storage performance and stability for high-performance computing (HPC) systems. As commercial enterprises began to deploy HPC technologies, Intel created an enhanced version of Lustre known as Intel® Enterprise Edition for Lustre* software, which enables fully parallel I/O throughput across clients, servers, and storage devices.

    Problem on MPI: About Non-Blocking Collective Operations


    The structure of my code is,

             MPI_Allgatherv();  //Replaced by MPI_Iallgatherv();

    The collective operation in part 2 is the bottleneck of this program.

    Windows authentication via InfiniBand

    Hello everyone,
    I need your help with a Windows authentication problem.
    I have changed the authentication method to "delegation", but it still does not work, and a password is always required.
    Between the master nodes and the compute nodes there are two networks: Gigabit Ethernet (visible to the entire AD domain) and InfiniBand (visible only to the master and compute nodes). The scheduler sends all jobs over InfiniBand; could that affect the authentication method? If so, how can I work around this problem?


    Intel MPI/OpenMP hybrid programming on a cluster

    Hello, Admin!
    I'm using the Intel Cluster Studio toolkit and trying to run a hybrid (MPI + OpenMP) program on 25 compute nodes. I compile my program with -mt_mpi -openmp and set the environment variables I_MPI_DOMAIN=omp and OMP_NUM_THREADS=2, meaning every MPI process gets 2 OpenMP threads. The program runs without errors on up to 14 compute nodes, but beyond 14 compute nodes the error output is the following:

    Checkpointing MPI jobs with Intel MPI version

    I am trying to checkpoint with BLCR using the Intel MPI Library. I compiled the MPI source codes with the Intel mpicc compiler wrapper.

    While running, I used mpiexec.hydra -ckpoint on -ckpointlib blcr along with other options. The checkpoints do get written, but the application crashes with a segfault right after the first checkpoint (after writing a multi-gigabyte checkpoint context file to disk). The applications run to completion when launched without the checkpoint options.
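A typical hydra invocation with BLCR checkpointing enabled looks like the following. The rank count, prefix path, and interval are illustrative assumptions, not the poster's actual options:

```shell
# Illustrative checkpointed run; paths, count, and interval are assumptions.
mpiexec.hydra -n 32 \
  -ckpoint on -ckpointlib blcr \
  -ckpoint-prefix /scratch/ckpt \
  -ckpoint-interval 3600 \
  ./app
```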

    Run and debug an Intel MPI application on a cluster (Grid Engine)


    I have a problem debugging an Intel MPI application on a cluster.

    A common question: how do you debug a parallel application? The console debugger idbc is not convenient at all. Are there debuggers with a GUI, preferably free ones?

    I tried Eclipse: I can launch the program with an SGE script (below), but I cannot debug it the same way.

    The script for RUN.
    ############## RUN ##############
    #$ -S /bin/bash


    . /etc/profile.d/
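As a lightweight alternative to a GUI, Intel MPI's launcher can attach gdb to the ranks for an interactive session (run from a node allocation rather than through a batch SGE script). The rank count and binary name below are illustrative:

```shell
# Illustrative: attach gdb to all ranks via Intel MPI's launcher.
mpiexec.hydra -gdb -n 4 ./a.out
```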

    Ordering of images on different nodes using Coarray Fortran and Intel MPI


    I have a question about the ordering of images when the -coarray=distributed compiler option is used and the program is run on a cluster using the Intel MPI libraries.

    Assuming that the number of images equals the number of CPUs, are the images running on CPUs within the same node indexed by consecutive numbers?
