Intel® Cluster Ready

Problem on MPI: About Non-Blocking Collective operations

 

The structure of my code is:

// part 1
if (i > 1) {
    Compute1;
}
// part 2
if (i < m) {
    Compute2;
    MPI_Allgatherv();   // replaced by MPI_Iallgatherv()
}
// part 3
if (i > 0) {
    Compute3;
    MPI_Allreduce();
}
// part 4
if (i < m) {
    Compute4;
}

The collective operation in part 2 is the bottleneck of this program.
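
For reference, here is a minimal sketch (not the original code) of how the non-blocking variant is usually arranged: MPI_Iallgatherv is started in part 2 and completed with MPI_Wait only where its result is first needed, assumed here to be part 4, so that Compute3 and the MPI_Allreduce in part 3 can overlap with the gather. Buffer names, counts, and datatypes below are placeholders.

#include <mpi.h>

/* Sketch only: buffers, counts, displacements, and datatypes are placeholders. */
void one_iteration(double *sendbuf, int sendcount,
                   double *recvbuf, int *recvcounts, int *displs,
                   double *local, double *global,
                   int i, int m, MPI_Comm comm)
{
    MPI_Request req = MPI_REQUEST_NULL;

    /* part 1 */
    if (i > 1) { /* Compute1 */ }

    /* part 2: start the gather without blocking */
    if (i < m) {
        /* Compute2 */
        MPI_Iallgatherv(sendbuf, sendcount, MPI_DOUBLE,
                        recvbuf, recvcounts, displs, MPI_DOUBLE,
                        comm, &req);
    }

    /* part 3: does not read recvbuf, so it can overlap with the gather */
    if (i > 0) {
        /* Compute3 */
        MPI_Allreduce(local, global, 1, MPI_DOUBLE, MPI_SUM, comm);
    }

    /* part 4: complete the gather before recvbuf is read */
    if (i < m) {
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        /* Compute4 (assumed here to be where recvbuf is first used) */
    }
}

Whether any real overlap occurs also depends on the MPI implementation making progress on the outstanding request in the background.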

Windows authentication via InfiniBand

Hello everyone,
I need your help with a Windows authentication problem.
I have changed the authentication method to "delegation", but it still does not work and a password is always required.
Between the master nodes and the compute nodes there are two types of network: Gigabit LAN (seen by the whole AD domain) and InfiniBand (seen only by the master and compute nodes). The scheduler sends all jobs over InfiniBand; does that have an impact on the authentication method, and if so, how can I work around the problem?

 

Intel MPI/OpenMP hybrid programming on a cluster

Hello, Admin!
I am using the Intel Cluster Studio toolkit and trying to run a hybrid (MPI + OpenMP) program on 25 compute nodes. I compile the program with -mt_mpi -openmp and set the environment variables I_MPI_DOMAIN=omp and OMP_NUM_THREADS=2, which means every MPI process gets 2 OpenMP threads. The program runs without errors on up to 14 compute nodes, but beyond 14 compute nodes I get the following error output:
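
For context, a hybrid skeleton of the kind described above might look like the following minimal sketch (file and variable names are illustrative, not from the original program); it initializes MPI with MPI_THREAD_FUNNELED, which is sufficient when only the main thread of each rank makes MPI calls.

#include <mpi.h>
#include <omp.h>
#include <stdio.h>

/* Minimal hybrid MPI + OpenMP skeleton (illustrative only).
   Build with the thread-safe Intel MPI library, e.g.
       mpiicc -mt_mpi -openmp hybrid.c -o hybrid
   and run with OMP_NUM_THREADS=2 so every rank spawns two OpenMP threads. */
int main(int argc, char **argv)
{
    int provided, rank, nranks;

    /* FUNNELED is enough when only the main thread makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

#pragma omp parallel
    {
        printf("rank %d of %d, thread %d of %d\n",
               rank, nranks, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}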

Checkpointing MPI jobs with Intel MPI version 4.1.3.049

I am trying to run checkpointing with BLCR using the Intel MPI 4.1.3.049 library. I compiled the MPI source codes with the Intel mpicc wrapper.

While running, I used mpiexec.hydra -ckpoint on -ckpointlib blcr together with other options. The checkpoints do get written, but the application crashes with a segfault right after the first checkpoint (after having written a multi-gigabyte checkpoint context file to disk). The applications run perfectly to completion when I run them without the checkpoint options.

Running and debugging an Intel MPI application on a cluster (Grid Engine)

Hello,
 

I have a problem debugging an Intel MPI application on a cluster.

The general question: how do I debug a parallel application? The console debugger idbc is not convenient at all. Are there debuggers with a GUI, preferably free?

I tried using Eclipse; I can launch the program with the SGE script below, but I cannot debug it the same way.

The RUN script:
############## RUN ##############
#!/bin/bash
# Tell Grid Engine to run the job under bash.
#$ -S /bin/bash

# Change to the directory the job was submitted from.
cd $SGE_O_WORKDIR

# Initialize the environment-modules system.
. /etc/profile.d/modules.sh

Ordering of images on different nodes using Coarray Fortran and Intel MPI

Hello

I have a question about the ordering of images when the -coarray=distributed compiler option is used and the program is run on a cluster using the Intel MPI libraries.

Assuming that the number of images is the same as the number of CPUs, are the images running on CPUs within the same node indexed by consecutive numbers?
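
One way to answer this empirically (a hypothetical check, not from the original post): since -coarray=distributed runs the images on top of Intel MPI, printing each image's index together with its host name, e.g. this_image() plus the node name in Fortran, shows directly whether consecutive indices share a node. An equivalent check at the MPI level is sketched below.

#include <mpi.h>
#include <stdio.h>

/* Hypothetical placement check (not from the original post): each MPI rank
   reports the host it runs on.  The same kind of printout from inside the
   coarray program (this_image() plus the host name) shows whether images
   with consecutive indices land on the same node. */
int main(int argc, char **argv)
{
    int rank, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_processor_name(host, &len);

    printf("rank %d runs on %s\n", rank, host);

    MPI_Finalize();
    return 0;
}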

Problem with IntelIB-Basic, Intel MPI and I_MPI_FABRICS=tmi

Hi,

We have a small cluster (a head node plus 4 nodes with 16 cores) using Intel InfiniBand. The cluster runs CentOS 6.6 (with the CentOS 6.5 kernel).

Intel Parallel Studio XE 2015 is installed on this cluster, and I_MPI_FABRICS is set by default to "tmi" only.

When I start a job (using Torque + Maui) on several nodes, for example this one:
