Intel® Cluster Studio XE

Problem on MPI: About Non-Blocking Collective operations

 

The structure of my code is:

// part 1
if (i > 1) {
    Compute1;
}
// part 2
if (i < m) {
    Compute2;
    MPI_Allgatherv();   // replaced by MPI_Iallgatherv()
}
// part 3
if (i > 0) {
    Compute3;
    MPI_Allreduce();
}
// part 4
if (i < m) {
    Compute4;
}

The collective operation in part 2 is the bottleneck of this program.
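One way to hide that cost is to post the all-gather in part 2 and complete it only when its result is actually needed. Below is a minimal sketch, assuming MPI-3 non-blocking collectives and double-precision data; the buffers, counts, and Compute* placeholders are illustrative stand-ins, not the poster's actual code.

#include <mpi.h>

/* Sketch: start the part-2 all-gather, let part 3 run while it progresses,
 * and wait only before part 4 reads the gathered data. */
void iteration(double *sendbuf, int sendcount,
               double *recvbuf, const int *recvcounts, const int *displs,
               int i, int m)
{
    MPI_Request req = MPI_REQUEST_NULL;

    /* part 2: post the all-gather but do not block on it */
    if (i < m) {
        /* Compute2; */
        MPI_Iallgatherv(sendbuf, sendcount, MPI_DOUBLE,
                        recvbuf, recvcounts, displs, MPI_DOUBLE,
                        MPI_COMM_WORLD, &req);
    }

    /* part 3: independent work (and the all-reduce) can overlap with
     * the outstanding all-gather */
    if (i > 0) {
        /* Compute3; */
        /* MPI_Allreduce(...); */
    }

    /* part 4: the gathered data must be complete before it is read */
    if (i < m) {
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        /* Compute4; */
    }
}

Depending on the MPI implementation, the overlap may only materialize if progress is driven while part 3 runs, for example through occasional MPI_Test calls or an asynchronous progress thread.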

Improving MPI Communication between the Intel® Xeon® Host and Intel® Xeon Phi™

MPI Symmetric Mode is widely used in systems equipped with Intel® Xeon Phi™ coprocessors. In a system where one or more coprocessors are installed on an Intel® Xeon® host, Transmission Control Protocol (TCP) is used for MPI messages sent between the host and coprocessors or between coprocessors on that same host.  For some critical applications this MPI communication may not be fast enough.
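As an illustration of the kind of tuning involved, a symmetric-mode launch that moves host-coprocessor traffic off TCP onto the DAPL/SCIF path might look like the sketch below. The provider names, node names, rank counts, and binaries are assumptions for this example; the right values depend on the installed fabric and Intel MPI version.

# Illustrative symmetric-mode launch; node names, providers, and binaries are placeholders
export I_MPI_MIC=enable
export I_MPI_FABRICS=shm:dapl
export I_MPI_DAPL_PROVIDER_LIST=ofa-v2-mlx4_0-1u,ofa-v2-scif0
mpirun -host node0 -n 8 ./app.host : -host node0-mic0 -n 16 ./app.mic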

Windows authentication via InfiniBand

Hello everyone,
I need your help with a Windows authentication problem.
I have changed the authentication method to "delegation", but it still does not work, and a password is always required.
Between the master nodes and the compute nodes there are two types of networks: Gigabit LAN (seen by the whole AD domain) and InfiniBand (seen only by the master and compute nodes). The scheduler sends all jobs via InfiniBand. Does that have an impact on the authentication method? If so, how can I work around this problem?

 

Intel mpi/openmp hybrid programming on clustering!

Hello, Admin!
I'm now using the Intel Cluster Studio tool kit, and I'm trying to run a hybrid (MPI + OpenMP) program on 25 compute nodes. I compile my program with -mt_mpi -openmp and set the environment variables I_MPI_DOMAIN=omp and OMP_NUM_THREADS=2, which means every MPI process will have 2 OpenMP threads. I can run my program without errors on up to 14 compute nodes, but beyond 14 compute nodes I get the following error output.
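For reference, a minimal hybrid MPI + OpenMP skeleton of the kind this setup targets might look like the sketch below. It is an illustration, not the poster's program, and assumes the thread-safe MPI library selected by -mt_mpi.

#include <mpi.h>
#include <omp.h>
#include <stdio.h>

/* Minimal hybrid sketch: each MPI rank runs an OpenMP parallel region.
 * Build (illustrative): mpiicc -mt_mpi -openmp hybrid.c -o hybrid
 * Run   (illustrative): OMP_NUM_THREADS=2 mpirun -n <ranks> ./hybrid */
int main(int argc, char **argv)
{
    int provided, rank;

    /* Request FUNNELED: only the master thread of each rank makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    #pragma omp parallel
    {
        printf("rank %d, thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}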

Checkpointing MPI jobs with Intel MPI version 4.1.3.049

I am trying to run checkpointing with BLCR using the Intel MPI 4.1.3.049 library. I compiled the MPI source codes with the Intel mpicc compiler.

At run time I used mpiexec.hydra -ckpoint on -ckpointlib blcr along with other options. The checkpoints do get written, but the application crashes with a segfault right after the first checkpoint (after having written a multi-gigabyte checkpoint context file to disk). The applications run perfectly to completion when I run them without the checkpoint options.
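For comparison, a typical BLCR-enabled launch might look like the sketch below; the interval, prefix, rank count, and executable are illustrative placeholders, not the exact options used here.

# Illustrative checkpoint-enabled launch (paths and counts are placeholders)
mpiexec.hydra -ckpoint on -ckpointlib blcr \
              -ckpoint-interval 3600 -ckpoint-prefix /scratch/ckpt \
              -n 64 ./app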
