Intel® Cluster Studio XE

Can't install Parallel Studio XE 2015 Update1 on CentOS 6.6

Hi,

I'm trying to install (upgrade to) Parallel Studio XE 2015 Update 1, but the installer crashes with a segmentation fault. Do you know of any solutions?

The target system has a Xeon E5-2697 v2 and runs CentOS 6.6. Parallel Studio XE 2015 (initial release) is already installed without any errors.

The following text is the complete output of "./install.sh". Line 582 of that script contains only "fi".

MPI Library Runtime Environment 4.0

Hello,

I am working through a remote desktop on Cornell University servers, and I have no internet connection on that desktop. I am using Visual Studio 2008 with Intel Visual Fortran Composer XE 2011, which supposedly comes with MPI Library Runtime Environment 4.0 already installed.

I can't find the files msmpi.lib or impi.lib, or the include path. Nevertheless, I found the folder with other files such as mpichi2mpi.dll, impi.dll, impimt.dll, mpiexec.exe, wmpiexec.exe, etc. The package ID listed in the support file is w_mpi_rt_p_4.0.1.007.

How to run a hybrid MPI/OpenMP application

Hi, Dear Sir or Madam,

I am using Intel MPI, OpenMP, and Intel Composer XE 2015 to build a hybrid MPI/OpenMP application. For example, I want to run my application's executable on 3 SMP computers with 3 MPI processes, where each MPI process consists of 16 OpenMP threads. Our PC cluster has 3 SMP nodes connected by InfiniBand, and each node has 16 cores. How should I launch it?
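A minimal sketch of such a program, assuming the Intel MPI Hydra launcher and the mpiicc compiler wrapper; the program itself is a placeholder, not the poster's application:

/* hybrid.c -- minimal MPI/OpenMP hybrid sketch.
 * Build (assumption: Intel compiler wrapper):
 *   mpiicc -qopenmp hybrid.c -o hybrid
 * Launch one rank per node, 16 threads each, for the 3-node layout above
 * (assumption: Intel MPI Hydra flags):
 *   mpirun -n 3 -ppn 1 -genv OMP_NUM_THREADS 16 ./hybrid
 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank;

    /* Request MPI_THREAD_FUNNELED: only the master thread makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    #pragma omp parallel
    {
        printf("rank %d, thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}

With -ppn 1 each node gets exactly one rank; setting I_MPI_PIN_DOMAIN=omp additionally keeps each rank's 16 threads pinned within its own node.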

Error message: control_cb (./pm/pmiserv/pmiserv_cb.c:1151): assert (!closed) failed

Hello, I get the following error message when I run my Fortran code on an HPC cluster at my university:

[mpiexec@node0653] control_cb (./pm/pmiserv/pmiserv_cb.c:1151): assert
(!closed) failed

I have attached my code. It compiles successfully in debug mode without any errors. Besides, I have already removed the stack size limit on my machine by running "ulimit -s unlimited" on the command line.

Problem on MPI: About Non-Blocking Collective operations

The structure of my code is:

// part 1
if (i > 1) {
    Compute1;
}
// part 2
if (i < m) {
    Compute2;
    MPI_Allgatherv();   // replaced by MPI_Iallgatherv()
}
// part 3
if (i > 0) {
    Compute3;
    MPI_Allreduce();
}
// part 4
if (i < m) {
    Compute4;
}

The collective operations in part 2 are the bottleneck of this program.
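For reference, a minimal sketch of the non-blocking pattern, assuming the part-2 gather can overlap the independent work in part 3; the function name, buffer names, counts, displacements, and MPI_DOUBLE datatype are placeholders, not the poster's actual variables:

#include <mpi.h>

/* Sketch: start the gather in part 2, overlap it with independent work,
 * and wait only when the gathered data is actually needed. */
void step(double *sendbuf, int sendcount,
          double *recvbuf, const int *recvcounts, const int *displs,
          int i, int m, MPI_Comm comm)
{
    MPI_Request req = MPI_REQUEST_NULL;

    /* part 2: non-blocking all-gather replaces MPI_Allgatherv() */
    if (i < m) {
        /* Compute2; */
        MPI_Iallgatherv(sendbuf, sendcount, MPI_DOUBLE,
                        recvbuf, recvcounts, displs, MPI_DOUBLE,
                        comm, &req);
    }

    /* part 3: this work must not read recvbuf, or the overlap is invalid */
    if (i > 0) {
        /* Compute3; */
        /* MPI_Allreduce(...); */
    }

    /* Complete the gather before part 4 touches the gathered data;
     * waiting on MPI_REQUEST_NULL is a no-op when i >= m. */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    /* part 4 */
    if (i < m) {
        /* Compute4; */
    }
}

Note that non-blocking collectives require an MPI-3 library (Intel MPI 5.0 or later), and the overlap only pays off if Compute3 does not depend on the gathered results.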
