Intel® Cluster Ready

Tracing MPI calls with Intel MPI

Hi,

Is there a way to trace MPI calls with Intel MPI (meaning: each time an MPI function is called, I'd like to see the function name, its parameters, and the process that issued the call)?

There is a -trace option, but it seems tied to Trace Analyzer, which only appears to report the cumulative time spent in each function?

I've been reading the documentation for some time now, and this does not seem to be supported, which seems kind of strange...

 

Regards,
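
For reference, this is roughly how the trace option is typically invoked; it is only a minimal sketch, and the application name ./app, the rank count, and the exact name of the resulting trace file are assumptions to verify locally:

    # collect a trace of the run (requires Intel Trace Analyzer and Collector to be installed and sourced)
    mpirun -trace -n 4 ./app
    # open the resulting .stf trace file in the Trace Analyzer GUI
    traceanalyzer ./app.stf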

Can shared memory work between processes running with and without mpiexec?

I have a GUI.exe (with a GUI) and an engine.exe (without a GUI). I am using shared memory for inter-process communication between these two executables. Everything was working fine before I used mpiexec. After I added mpiexec for engine.exe, the two processes can no longer talk to each other through shared memory. It seems that the shared memory is "shielded" by mpiexec and cannot be shared with the outside world, since mpiexec itself also uses shared memory for communication in parallel computations.

By the way, both executables run on the same PC.

Is there any way to overcome this problem?

qdel not killing all processes started under Intel MPI

Hi, when we run with Intel MPI and the Hydra process manager (in a script submitted with qsub; this is with OGS/GE 2011.11p1 on ROCKS 6.1 on a small blade cluster), qdel does not fully kill the job except when the jobscript runs on the frontend. If the jobscript runs on a compute node, I have to kill the processes started by mpirun manually. This is not a problem with OpenMPI.

Any ideas or suggestions on how to proceed with troubleshooting this would be much appreciated.
Thanks,
Noah
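
One avenue to investigate, sketched below under the assumption that an SGE/OGS parallel environment is configured for the queue (the PE name, slot count, and binary name are placeholders), is asking Hydra to start remote processes through the scheduler's own mechanism so that qdel can track and kill them:

    #!/bin/bash
    #$ -pe mpi 16                      # placeholder parallel environment name and slot count
    export I_MPI_HYDRA_BOOTSTRAP=sge   # launch remote ranks via qrsh instead of ssh
    mpirun -n $NSLOTS ./a.out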

Using Intel® MPI Library 5.0 with MPICH-based applications

Why is it needed?

Different MPI implementations have their own specific benefits and advantages, so in a given cluster environment an HPC application may perform better with one MPI implementation than with another.

Intel® MPI Library has its own set of benefits.
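
The mechanism behind this is Intel MPI Library 5.0's binary (ABI) compatibility with MPICH-based libraries: an application built against a compatible MPICH can be pointed at the Intel MPI runtime instead of being recompiled. A minimal sketch, in which the installation path and the application name are placeholders:

    # make the Intel MPI 5.0 runtime the one resolved at load time
    source /opt/intel/impi/5.0/intel64/bin/mpivars.sh
    # launch the MPICH-built binary, unchanged, with Intel MPI's launcher
    mpirun -n 4 ./mpich_built_app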

    MPI Rank Binding

    Hello all,

    Intel MPI 4.1.3 on RHEL 6.4: I am trying to bind ranks in two simple fashions: (a) 2 ranks to the same processor socket, and (b) 2 ranks to different processor sockets.

    Looking at the Intel MPI Reference Manual (3.2 Process Pinning, pp. 98+), we should be able to use the following options with mpiexec.hydra when the hostfile points to the same host:

    -genv I_MPI_PIN 1  -genv I_MPI_PIN_PROCESSOR_LIST all:bunch
    -genv I_MPI_PIN 1  -genv I_MPI_PIN_PROCESSOR_LIST all:scatter
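
    In full command-line form, these might look as follows; the rank count, the machinefile contents, and the binary name ./a.out are placeholders:

        # (a) two ranks packed onto the same socket
        mpiexec.hydra -n 2 -machinefile ./hosts -genv I_MPI_PIN 1 -genv I_MPI_PIN_PROCESSOR_LIST all:bunch ./a.out
        # (b) two ranks spread across different sockets
        mpiexec.hydra -n 2 -machinefile ./hosts -genv I_MPI_PIN 1 -genv I_MPI_PIN_PROCESSOR_LIST all:scatter ./a.out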

     

    Using Intel MPI

    Hello everyone

    First, I have to provide some information:

    1- I have installed the latest version of Intel MPI.

    2- I have to use it through Ansys HFSS 15 x64, which is EM (electromagnetics) software.

    3- HFSS does not have any problem with discrete processes (for example, 15 parallel processes are shared correctly across 3 computers on the network).

    4- I need to use the memory of other computers on the network, so I need to distribute the RAM usage.

    5- The error I get every time is "authentication failed" or "unable to create child process in hosts" (or something like these).
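
    One step often suggested for errors like these on Windows installations is registering the account credentials with the Intel MPI process manager before launching. Whether this particular command is available depends on the Intel MPI version and the process manager HFSS drives, so treat it as an assumption to check against the local documentation; it prompts for an account and password and stores them for subsequent mpiexec launches:

        mpiexec -register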

    INTEL-MPI-5.0: -prepend-rank on the mpirun command line does not work

    Dear developers of Intel-MPI,

    I found that the helpful option -prepend-rank does not work when launching a parallelized Fortran code with mpirun under Intel MPI 5.0:

           mpirun -binding -prepend-rank -ordered-output -np 4 ./a.out

    The option has no effect with Intel MPI 5.0 (with Intel MPI 4.1 it worked): no rank numbers are prepended to the program's output lines on the display.
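
    For comparison, Hydra also exposes a short form of the same labeling behaviour; the sketch below simply swaps it into the command line reported above, and whether it behaves differently from -prepend-rank under Intel MPI 5.0 is exactly what would need to be verified:

        # -l prepends the MPI rank to each line of standard output
        mpirun -binding -l -ordered-output -np 4 ./a.out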
