Message Passing Interface

How to increase performance of MPI one-sided communication with Intel MPI?

Hello,

we have an application with basically two (final) sequences of actions in the domain decomposition:

one set of tasks (subset a) calls

call mpi_win_lock(some_rank_from_subset_b)
call mpi_get(some_rank_from_subset_b)
call mpi_win_unlock(some_rank_from_subset_b)

the other tasks (subset b) are already waiting in the MPI_Barrier at the end of the domain decomposition. This performs nicely (the domain decomposition completes within seconds) with MVAPICH on our new Intel Xeon machine and on another machine with IBM BlueGene/Q hardware.
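For context, a minimal self-contained Fortran sketch of the passive-target pattern described above; the buffer size, the even/odd rank split, and all variable names are illustrative assumptions, not taken from the actual application:

program onesided_sketch
  use mpi
  implicit none
  integer :: ierr, rank, nranks, win, target_rank
  integer(kind=MPI_ADDRESS_KIND) :: winsize, disp
  double precision :: local_buf(100), remote_copy(100)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nranks, ierr)

  ! every rank exposes a buffer in a window
  winsize = 100 * 8
  call MPI_Win_create(local_buf, winsize, 8, MPI_INFO_NULL, MPI_COMM_WORLD, win, ierr)

  if (mod(rank, 2) == 0 .and. nranks > 1) then
     ! "subset a": passive-target read from a rank in "subset b"
     target_rank = mod(rank + 1, nranks)
     disp = 0
     call MPI_Win_lock(MPI_LOCK_SHARED, target_rank, 0, win, ierr)
     call MPI_Get(remote_copy, 100, MPI_DOUBLE_PRECISION, target_rank, disp, &
                  100, MPI_DOUBLE_PRECISION, win, ierr)
     call MPI_Win_unlock(target_rank, win, ierr)
  end if

  ! "subset b" (and eventually everyone) waits here, as in the post
  call MPI_Barrier(MPI_COMM_WORLD, ierr)

  call MPI_Win_free(win, ierr)
  call MPI_Finalize(ierr)
end program onesided_sketch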

Intel MPI suddenly exited

Hello,

I installed Intel MPI on Windows 7 x64 and executed "> mpiexec -n 4 program.exe". It seemed to be running fine for about 30 hours and was using the expected resources. However, the process suddenly exited with about 10 hours remaining in the computation, with the following error stack:

"

[mpiexec@Simulation-PC] ..\hydra\pm\pmiserv_cb.c (773): connection to proxy 0 at host Simulation-PC failed

MPI_COMM_SPAWN crashing

I have two Fortran MPI programs (driver.f90 and hello.f90, both attached here).

driver.f90 contains a call to MPI_COMM_SPAWN which launches hello.x.

When I run it with the command "mpirun -np 2 ./driver.x", it crashes (output below this message). I noticed that spawned task 0 has a different parent communicator than the other tasks; I imagine that is the cause of the segmentation fault. It seems like a very simple MPI program, and both Open MPI and MPICH run it fine. Does anybody know what the problem might be?

I'm using impi/5.0.3.048 and ifort 15.0.3.
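For reference, a minimal sketch of the kind of spawn call involved; the attached driver.f90 is not reproduced here, so the number of spawned children and all other details below are illustrative assumptions (only the hello.x name comes from the post):

program driver_sketch
  use mpi
  implicit none
  integer :: ierr, intercomm
  integer :: errcodes(2)

  call MPI_Init(ierr)

  ! rank 0 of MPI_COMM_WORLD spawns 2 copies of the child executable
  call MPI_Comm_spawn('./hello.x', MPI_ARGV_NULL, 2, MPI_INFO_NULL, 0, &
                      MPI_COMM_WORLD, intercomm, errcodes, ierr)

  ! the children obtain the matching intercommunicator via MPI_Comm_get_parent

  call MPI_Finalize(ierr)
end program driver_sketch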

 

Thanks,

nested mpirun commands

Hi,

I have an MPI program that calls another MPI program (written by someone else) using a Fortran system call:

driver.f90

 

call MPI_INIT(ierr)
call system('mpirun -np 2 ./mpi_prog.x')
call MPI_FINALIZE(ierr)

 

When I run it (e.g. mpirun -np 4 ./driver.x), it crashes inside mpi_prog.x (at a barrier). When I build it with MPICH, it works fine though. Any hints on what might be wrong? (I realize nested mpiruns are completely outside the MPI standard and highly dependent on the implementation.)
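As an aside, the standard Fortran 2008 alternative to the non-standard system extension is the execute_command_line intrinsic; a minimal sketch of the driver using it (the command string is the same one as above, everything else is illustrative):

program driver_alt
  use mpi
  implicit none
  integer :: ierr, stat

  call MPI_Init(ierr)

  ! launch the inner mpirun through the Fortran 2008 intrinsic
  call execute_command_line('mpirun -np 2 ./mpi_prog.x', exitstat=stat)

  call MPI_Finalize(ierr)
end program driver_alt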

 

NOTE: when I do something like:

Intel® Parallel Studio XE 2015 Update 4 Cluster Edition Readme

The Intel® Parallel Studio XE 2015 Update 4 Cluster Edition for Windows* combines all Intel® Parallel Studio XE and Intel® Cluster Tools into a single package. This multi-component software toolkit contains the core libraries and tools to efficiently develop, optimize, run, and distribute parallel applications for clusters with Intel processors. This package is for cluster users who develop on and build for IA-32 and Intel® 64 architectures on Windows*.

Error while building NAS benchmarks using Intel MPI

I am trying to build NAS benchmarks using Intel MPI and below is the makefile that I am using.

     

#---------------------------------------------------------------------------
#
#                SITE- AND/OR PLATFORM-SPECIFIC DEFINITIONS.
#
#---------------------------------------------------------------------------

#---------------------------------------------------------------------------

GROMACS recipe for symmetric Intel® MPI using PME workloads

Objectives

This package (scripts with instructions) delivers a build and run environment for symmetric Intel® MPI runs; this file is the README of the package. "Symmetric" means employing a Xeon® executable and a Xeon Phi™ executable running together and exchanging MPI messages and collective data via Intel MPI.
