Intel® Clusters and HPC Technology

Source location in Trace Analyzer in applications statically linked with Intel® MPI Library

I want to analyze an application that is compiled and run with the following command lines:

$ mpiicc -static_mpi -trace -g myApp.c -o myApp
$ mpirun -n 4 ./myApp

Additionally, I record the locations of my function calls by setting the environment variable VT_PCTRACE.
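A minimal sketch of one way VT_PCTRACE is typically set for such a run (the stack depth of 5 is an assumption, not a value taken from the original post):

$ export VT_PCTRACE=5
$ mpirun -n 4 ./myApp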

cannot open source file "mpi.h"

 

Dear all

I am trying to run this makefile, but I am getting the error "catastrophic error: cannot open source file "mpi.h"" at the line #include <mpi.h>. I am sure I have a problem with the makefile, but my knowledge of Linux is limited. Thanks in advance for your help.

export HDF5_INC_DIR=$/usr/local/hdf5/include

export HDF5_LIB_DIR=$/usr/local/hdf5/lib

export NETCDF_INC_DIR=$/usr/local/netcdf4/include

export NETCDF_LIB_DIR=$/usr/local/netcdf4/lib /usr/local/netcdf4-fortran/lib

export $MPI_INC_DIR=$/opt/intel/impi/5.1.1.109/bin64
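For comparison, a hedged sketch of what these exports usually look like when the compiler can find mpi.h: the stray $ in front of each path and variable name is dropped, and MPI_INC_DIR points at an include directory rather than bin64 (the intel64/include layout below is an assumption about the installation):

export HDF5_INC_DIR=/usr/local/hdf5/include
export HDF5_LIB_DIR=/usr/local/hdf5/lib
export NETCDF_INC_DIR=/usr/local/netcdf4/include
export NETCDF_LIB_DIR="/usr/local/netcdf4/lib /usr/local/netcdf4-fortran/lib"
export MPI_INC_DIR=/opt/intel/impi/5.1.1.109/intel64/include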

Redistributable info, missing component

Hi,

I'm looking at including Intel MPI as part of our software package, so that end users do not have to install the MPI components on their systems.
We will of course include this in the third-party EULA of our software.

However:

  - The "redist.txt" file of the intel MPI folder list the files which are OK to include in our package.  But the file bin/pmi_proxy.exe seems missing from the list.  It is required to run local computations (-localonly)

Unusual requirement?

Dear All,

We are involved in setting up an HPC cluster with about 25 Dell PowerEdge 720 servers, each equipped with 172 GB of RAM and 24 Intel cores running at 2.4 GHz. Every node is connected to a Gigabit Ethernet switch and to a 56 Gbps Mellanox InfiniBand switch that provides storage access.

Intel MPI over Mellanox 10 Gb RDMA chipset

Hi

What is the best DAPL provider to use with Intel MPI 5.1 when running with a 10 Gb RDMA chipset from Mellanox?

Currently I'm using I_MPI_DAPL_PROVIDER=ofa-v2-scm-roe-mlx4_0-1

I added some other parameters to work around the resource limitations that we have on the chip:

export I_MPI_DAPL_RDMA_RNDV_WRITE=on

export I_MPI_RDMA_MAX_MSG_SIZE=1048576
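A minimal sketch of passing the same settings on the mpirun command line instead of exporting them (assuming Intel MPI's -genv option; the process count and binary name are placeholders):

$ mpirun -genv I_MPI_DAPL_PROVIDER ofa-v2-scm-roe-mlx4_0-1 \
         -genv I_MPI_DAPL_RDMA_RNDV_WRITE on \
         -genv I_MPI_RDMA_MAX_MSG_SIZE 1048576 \
         -n 16 ./myApp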

 

Thanks

Thierry

 

MPI_Abort fails with TMI fabric

I am finding that MPI_Abort is not killing processes on remote nodes when I_MPI_FABRICS=shm:tmi. The attached PBS job works correctly when the default fabric  (shm:dapl) is used, but fails to abort cleanly when shm:tmi is used. Any help with overcoming this problem would be much appreciated.
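For reference, a minimal sketch of the fabric selection described above, assuming the PBS job script sets I_MPI_FABRICS explicitly (the process count and binary name are placeholders):

# aborts cleanly: remote processes are killed after MPI_Abort
export I_MPI_FABRICS=shm:dapl
mpirun -n 32 ./myApp

# does not abort cleanly: remote processes are left running
export I_MPI_FABRICS=shm:tmi
mpirun -n 32 ./myApp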
 

How to run application without mpiexec?

Greetings forum,

I would like to manually start my application (ParaView's pvserver) on each node of a Windows cluster without using mpiexec. The reason is that the application needs to create visible windows so that it can drive a CAVE display system. mpiexec will not work for me because it uses smpd, which runs as a service, and since Vista, Windows services cannot interact with the user. (Alternatively, is there a way to run smpd.exe as a startup application instead of a service?)

A cluster of two virtual machines cannot be created; it says it cannot access "my other node name" using the ssh command

I want to create a cluster of two virtual machines. I have followed the file "parallel_studio_xe_2015_update3/doc/Install_Guide.htm#prerequisites". However, when I get to the eighth step in Prerequisites, this situation occurs:

Yes, cws02 is the master node and cws01 is the other node. It requires me to input my current virtual machine's key passphrase, but after I enter it, it stays there for a long time and never changes. I can't understand why.
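For reference, a minimal sketch of the usual passwordless-ssh setup between the two nodes (the node names cws01 and cws02 are from the post; the user name is a placeholder):

$ ssh-keygen -t rsa            # run on cws02; accept an empty passphrase
$ ssh-copy-id user@cws01       # copy the public key to the other node
$ ssh cws01 hostname           # should print the host name without prompting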
