Intel® Clusters and HPC Technology

IMB 4.0 bug report on the RMA side

Hi Sir/Madam,

When I was running the latest IMB 4.0, which supports MPI-3 RMA, I found an issue with the benchmark: it does not call MPI_Win_free to free the window before it exits. The following code is the only place MPI_Win_free is called inside IMB 4.0, and it is compiled only for IMB-EXT, not for IMB-RMA. It seems to be a bug in the benchmark. Could you give me some feedback on this?

#ifdef EXT
    if( c_info->WIN != MPI_WIN_NULL )
        MPI_Win_free(&c_info->WIN);
#endif /*EXT*/
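
If this is indeed a bug, a fix along these lines might work (a sketch only; it assumes the IMB-RMA target defines an RMA macro, the way the IMB-EXT target defines EXT):

#if defined(EXT) || defined(RMA)
    /* free the window for IMB-RMA as well, not just IMB-EXT */
    if( c_info->WIN != MPI_WIN_NULL )
        MPI_Win_free(&c_info->WIN);
#endif /* EXT || RMA */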

Ming

About MPI benchmark 4.0 tests

Dear Sir or Madam,

I am testing your MPI benchmark 4.0 on my computer, but the following errors occurred when building IMB-IO.EXE:

1  IntelliSense: expected an identifier  d:\dingjun\openmp_mpi_hybrid\imb-4.0-beta\imb\4.0.0\src\imb_benchmark.h  (line 127, col 2)  IMB-IO
2  IntelliSense: expected an identifier  d:\dingjun\openmp_mpi_hybrid\imb-4.0-beta\imb\4.0.0\src\imb_benchmark.h  (line 128, col 2)  IMB-IO

The corresponding source code follows:

#ifdef MPIIO
    {
       MPI_Offset Offset;

       switch(fpos)
       {
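
For what it's worth, an "expected an identifier" complaint at a declaration usually means the preprocessor substituted a macro where a plain name was expected. A contrived illustration of that error class (the names here are hypothetical, not from the actual IMB header):

/* Suppose some previously included header defines this macro: */
#define FPOS_CUR 0

typedef enum
{
   FPOS_CUR,   /* the macro expands to 0 here -> "expected an identifier" */
   FPOS_END
} POSITIONING;

/* A workaround is to #undef the clashing macro before the declaration. */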

Intel Trace Collection / Analysis including user functions and H/W counters

Hello,

We are trying to collect/analyze traces from an Intel MPI (4.1.1X) based application, and we would also like basic performance information collected for user functions and, if possible, to use H/W performance counters.

That is, alongside the metrics collected for the MPI APIs, we want statistics on the time spent in user functions and, if possible, H/W performance counter values to accompany the trace.

Can we also filter which user functions/regions have traces collected? The generated trace files become unwieldy otherwise.
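
To make the request concrete, the kind of instrumentation we have in mind looks roughly like this (a sketch assuming the libVT API shipped with the Intel Trace Collector; solver_step is a made-up user function, and the code would typically be built with the -trace option):

#include <mpi.h>
#include <VT.h>

static int solver_state;        /* handle for a user-defined region */

static void solver_step(void)
{
    VT_begin(solver_state);     /* time spent in here should appear in the trace */
    /* ... user computation ... */
    VT_end(solver_state);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    VT_funcdef("solver_step", VT_NOCLASS, &solver_state);
    solver_step();
    MPI_Finalize();
    return 0;
}

We understand a configuration file named through the VT_CONFIG environment variable can switch recording on and off per symbol, which may be the filtering mechanism we are after, but we have not found how to attach H/W counter values.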

thanks

Michael

Intel MPI Benchmarks Reported Numbers Clarifications

I have a simple question: does the latest Intel MPI Benchmarks release report bandwidth in powers of 10, i.e., megabytes/sec (10^6 bytes/sec), or in powers of 2, i.e., mebibytes/sec (2^20 bytes/sec)?

We need to know how close the attained MPI bit rates actually get to the physical link's bit rate.
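
The distinction matters at the rates we measure; here is a quick illustration of the gap between the two conventions (plain arithmetic, not tied to IMB internals):

#include <stdio.h>

int main(void)
{
    const double mb = 1000.0;   /* a figure reported as "1000 MBytes/sec" */
    printf("10^6 convention: %.3f Gbit/s\n", mb * 1.0e6     * 8.0 / 1.0e9);
    printf("2^20 convention: %.3f Gbit/s\n", mb * 1048576.0 * 8.0 / 1.0e9);
    return 0;                   /* prints 8.000 vs. 8.389, about 4.9% apart */
}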

Thanks

Michael

I_MPI_PERHOST variable not working with IntelMPI v4.1.0.030

Setting the I_MPI_PERHOST environment variable does not produce the expected behavior with codes built with Intel MPI v4.1.0.030, while codes built with Intel MPI v4.1.0.024 behave as expected. See below for a description of the problem. The system OS is Red Hat Linux v6.3.
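
For reference, the behavior we expect is the documented placement control (hostfile and binary names here are illustrative):

# With I_MPI_PERHOST=1, consecutive ranks should be placed round-robin,
# one per host, before wrapping around.
export I_MPI_PERHOST=1
mpirun -n 4 -f ./hostfile ./hello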



tmi

Hello everyone,

I am trying to use the TMI fabric with the Intel MPI Library, but when I run my application with dynamic process management using MPI_Comm_spawn, the application fails to run; if I run without any I_MPI_FABRICS setting, it works fine. Could someone please suggest what I might be doing wrong? The lines marked with "-->" below are debugging statements from my program.

//***************************

/opt/intel/impi/4.1.1.036/intel64/bin/mpirun -n 1 -perhost 1 -f ./mpd.hosts -env I_MPI_DEBUG 2 -env I_MPI_FABRICS shm:tmi ./parent
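
The parent boils down to something like this (a trimmed-down sketch, not the exact source; "./child" stands in for the spawned binary):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Comm child;

    MPI_Init(&argc, &argv);
    printf("--> parent: about to spawn\n");
    MPI_Comm_spawn("./child", MPI_ARGV_NULL, 1, MPI_INFO_NULL,
                   0, MPI_COMM_SELF, &child, MPI_ERRCODES_IGNORE);
    printf("--> parent: spawn returned\n");
    MPI_Comm_disconnect(&child);
    MPI_Finalize();
    return 0;
}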

MPI run crashes on more than one node

Hi everyone,

I'm using MPICH2 v1.5 to run my WRF model on Intel Xeon processors. I can run on one node with as many cores as I want, but as soon as the process count exceeds the cores of a single node, it crashes with the following error:

*********************************************************************************************************************************************
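
For reference, the multi-node launch is along these lines (hostfile name illustrative):

mpiexec -f ./hosts -n 16 ./wrf.exe    # fine up to one node's core count, crashes beyond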
