Parallel Computing

Problem in including third party headers

I am new to Xeon Phi and am experimenting with the offload programming mode in a simple main.cpp file. Eventually I want to use offload mode in connection with our astrodynamical routines (I work at the European Space Agency).

I get the following error

catastrophic error: *MIC* cannot open source file "pagmo/src/keplerian_toolbox/lambert.h"

The file lambert.h (which exists on the host but not on the MIC) is indeed included in main.cpp, but its contents are not used inside the offload directive.
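If it helps frame the question: since lambert.h is not needed in the offloaded code, one workaround I am considering (a sketch only) is to hide the include from the MIC-side compilation pass, using the __MIC__ macro the compiler predefines when targeting the coprocessor:

// Sketch: hide a host-only header from the coprocessor compilation pass.
// __MIC__ is predefined only when compiling for the MIC target.
#ifndef __MIC__
#include "pagmo/src/keplerian_toolbox/lambert.h"
#endif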

NFS over RDMA

Hi all,

 

Have you had any experience with mounting a directory over NFS using RDMA on the MIC? I see that NFSv4 supports this feature, and it could be very useful in some cases. If you know that it is not possible to use NFS over RDMA with the current MPSS, please let me know.
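For reference, on a standard Linux client such a mount would look something like the line below (a sketch only; the server, export path, and mount point are placeholders, and whether the MPSS NFS stack supports the rdma option at all is exactly my question):

mount -t nfs -o rdma,port=20049 server:/export /mnt/export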

 

Thanks,

Taras

compiler bug with #pragma pack ?

I am getting a segmentation fault in the following program, compiled with:

icc -mmic -O0 test.c

and executed natively on Xeon Phi. If the #pragma is removed, it runs fine. It appears that the compiler (14.0.1) obeys the #pragma but forgets about it later...

#pragma pack(1)

struct test
{
    char c;
    double d;
};

int main(void)
{
    struct test t;
    t.d = 0; // segmentation fault here
    return 0;
}
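A possible workaround sketch, assuming the fault really comes from an aligned store being emitted for the packed (misaligned) member, would be to go through memcpy, which has no alignment requirement:

#include <string.h>

#pragma pack(1)

struct test
{
    char c;
    double d;
};

int main(void)
{
    struct test t;
    double zero = 0.0;
    /* workaround sketch: byte-wise copy instead of a direct store,
       so no aligned store instruction can be generated for t.d */
    memcpy(&t.d, &zero, sizeof t.d);
    return 0;
}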

Tutorial quality feedback

I’m new to the whole MIC thing, and I just wanted to leave a comment regarding the quality of the LEO_tutorial I came across, since Intel doesn’t seem to have a public bug tracker (something every modern company should have). At any rate, in the tutorial I see code like this:

// Gather odd numbered values into O_vals and count in numOs
numOs = 0;
for (k = 0; k < MAXSZ; k++) {
    if ( all_Vals[k]%2 != 0 ) {
        O_vals[numOs] = all_Vals[k];
        numOs++;
    }
}

OpenCL and Bandwidth

I'm trying to get maximum/high memory bandwidth with a STREAM-like benchmark based on OpenCL. The best I am able to achieve seems to be about 35 GB/s. With the same benchmark on an Nvidia Titan and an AMD W9000 I get close to peak performance.

Has anybody implemented a STREAM-like benchmark for Intel MIC using OpenCL and seen good performance?
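For context, the kind of kernel I mean is essentially the classic triad (a sketch with illustrative names, not my exact code):

#pragma OPENCL EXTENSION cl_khr_fp64 : enable

// STREAM-style triad: one work-item per element
__kernel void triad(__global double *a,
                    __global const double *b,
                    __global const double *c,
                    const double scalar)
{
    const size_t i = get_global_id(0);
    a[i] = b[i] + scalar * c[i];
}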

Thanks, Sebastian

Cluster configuration with Intel Xeon Phi

Hi all,

We have been using Intel Xeon Phi to run MPI applications in symmetric mode (Phi card + host processor).

As we have two hosts with one Xeon Phi card each, we want to create a cluster to execute MPI applications using both cards and both hosts:

HOSTNAME   IP_HOSTNAME    MICNAME      IP_MIC
mlxm1      192.168.0.14   mlxm1-mic0   172.31.1.1
mlxm2      192.168.0.15   mlxm2-mic0   172.31.1.1
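To make the intent concrete, what we are aiming for is a single launch spanning all four targets, something along these lines (a sketch only; app.host/app.mic and the rank counts are placeholders, and the exact options depend on the Intel MPI version):

export I_MPI_MIC=enable
mpirun -host mlxm1 -n 4 ./app.host : -host mlxm1-mic0 -n 8 ./app.mic \
     : -host mlxm2 -n 4 ./app.host : -host mlxm2-mic0 -n 8 ./app.mic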

ICL emits warning #809 when using defaulted virtual destructors

The following code:

struct B
{
    B () = default; 
    virtual ~B () = default;
}; 
 
struct D : public B
{
    virtual ~D () = default;
};

generates this warning:

warning #809: exception specification for virtual function "D::~D" is incompatible with that of overridden function "B::~B"

when compiled with: icl /c /Qstd=c++11

I believe this warning is in error, since both defaulted destructors should have the same (implicitly noexcept) exception specification.
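A possible workaround sketch, if the warning cannot simply be suppressed, is to spell the exception specification out explicitly, which should be equivalent to the defaulted form:

struct B
{
    B () = default;
    virtual ~B () = default;
};

struct D : public B
{
    // sketch: stating noexcept explicitly, which the defaulted destructor
    // is implicitly anyway, may satisfy the compiler's check
    virtual ~D () noexcept = default;
};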

IMB 4.0 bug report with RMA side

Hi Sir/Madam,

When I was running the latest IMB 4.0, which supports MPI-3 RMA, I found an issue with the benchmark: it does not call MPI_Win_free to free the window when it exits. The following code is where MPI_Win_free is called inside IMB 4.0; it is called only for IMB-EXT, not for IMB-RMA. It seems to be a bug in the benchmark. Could you give me some feedback about this?

 

#ifdef EXT

    if( c_info->WIN != MPI_WIN_NULL )
        MPI_Win_free(&c_info->WIN);

#endif /*EXT*/
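For what it's worth, the kind of change I would expect (a sketch only; I am assuming the IMB-RMA build defines an RMA macro the same way IMB-EXT defines EXT) is to extend the guard so the window is also freed in the RMA case:

#if defined(EXT) || defined(RMA)

    if( c_info->WIN != MPI_WIN_NULL )
        MPI_Win_free(&c_info->WIN);

#endif /* EXT || RMA */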

Ming
