Problem including third-party headers

I am new to Xeon Phi and experimenting with the offload programming model in a simple main.cpp file. Eventually I want to use offload mode with our astrodynamical routines (I work at the European Space Agency).

I get the following error

catastrophic error: *MIC* cannot open source file "pagmo/src/keplerian_toolbox/lambert.h"

The file lambert.h (which exists on the host but not on the MIC) is indeed included in main.cpp, but its contents are not used inside the offload directive.


Hi all,


Do you have any experience with mounting a directory over NFS using RDMA on the MIC? I see that NFSv4 supports this feature, and it could be very useful in some cases. If you know that it is not possible to use NFS over RDMA with the current MPSS, please let me know.
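For reference, on a mainline Linux NFS client the RDMA transport is requested with the `rdma` mount option (20049 is the registered port for NFS/RDMA); whether the MPSS kernel on the card ships the client-side `xprtrdma` module is exactly the open question here. A sketch, assuming a server `host` exporting `/export`:

```shell
# On the coprocessor (assumes the xprtrdma client module exists there):
modprobe xprtrdma
mount -t nfs -o rdma,port=20049,vers=4 host:/export /mnt/export
```

If `modprobe` fails, the MPSS kernel was built without the RDMA transport and this cannot work without rebuilding the card's kernel modules.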




compiler bug with #pragma pack ?

I am getting a segmentation fault in the following program, compiled with:

icc -mmic -O0 test.c

and executed natively on Xeon Phi. If the #pragma is removed it runs fine. It appears that the compiler (14.0.1) obeys the #pragma but forgets about it later...

#pragma pack(1)

struct test {
    char c;
    double d;
};

int main(void)
{
    struct test t;
    t.d = 0; // segmentation fault here
    return 0;
}

Tutorial quality feedback

I’m new to the whole MIC thing, and I just wanted to leave a comment about the quality of the LEO_tutorial I came across, since Intel doesn’t seem to have a public bug tracker (something every modern company should have). At any rate, in the tutorial I see code like this:

// Gather odd numbered values into O_vals and count in numOs
numOs = 0;
for (k = 0; k < MAXSZ; k++) {
    if (all_Vals[k] % 2 != 0) {
        O_vals[numOs] = all_Vals[k];
        numOs++;
    }
}

OpenCL and Bandwidth

I'm trying to get maximum/high memory bandwidth with a STREAM-like benchmark based on OpenCL. The maximum performance I am able to achieve is about 35 GB/s. With the same benchmark on an Nvidia Titan and an AMD W9000 I get close to peak performance.

Has anybody implemented a STREAM-like benchmark for Intel MIC using OpenCL and seen good performance?

Thanks, Sebastian

Cluster configuration with Intel Xeon Phi

Hi all

We have been using Intel Xeon Phi to run MPI applications in symmetric mode (phi card + processor host).

As we have two hosts with a Xeon phi card each, we want to create a cluster to execute MPI applications using both cards and both hosts:

mlxm1 mlxm1-mic0
mlxm2 mlxm2-mic0
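Assuming Intel MPI, a symmetric-mode launch across both nodes can be sketched with its machinefile mechanism (process counts here are illustrative; `I_MPI_MIC_POSTFIX` assumes the MIC binary is named `app.mic` next to the host `app`, and that both cards are reachable by the names above):

```shell
# Enable MIC support in Intel MPI and tell it how to find the MIC binary
export I_MPI_MIC=enable
export I_MPI_MIC_POSTFIX=.mic

# One line per endpoint, with the process count after the colon
cat > machinefile <<EOF
mlxm1:4
mlxm1-mic0:8
mlxm2:4
mlxm2-mic0:8
EOF

mpirun -machinefile machinefile ./app
```

Passwordless ssh to all four names and an NFS-shared (or pre-copied) binary on the cards are prerequisites for this to work.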

How to configure the Performance Monitoring Units (PMUs)


Recently I have run into the problem of how to configure the PMUs on Xeon Phi.

According to the document named "Intel® Xeon Phi™ Coprocessor (codename: Knights Corner) Performance Monitoring Units", configuring the PMUs on Xeon Phi requires a configuration tool with "Ring 0" (kernel) access. VTune Amplifier is able to use these PMUs. However, since we want fine-grained control over the code we profile, instead of using VTune directly we would like to collect the PMU data with our own code, which only has "Ring 3" access.

Compile MPSS on SL 6.4 with realtime kernel - 3.8.13-rt14.25

Hello, I would like to compile MPSS on Scientific Linux 6.4 with the MRG realtime kernel (3.8.13-rt14.25.el6rt.x86_64). I tried to follow the instructions in section 8.1 of the user guide. However, already the command

rpmbuild --rebuild mpss-modules-*.el6.src.rpm

fails, since there isn't any rpm in the src folder. I would assume that the instructions presume that MPSS is already installed.
