What collateral/documentation do you want to see?

Do you have questions you are not finding answers to in our documentation? Do you need more training or source code examples, and on what topics specifically? Help us understand what's missing so that we can develop the documentation you care about (what is essential, and what is merely nice to have). Thank you!

FAQs: Compilers, Libraries, Performance, Profiling, and Optimization

In the period prior to the launch of the Intel® Xeon Phi™ coprocessor, Intel collected questions from developers who had been involved in pilot testing. This document contains some of the most common questions asked. Additional information and best-known methods for the Intel Xeon Phi coprocessor are available in Intel's developer documentation.

The Intel® Compiler reference guides, along with links to instruction set documentation, can be found on the Intel Developer Zone.

Intel® Memory Protection Extensions Enabling Guide

Abstract: This document describes Intel® Memory Protection Extensions (Intel® MPX), its motivation, and its programming model. It also describes the enabling requirements and the current status of enabling in the supported operating systems (Linux* and Windows*) and compilers (Intel® C++ Compiler, GCC, and Visual Studio*). Finally, the paper describes how ISVs can incrementally enable bounds checking in their Intel MPX applications.


  • Developers
  • Linux*
  • Microsoft Windows* 8.x
  • Server
  • C/C++
  • Intermediate
  • Intel® C++ Compiler
  • Intel® Memory Protection Extensions
  • Intel® MPX

    MPI-related error in offload

    Hi All,

    I am trying to offload a do loop onto the Intel MIC.

    For every function called inside the loop, I added this at its definition:

    !dir$ attributes offload:mic :: <function name>

    The functions turned out to be nested, i.e. each function makes multiple function calls within it.

    Continuing this way, I have reached a point where I get an error and I have no idea how to solve it.

    Compiling CESM for Intel MIC in offload mode

    Hi All,

    I am trying to run CESM (the Community Earth System Model, written in Fortran) with offload sections for the Intel Xeon Phi. I am doing this for an OpenMP loop in a section called the baroclinic function. While doing this, I get the following error when compiling the offload code:

    A procedure called by a procedure with the OFFLOAD:TARGET attribute must have the OFFLOAD:TARGET attribute.

    Why does GotoBLAS have such low efficiency (what is wrong with my steps)?

     I use GotoBLAS and MPICH to run HPL on the cluster (the CPU is an Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz). I compile GotoBLAS in two ways: (1) make, and (2) make USE_THREAD=0 TARGET=NEHALEM. The library used in the HPL makefile is libgoto.a. However, both ways of compiling GotoBLAS lead to low HPL efficiency: only 150 GFlops (the theoretical peak is 330 GFlops). Have I made a mistake in compiling GotoBLAS? Thanks for your answers.

    Establish XDB connection on non-shared JTAG design

    The 60 Pin Debug Port (XDP) Specification Document (DPS) specifies the open-chassis platform requirements for implementing a 60-pin XDP connector, for use with PHG XDP debug tools and third-party vendor tools that support the 60-pin XDP interface.

    Since the Skylake platform, internal boards that will be used as references for external customers are required to implement one merged 60-pin XDP connector.

  • Developers
  • Partners
  • Linux*
  • Microsoft Windows* (XP, Vista, 7)
  • Microsoft Windows* 10
  • Microsoft Windows* 8.x
  • Server
  • Windows*
  • Experts
  • xdb
  • Debugging
  • Microsoft Windows* 8 Desktop

    Problem when running the Intel optimized MP LINPACK Benchmark

    Hi, I'm running the Intel optimized MP LINPACK Benchmark on one node with two Intel Xeon Phi coprocessors. I use "make arch=intel64 version=offload" to compile the code, and in bin/intel64 I run "./xhpl", which works without problems. Then I change P,Q from 1,1 to 1,16 because I have 16 cores on the node, and I change N from 1000 to 30000. But when I run xhpl with "mpirun -np 16 ./xhpl", it stops with this error:

    "Error in scif_send 104"

