Server

Building PAPI 5.3.2 and libpfm4 as native Phi libraries

Environment: 

RHEL 6u4 nodes with Intel Phi adapters. MPSS 3.2.3 installed on the hosts.

 

Has anyone built PAPI 5.3.2 to run on Intel Phis with MPSS 3.2.3? Are the steps documented in more detail anywhere besides the README?

I have tried this configure command line:

 

$ PATH="/usr/linux-k1om-4.7/bin":$PATH

$ ./configure --with-mic --host=x86_64-k1om-linux --with-arch=k1om \

    --with-ffsll --with-walltimer=cycle --with-tls=__thread  \

    --with-virtualtimer=clock_thread_cputime_id \
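Once configure and make succeed with the k1om toolchain, a minimal sanity check along these lines (a sketch, not from the original post; the file name, include/lib paths, and compile line are assumptions) can confirm that the native library initializes on the card:

/* papi_check.c -- hypothetical sanity check. Cross-compile with the k1om gcc
 * and link against the freshly built native libpapi, for example:
 *   x86_64-k1om-linux-gcc papi_check.c -I<papi>/include -L<papi>/lib -lpapi -o papi_check
 * then copy the binary to the card and run it there.
 */
#include <stdio.h>
#include <papi.h>

int main(void)
{
    /* PAPI_library_init returns the library version on success. */
    int ret = PAPI_library_init(PAPI_VER_CURRENT);
    if (ret != PAPI_VER_CURRENT) {
        fprintf(stderr, "PAPI_library_init failed: %d\n", ret);
        return 1;
    }
    printf("PAPI %d.%d.%d initialized on this card\n",
           PAPI_VERSION_MAJOR(ret),
           PAPI_VERSION_MINOR(ret),
           PAPI_VERSION_REVISION(ret));
    return 0;
}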

gsl library optimization error

Hi all,

As a follow-up to my previous post about the gsl library core-dumping on the Xeon Phi, there is good news and bad news.

The good news: with icc v15 the core dumps are gone.

The bad news: there are other vectorization errors that seem to occur only when -mmic is used.

Consider the following program (distilled from the gsl-1.16 source code):
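(The distilled test case itself is not reproduced here. Purely as an invented illustration, not the poster's actual code, the strided reduction pattern below is the kind of gsl-1.16-style loop the -mmic vectorizer operates on; the function name, data, and bounds are made up.)

/* hypothetical_gsl_style.c -- invented illustration, not the original test case.
 * A stride-aware reduction in the style of gsl-1.16 vector routines.
 */
#include <stdio.h>
#include <stddef.h>

static double sum_with_stride(const double *data, size_t n, size_t stride)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++) {
        sum += data[i * stride];   /* non-unit-stride access */
    }
    return sum;
}

int main(void)
{
    double v[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    printf("%g\n", sum_with_stride(v, 4, 2));  /* sums v[0]+v[2]+v[4]+v[6] = 16 */
    return 0;
}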

Mixing different icpc versions on MIC

Hello,

I am porting part of a code (some libraries) to the Xeon Phi architecture. I have some questions concerning compatibility between different icpc versions.

Maybe this question should have been posted to the icpc forum?

To reach maximum performance, we plan to use icpc v15.0.0.90 for the Xeon Phi parts of the code.

Finally, some of the libraries used by the code will come compiled with icpc v13.0.1 and others (the MIC ones) with icpc v15.0.0.90.

 

My questions are:

How to compile for the Phi from a remote host?

Greetings,

My Phi arrived earlier this week. So far things have gone well, and I have been able to answer all my questions from the documentation and forums. However, I have hit a bit of a snag, and my searches are not turning up anything useful (just a lot of things that don't work).

Improve Intel MKL Performance for Small Problems: The Use of MKL_DIRECT_CALL

One of the big new features introduced in Intel MKL 11.2 is greatly improved performance for small problem sizes. In 11.2, this improvement focuses on the xGEMM functions (matrix multiplication). Out of the box there is already a version-to-version improvement (from Intel MKL 11.1 to Intel MKL 11.2), but on top of that, Intel MKL introduces a new control that can lead to a further significant performance boost for small matrices. Users enable this control by specifying "-DMKL_DIRECT_CALL" or "-DMKL_DIRECT_CALL_SEQ" when compiling code that calls Intel MKL.
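As a rough illustration (not part of the original article; the matrix sizes, file name, and compile line are assumptions), a small matrix multiplication built with that define could look like this:

/* small_dgemm.c -- hypothetical sketch of enabling MKL_DIRECT_CALL.
 * Example compile line, assuming the Intel compiler and MKL are installed:
 *   icc -DMKL_DIRECT_CALL -mkl small_dgemm.c -o small_dgemm
 * With the macro defined before mkl.h is included, small GEMM calls can
 * bypass some of the usual dispatch overhead.
 */
#include <stdio.h>
#include <mkl.h>

int main(void)
{
    /* 3x3 matrices stored row-major: C = 1.0*A*B + 0.0*C */
    double A[9] = {1, 2, 3, 4, 5, 6, 7, 8, 9};
    double B[9] = {9, 8, 7, 6, 5, 4, 3, 2, 1};
    double C[9] = {0};

    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                3, 3, 3, 1.0, A, 3, B, 3, 0.0, C, 3);

    printf("C[0][0] = %g\n", C[0]);   /* 1*9 + 2*6 + 3*3 = 30 */
    return 0;
}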
