Parallel Computing

userdel bug?

Hi,

In MPSS 1-2.1.4982-15-rhel-6.3, I added two users named "hpc" and "hpc2" using micctrl as:

/usr/sbin/micctrl --useradd=hpc mic0
/usr/sbin/micctrl --useradd=hpc2 mic0

Then I deleted only "hpc" as:

/usr/sbin/micctrl --userdel=hpc mic0

After this, I got two lines of output:

mic0: User hpc removed
mic0: User hpc removed

and both "hpc" and "hpc2" were removed!

Data Parallel in Phi for CUDA programmers

Can someone please clarify what data-parallel programming means in the context of a Phi? The little literature I’ve seen makes much of using pragmas to launch code onto one of the 240 available threads, which sounds like task parallelism. But with my blinkered CUDA mindset, I wonder how to do pure data parallelism: manipulating data so the work is spread across a large number of available threads. Is the Phi meant to do the same thing, but across those 240 threads?
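
For concreteness, here is the sort of thing I imagine it would look like, a minimal sketch assuming the Intel compiler's offload pragma together with OpenMP (the arrays a, b, c and the length n are purely illustrative):

// Elementwise vector add: CUDA-style data parallelism expressed with pragmas.
// The loop is offloaded to the coprocessor, and OpenMP divides its
// iterations among the coprocessor's threads.
void vec_add(const float *a, const float *b, float *c, int n)
{
    #pragma offload target(mic) in(a, b : length(n)) out(c : length(n))
    #pragma omp parallel for
    for (int i = 0; i < n; ++i)
        c[i] = a[i] + b[i];
}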

Call stacks: "Result directory does not contain data applicable to this report."

I'm using the command-line interface of VTune Amplifier XE from Parallel Studio XE 2013 under Windows 7 x86_64 with a Xeon W3530 CPU. The problem is that I cannot get reports with call stacks when hardware events are used.

amplxe-cl -version
Intel(R) VTune(TM) Amplifier XE 2013 Update 4 (build 270817) Command Line Tool
Copyright (C) 2009-2013 Intel Corporation. All rights reserved.

When I try to generate such a report, it complains of missing data ("Result directory does not contain data applicable to this report.") and I get no report.

Are shared libraries with offload code "fat" libraries?

When compiling a C++ program with a section of code under "#pragma offload target(mic) {...some code...}", it appears that the compiled code for "some code" is automatically transferred to the co-processor and run there. This is what I find in the documentation, and it works as expected. Apparently, the resulting binary contains the code for both the host and the co-processor, making it a "fat binary" (http://en.wikipedia.org/wiki/Fat_binary).
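
For reference, this is the kind of construct I mean, a minimal sketch only (the function and variable names are made up), compiled into a shared library with the Intel compiler:

// The offload block below is compiled twice, once for the host and once for
// the coprocessor, so the resulting binary or shared library carries both
// versions of the code.
void scale(double *x, int n, double s)
{
    #pragma offload target(mic) inout(x : length(n))
    {
        for (int i = 0; i < n; ++i)
            x[i] *= s;
    }
}

What I want to know is whether the same "fat" layout applies when the offload code lives in a shared library rather than in the main executable.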

Introduction to the Intel MKL Extended Eigensolver

Intel® MKL 11.0 Update 2 introduced a new component called the Extended Eigensolver routines. These routines solve standard and generalized eigenvalue problems for symmetric/Hermitian and symmetric/Hermitian positive definite sparse matrices. Specifically, these routines compute all the eigenvalues λ and the corresponding eigenvectors x within a given search interval [λmin, λmax], for the standard problem Ax = λx or the generalized problem Ax = λBx, where A is symmetric/Hermitian and B is symmetric/Hermitian positive definite.
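
As a quick illustration (not the article's own example), here is a minimal C sketch that assumes MKL 11.0 Update 2 or later and uses feastinit and dfeast_scsrev to compute the eigenvalues of a small symmetric tridiagonal matrix; the matrix and the search interval [0, 4] are purely illustrative:

/* Compute all eigenvalues of a 4x4 symmetric tridiagonal matrix in [0, 4]. */
#include <stdio.h>
#include "mkl.h"

int main(void)
{
    /* Upper triangle of tridiag(-1, 2, -1) in 1-based CSR format, as the
       Extended Eigensolver routines expect. */
    const MKL_INT n = 4;
    double  a[]  = { 2.0, -1.0, 2.0, -1.0, 2.0, -1.0, 2.0 };
    MKL_INT ia[] = { 1, 3, 5, 7, 8 };
    MKL_INT ja[] = { 1, 2, 2, 3, 3, 4, 4 };

    MKL_INT fpm[128];
    feastinit(fpm);                  /* default Extended Eigensolver parameters */

    double  emin = 0.0, emax = 4.0;  /* search interval [lambda_min, lambda_max] */
    MKL_INT m0 = n;                  /* upper bound on the number of eigenvalues */
    double  e[4], x[16], res[4], epsout;
    MKL_INT loop, m, info;

    dfeast_scsrev("U", &n, a, ia, ja, fpm, &epsout, &loop,
                  &emin, &emax, &m0, e, x, &m, res, &info);

    if (info != 0) {
        printf("dfeast_scsrev failed, info = %d\n", (int)info);
        return 1;
    }
    printf("Found %d eigenvalues in [%g, %g]\n", (int)m, emin, emax);
    for (MKL_INT i = 0; i < m; ++i)
        printf("  lambda = %f\n", e[i]);
    return 0;
}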
