Optimization

VTune installation problem

Hi, when I'm installing VTune 2013 update 17, at the last step an error log appeared, shown below:

Warning:  no sep3_15 driver was found loaded in the kernel.
Checking for PMU arbitration service (PAX) ... not detected.
Attempting to start PAX service ...
Executing: insmod ./pax/pax-x32_64-3.13.0-32-genericsmp.ko
insmod: error inserting './pax/pax-x32_64-3.13.0-32-genericsmp.ko': -1 Unknown symbol in module

Error:  pax driver failed to load!
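An "Unknown symbol in module" failure from insmod usually means the prebuilt driver was built against a different kernel than the one currently running. A quick diagnostic sketch (the module filename is taken from the log above; the rebuild path is an assumption — check your actual VTune install directory):

```shell
# Compare the running kernel against the kernel the PAX module targets.
# The module filename encodes its build kernel: pax-x32_64-3.13.0-32-genericsmp.ko
uname -r

# After a failed insmod, the kernel log names the unresolved symbols,
# which confirms a kernel/module version mismatch.
dmesg | tail -n 20

# If the kernels differ, rebuild the sampling drivers from the sources
# shipped with VTune instead of loading the prebuilt module (path below
# is an assumption; adjust to your install):
#   cd /opt/intel/vtune_amplifier/sepdk/src && ./build-driver
```

If `uname -r` does not match the version embedded in the module name, rebuilding the driver for the running kernel is the usual next step.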

amplxe-runss.py looking for 32-bit collector; should use 64-bit instead

My host system is 64-bit and my target system (Xeon Phi) is also 64-bit. When I run

export AMPLXE_TARGET_PRODUCT_DIR="/amplxe"
amplxe-runss.py [...]

It gives me back "/amplxe/bin32/amplxe-runss: No such file or directory". This is expected, since /amplxe/ contains only bin64 (plus lib64 and message).
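One possible workaround, assuming the script merely hard-codes the bin32 path and the 64-bit collector is otherwise usable on the target, is to make the expected bin32 path resolve to the existing bin64 directory with a symlink. A sketch on a mock install tree (on the real target the root would be /amplxe from the post; verify the 64-bit collector actually works before relying on this):

```shell
# Demonstrate the symlink workaround on a mock install tree.
# On the real target: root=/amplxe (requires write access there).
root=$(mktemp -d)
mkdir "$root/bin64"
touch "$root/bin64/amplxe-runss"

# The workaround: the hard-coded bin32 path now resolves to bin64.
ln -s bin64 "$root/bin32"

# The 32-bit path the script looks for exists:
ls "$root/bin32/amplxe-runss"
```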

Starting from Scratch: How VirtualDJ* 8 mixes Music and Technology

Abstract

Atomix Productions' video and music mixer, VirtualDJ* 8, was rewritten from scratch with an emphasis on a flexible user interface. These enhancements include multi-point touch support, use of a second screen for videos, and a transforming interface for different device form factors.

  • Developers
  • Partners
  • Microsoft Windows* (XP, Vista, 7)
  • Microsoft Windows* 8
  • UX
  • Windows*
  • VirtualDJ
  • touch
  • Dual Screen
  • 2-in-1
  • video
  • audio
  • editing
  • Graphics
  • Media Processing
  • Mobility
  • Optimization
  • Touch Interfaces
  • User Experience and Design
  • AVX2 intrinsic _mm256_blendv_epi8 documentation lists incorrect type for mask parameter

    In the User and Reference Guide for the Intel® C++ Compiler 15.0, the prototype for _mm256_blendv_epi8 is listed as having a const int mask argument:

    https://software.intel.com/en-us/node/523908

    extern __m256i _mm256_blendv_epi8(__m256i s1, __m256i s2, const int mask);

    However, the prototype in immintrin.h lists the mask parameter as being of type __m256i:

    extern __m256i _mm256_blendv_epi8(__m256i s1, __m256i s2, __m256i mask);

    Performance issue of scalable_realloc() vs. glibc realloc()

    Hi,

    I have an application that initially allocates a 4MB block of memory and linearly grows this block using 4MB steps. glibc's realloc() handles this pattern quite well. Linking against tbbmalloc_proxy (or replacing glibc realloc() with scalable_realloc()) degrades performance very badly.
