A Mission-Critical Big Data Platform for the Real-Time Enterprise

As the volume and velocity of enterprise data continue to grow, extracting high-value insight is becoming more challenging and more important. Businesses that can analyze fresh operational data instantly—without the delays of traditional data warehouses and data marts—can make the right decisions faster to deliver better outcomes.
  • Developers
  • Server
  • Java*
  • JVM
  • Intel® Xeon® processor E7 v3
  • NRI
  • Intel® AVX2
  • OLTP
  • NUMA
  • Intel® Advanced Vector Extensions
  • Big data
  • Vectorization
    Free Parallel Programming Training

     Intel is offering FREE online training, in collaboration with Colfax.

    The upcoming training starts September 9th and includes free 3-week remote access to an Intel® Xeon® and Intel® Xeon Phi™ server.

    Another training series will start on October 13th.

    For more details and registration, please check:

    You can also check the training page for more training options:

    Xeon Phi and offload from MATLAB MEX file


    I am having a really hard time figuring out how to use Xeon Phi offload mode from within MATLAB MEX files under Linux. I have managed to force MATLAB to use icc for compilation and verified that the MEX files run fine. The problems start when I use the offload pragma: as far as I can tell, nobody has tried this yet, and I suspect it is some (fixable?) library issue. Can someone here help me with this?

    Consider the following simple code:
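    The poster's code listing did not survive in this copy of the thread. A minimal, self-contained sketch of the kind of offload region in question might look like the following. It is plain C rather than an actual MEX file (no mex.h), so it compiles anywhere; compilers without offload support simply ignore the pragma and run the loop on the host.

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    /* In a real MEX file this computation would sit inside mexFunction()
       and the arrays would come from the mxArray inputs; plain malloc'd
       arrays stand in here so the sketch is self-contained. */
    int main(void)
    {
        const int n = 1000;
        double *a = malloc(n * sizeof(double));
        double *b = malloc(n * sizeof(double));
        double *c = malloc(n * sizeof(double));
        for (int i = 0; i < n; i++) { a[i] = i; b[i] = 2.0 * i; }

        /* Intel Language Extensions for Offload: copy a and b to the
           coprocessor, run the loop there, and copy c back. */
        #pragma offload target(mic) in(a, b : length(n)) out(c : length(n))
        for (int i = 0; i < n; i++)
            c[i] = a[i] + b[i];

        printf("%.1f\n", c[n - 1]);  /* 999 + 1998 = 2997.0 */
        free(a); free(b); free(c);
        return 0;
    }
    ```

    Note that in a real MEX build the link step typically has to go through icc as well, so that the offload runtime and the MIC-side libraries get pulled in; a link step done by MATLAB's default toolchain is one plausible source of the library trouble the poster suspects.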

    How to allocate MICs to all the MPI processes equally for AO?

    Could you please take a look at this problem? My machine has 16 CPUs and 4 MICs (47 cores each), and I run my program with 8 MPI processes (mpi_comm_size = 8), using MKL routines in automatic offload (AO) mode. As you can see in the attached test code, I tried three different methods.
    METHOD-1: I assign each of the 4 MICs to one of the first 4 processes and let the other processes run without a MIC. In this case the program works as expected, and I got the following performance result when solving zgemm for 5k*5k complex dense matrices.
