Hey, I would like to view the version of MPSS installed on my machine. I ran the following command:

# /opt/intel/mic/bin/micinfo

However, my output says "MPSS Version: Not Available". I am able to start and stop the MPSS stack with the following commands, so I am sure MPSS is installed:

$ sudo service mpss start/stop

I have attached a screenshot of my output from /opt/intel/mic/bin/micinfo for your reference. Kindly suggest how I can find the version.

unable to compile program

Here is my sample program:

use omp_lib
program openmp_mic
 !dir$ offload begin target (mic)  
     !$omp parallel 
      print *,'The number of threads is ',threadcount
      print *,'hello from thread no ',omp_get_num_thread(),'of',threadcount
     !$omp end parallel
 !dir$ end offload
end program openmp_mic

I got the following error on compilation:

icc openmp_mic.f90

error #6785: This name does not match the unit name.   [OPENMP_MIC]

end program openmp_mic
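For reference, error #6785 appears to come from the `use omp_lib` line sitting before the `program` statement: the compiler starts an unnamed main program at the `use`, so `end program openmp_mic` no longer matches any unit name. A sketch of a version that should compile (assuming it is built with ifort and -qopenmp rather than icc; the undeclared `threadcount` is replaced with `omp_get_num_threads()`, and the misspelled `omp_get_num_thread()` with `omp_get_thread_num()`):

```fortran
program openmp_mic
  use omp_lib            ! use statements belong after the program statement
  implicit none
  !dir$ offload begin target (mic)
  !$omp parallel
  print *, 'The number of threads is ', omp_get_num_threads()
  print *, 'hello from thread no ', omp_get_thread_num(), &
           ' of ', omp_get_num_threads()
  !$omp end parallel
  !dir$ end offload
end program openmp_mic
```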

Xeon Phi - can't offload an unordered_map

I'm trying to run my code in parallel on both the CPU and the MIC. When I run only the CPU code, everything is fine, but when I try to offload a few variables, I can't even compile the program. Here is the code:

std::unordered_map<int, int> repetido;

My error message is:

            error: no operator "!" matches these operands
            operand types are: ! std::unordered_map<int, int, std::hash<int>, std::equal_to<int>, std::allocator<std::pair<const int, int>>>
          if (!rep[0])

Viable configuration for a home lab - 8 31S1P on a 4-slot PCIe 3.0 motherboard?

Building my basement laboratory for math, machine learning, parallel programming, Kaggling...

Is it electrically possible to mount 2 cards per PCIe 3.0 x16 slot with daughter boards and extenders on a motherboard?

At 300 watts TDP per card, it would likely take three 1000-watt power supplies. But can we get enough power to the individual MICs in this kind of configuration?

Being able to spend on compute cards rather than InfiniBand switches, cards, and platforms seems like a better use of the allowance my wife lets me keep, if possible.



Intel MPSS on Ubuntu and Mellanox OFED 2.3


We recently installed a Phi in a Dell R720 server running Ubuntu 14.04 with Mellanox OFED 2.3.

Section 2.3 of the documentation has instructions to install Mellanox OFED 2.1 to support the host IB adapter. I am running Mellanox OFED 2.3.1 and can't take it down to 2.1. The question is: is it possible to work with Mellanox OFED 2.3.1 on the Phi?

Phi access

Hi all,

I am currently preparing an introductory HPC course for some PhD students at our university. We will work on some very simple example codes (OpenMP and MPI) and test them on the small clusters we have here. Is there any possibility of getting access to a Phi so I could try to run some of this code on it and see how it behaves/scales?

Kind regards!

streaming video thru Phi

I am currently developing a real-time video processing application that runs on a dedicated 2-CPU Xeon Linux box. The application supports multiple video inputs and multiple video outputs with standard image processing like picture-in-a-picture, graphics, language-specific text overlay, etc. It is basically a pipeline-based architecture: a given input video stream is overlaid with language-specific text, and each language-specific stream is then sent to a separate output.

Configuring the Apache Web server to use RDRAND in SSL sessions

Starting with the 1.0.2 release of OpenSSL*, RDRAND has been temporarily removed as a random number source. Future releases of OpenSSL will re-incorporate RDRAND, but will employ cryptographic mixing with OpenSSL's own software-based PRNG. While OpenSSL's random numbers will benefit from the quality of RDRAND, they will not have the same performance as RDRAND alone.
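As a sketch of where this plugs into Apache: mod_ssl's entropy sources are set with the `SSLRandomSeed` directive, and `SSLCryptoDevice` routes OpenSSL operations through a named engine. Whether an `rdrand` engine is available depends on the OpenSSL build, per the article; check with `openssl engine` before relying on it.

```apache
# Seed mod_ssl's PRNG at startup and per-connection (standard sources).
SSLRandomSeed startup file:/dev/urandom 512
SSLRandomSeed connect builtin

# Route OpenSSL crypto through the rdrand engine, if the OpenSSL build
# exposes it; "builtin" is the default when no engine is configured.
SSLCryptoDevice rdrand
```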
