Cluster Computing

Oracle VirtualBox

Hi,

I'm thinking of migrating from Windows Fortran with offload to native Linux Fortran, as all the offload lines are getting cumbersome. I also expect to migrate to standalone Knights Landing. Does anyone have experience working inside Oracle VirtualBox? Is there a particular Linux distribution that seems to work best?

thanks

What's the Best Solution for Data Transfer between the Host and the MIC?

Hi, All.

My team is currently trying to speed up parallel database processing using the Xeon Phi coprocessor, and we are focusing on SCIF. After some experiments, we've decided to use the SCIF RMA read/write calls instead of the socket-like SCIF messaging layer. I also know about SCIF mapped remote memory, but I don't know how to choose between these two methods, SCIF RMA and SCIF mapped remote memory. Is there any other technique suitable for transferring data between the host and the MIC? Is there an experiment that could actually help me decide?
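For anyone weighing the same choice: as a rule of thumb, scif_writeto/scif_readfrom drive the DMA engine and suit large bulk transfers, while scif_mmap exposes the peer's registered window to ordinary CPU loads and stores, which tends to win for small, latency-sensitive accesses. Below is a minimal host-side sketch of both paths against one registered window; the port number, buffer size and the card-side peer (which must accept the connection, register its own window and send back the offset) are assumptions, not details from the post.

    /* Host-side sketch only; compile and link with -lscif from MPSS.
       PEER_PORT and BUF_SIZE are placeholders, error handling is minimal. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <scif.h>

    #define PEER_PORT 2050          /* port the assumed card-side server listens on */
    #define BUF_SIZE  (4UL << 20)   /* 4 MB; scif_register needs page-aligned addr/len */

    int main(void)
    {
        struct scif_portID peer = { .node = 1, .port = PEER_PORT };   /* node 1 = mic0 */
        scif_epd_t epd = scif_open();
        scif_bind(epd, 0);                                 /* 0: let SCIF pick a local port */
        if (scif_connect(epd, &peer) < 0) { perror("scif_connect"); return 1; }

        void *buf;
        if (posix_memalign(&buf, 0x1000, BUF_SIZE)) return 1;         /* page-aligned buffer */
        off_t loff = scif_register(epd, buf, BUF_SIZE, 0,
                                   SCIF_PROT_READ | SCIF_PROT_WRITE, 0);

        off_t roff;                                        /* remote window offset, sent by the card */
        scif_recv(epd, &roff, sizeof roff, SCIF_RECV_BLOCK);

        /* RMA path: push the whole buffer into the card's registered window (DMA). */
        scif_writeto(epd, loff, BUF_SIZE, roff, 0);

        /* Mapped-remote-memory path: map the same window and store into it directly. */
        char *remote = scif_mmap(NULL, BUF_SIZE, SCIF_PROT_WRITE, 0, epd, roff);
        if (remote != SCIF_MMAP_FAILED)
            remote[0] = 1;                                 /* plain CPU store over PCIe */

        scif_close(epd);
        return 0;
    }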

Thanks a lot :)

Bryan

 

How much memory can I allocate with _Cilk_shared_malloc (virtual shared memory)?

When I use virtual shared memory with the MIC (Xeon Phi), _Cilk_shared_malloc fails with an allocation error when I try to allocate 256 MB, but allocating 128 MB works. I want to know whether _Cilk_shared_malloc has a restriction on how much memory can be allocated. Does this mean I cannot allocate 256 MB or more?
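I can't say where the limit on this particular system comes from, but the first thing worth checking is whether the allocation call itself reports failure, and at exactly what size. Below is a small probe using the documented shared-memory allocator _Offload_shared_malloc from offload.h; the probe sizes are arbitrary, and it is an assumption that your _Cilk_shared_malloc corresponds to this call.

    /* Probe how large a virtual-shared allocation succeeds; sizes are arbitrary examples. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <offload.h>     /* declares _Offload_shared_malloc / _Offload_shared_free */

    int main(void)
    {
        size_t sizes[] = { 128UL << 20, 256UL << 20, 512UL << 20 };   /* 128/256/512 MB */
        for (int i = 0; i < 3; i++) {
            void *p = _Offload_shared_malloc(sizes[i]);   /* shared between host and card */
            printf("%zu MB: %s\n", sizes[i] >> 20, p ? "ok" : "FAILED");
            if (p)
                _Offload_shared_free(p);
        }
        return 0;
    }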

SIGSEGV error after adding SIMD instructions

Hi all, I rewrote a program to exploit SIMD.

However, I seem to be encountering a segmentation fault. How do I find out what's wrong?

Attached are the original program and the SIMD test program that fails.

test.F90 is just loop.F90 with minor modifications, yet for some reason it fails, and I don't know why.

I use the Intel 15 compilers.

Any help? 
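Without the attached loop.F90/test.F90 this can't be diagnosed here, so the snippet below only illustrates one of the most common causes of a segmentation fault after adding SIMD directives: telling the compiler the data is aligned when it is not. The same pitfall exists with the Fortran directives; the sketch is in C and every name in it is made up. Building the real program with -g -traceback (and temporarily with -check bounds) should point to the faulting line.

    /* Illustration only: the aligned clause promises 64-byte alignment; if the
       array came from an ordinary allocation, the compiler may emit aligned
       vector loads/stores that fault with SIGSEGV. */
    #define _POSIX_C_SOURCE 200112L
    #include <stdio.h>
    #include <stdlib.h>

    static void scale(double *a, int n)
    {
    #pragma omp simd aligned(a:64)      /* wrong if 'a' is not really 64-byte aligned */
        for (int i = 0; i < n; i++)
            a[i] *= 2.0;
    }

    int main(void)
    {
        int n = 1000;
        double *a;
        /* Fix: actually align the allocation (or drop the aligned clause). */
        if (posix_memalign((void **)&a, 64, n * sizeof *a) != 0)
            return 1;
        for (int i = 0; i < n; i++) a[i] = i;
        scale(a, n);
        printf("a[1] = %f\n", a[1]);
        free(a);
        return 0;
    }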

dapl with MPSS 3.5 and Qlogic HCA

I am running MPSS 3.5 and OFED+ 7.3.1.0.12, and I have two nodes with Phi cards and QLogic HCAs. I believe that if I set I_MPI_FABRICS to use either tcp or tmi everything works, but I've heard that dapl is faster, and I'm having problems getting it to work everywhere. It works when MPI tasks are either only on the hosts or only on the cards of a single host. If there are tasks on both a host and a card, it appears to have problems connecting to the IP address that is added during the ofed-mic service startup (192.0.2.0/24).
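A setup along the following lines is usually what host-plus-card dapl runs need; the provider names are placeholders that must be replaced with whatever /etc/dat.conf actually lists on the hosts and on the cards (ofa-v2-scif0 is the SCIF-backed provider MPSS normally adds), and hello_mpi and the host names are made up. I_MPI_DEBUG=5 makes each rank print which provider it actually picked, which is the quickest way to see where the connection setup breaks down.

    # Placeholders only; take the real provider names from /etc/dat.conf on host and card.
    export I_MPI_FABRICS=shm:dapl
    export I_MPI_DAPL_PROVIDER_LIST=ofa-v2-ib0,ofa-v2-scif0,ofa-v2-mcm-1
    export I_MPI_DEBUG=5
    mpirun -n 2 -host node1 ./hello_mpi : -n 2 -host node1-mic0 ./hello_mpi.mic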

GROMACS recipe for symmetric Intel® MPI using PME workloads

Objectives

This package (scripts with instructions) delivers a build and run environment for symmetric Intel® MPI runs; this file is the README of the package. Symmetric means employing a Xeon® executable and a Xeon Phi™ executable that run together, exchanging MPI messages and collective data via Intel MPI.
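What makes a run symmetric is Intel MPI's colon-separated MPMD launch syntax: one argument set starts the host (Xeon) binary, the other starts the MIC-native binary on the card, and the two run as a single MPI job. The executable names, host names and rank counts below are placeholders rather than part of this package:

    # placeholders: gmx_mpi (host build), gmx_mpi.mic (native build), node0 / node0-mic0
    mpirun -host node0      -n 16 ./gmx_mpi     mdrun -s topol.tpr : \
           -host node0-mic0 -n 60 ./gmx_mpi.mic mdrun -s topol.tpr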
