This article provides code access, build, and run directions for the miniGhost code on Intel® Xeon® processors and Intel® Xeon Phi™ coprocessors.
miniGhost is a Finite Difference mini-application which implements a difference stencil across a homogeneous three-dimensional domain.
The kernels that it contains are:
- computation of stencil operations (see the sketch after this list),
- inter-process boundary (halo, ghost) exchange,
- global summation of grid values.
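The following is a minimal sketch, not miniGhost's actual source, of the first kernel: one sweep of a 7-point difference stencil over the interior of a 3D domain. The array names, domain sizes, and averaging stencil are illustrative assumptions.

```c
/* Illustrative sketch (not miniGhost source): one sweep of a 7-point
 * difference stencil over the interior of a 3D domain. */
#include <stdlib.h>

#define NX 64
#define NY 64
#define NZ 64
#define IDX(i, j, k) ((i) + NX * ((j) + NY * (k)))

void stencil_sweep(const double *in, double *out)
{
    /* Interior points only; the boundary (ghost) cells are filled by
     * the halo-exchange kernel before each sweep. */
    for (int k = 1; k < NZ - 1; k++)
        for (int j = 1; j < NY - 1; j++)
            for (int i = 1; i < NX - 1; i++)
                out[IDX(i, j, k)] =
                    (in[IDX(i, j, k)] +
                     in[IDX(i - 1, j, k)] + in[IDX(i + 1, j, k)] +
                     in[IDX(i, j - 1, k)] + in[IDX(i, j + 1, k)] +
                     in[IDX(i, j, k - 1)] + in[IDX(i, j, k + 1)]) / 7.0;
}

int main(void)
{
    double *a = calloc(NX * NY * NZ, sizeof *a);
    double *b = calloc(NX * NY * NZ, sizeof *b);
    stencil_sweep(a, b);
    free(a);
    free(b);
    return 0;
}
```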
There is a known compatibility problem between Hydra and versions 7.2 and 7.3 of the GNU* Debugger (gdb*): Hydra hangs when run under either of these versions.
To run Hydra under gdb, use a different version of gdb.
So far, this issue has only been observed on Linux*.
In this continuation of the blog, Hybrid MPI and OpenMP* Model, I will discuss the use of MPI one-sided communication and demonstrate running a one-sided application in symmetric mode on an Intel® Xeon® host and two coprocessors connected via PCIe.
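As a taste of the one-sided model discussed here, the sketch below has rank 0 put a value directly into rank 1's memory window using fence synchronization, with no matching receive on the target. It assumes at least two ranks; the buffer layout and the value 42 are illustrative.

```c
/* Minimal sketch of MPI one-sided communication (RMA): each rank
 * exposes one int in a window; rank 0 writes into rank 1's window. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, buf = -1;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Expose one int per rank as the RMA window. */
    MPI_Win_create(&buf, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);
    if (rank == 0) {
        int value = 42;
        /* One-sided write; the target makes no receive call. */
        MPI_Put(&value, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
    }
    MPI_Win_fence(0, win);

    if (rank == 1)
        printf("rank 1 received %d via MPI_Put\n", buf);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```

In symmetric mode this pattern works unchanged: the launcher simply places some ranks on the host and others on the coprocessors.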
The Intel® Cluster Ready architecture specification version 1.3.1 was officially released in July 2014. This is a minor update from version 1.3; most of the changes between the versions relate to the following:
- removal of library or tool requirements based on analysis of Intel® Cluster Ready registered applications
- updated/refreshed required versions of key libraries and tools
Details of the updates to the architecture requirements:
4.2 Base Software Requirements
The following table lists all supported versions of the Intel® MPI Library and the Intel® Manycore Platform Software Stack (MPSS). Use it as a reference for cross-compatibility among the library, the MPSS, and all supported fabrics.
In the High Performance Computing (HPC) area, parallel programming techniques such as MPI, OpenMP*, one-sided communication, SHMEM, and Fortran coarrays are widely used. This blog is part of a series introducing these techniques, especially how to use them on the Intel® Xeon Phi™ coprocessor. This first post discusses the basic usage of the hybrid MPI/OpenMP model.
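The following is a minimal hybrid MPI/OpenMP sketch of that model: each MPI rank spawns an OpenMP team, and every thread reports its rank and thread id. Requesting MPI_THREAD_FUNNELED is a common choice when only the master thread makes MPI calls; the file name and compiler invocation are illustrative.

```c
/* Minimal hybrid MPI/OpenMP sketch: one MPI rank per node (or
 * coprocessor), one OpenMP team per rank.
 * Compile, e.g., with: mpiicc -qopenmp hybrid.c */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, provided;

    /* MPI_THREAD_FUNNELED: only the thread that called MPI_Init_thread
     * makes MPI calls; the OpenMP threads do the computation. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    #pragma omp parallel
    {
        printf("rank %d, thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
```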