Intel® Many Integrated Core (Intel® MIC) Architecture

GROMACS recipe for symmetric Intel® MPI using PME workloads

Objectives

This package (scripts plus instructions) delivers a build and run environment for symmetric Intel® MPI runs; this file is the package's README. Symmetric means that an Intel® Xeon® executable and an Intel® Xeon Phi™ executable run together, exchanging MPI messages and collective data via Intel MPI.
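
As a quick illustration of what a symmetric run means in practice, the sketch below is not part of the package; the compiler wrappers, host names, and rank counts in the comments are placeholders. It is a minimal MPI program in which every rank reports its processor name, so you can confirm that ranks were placed on both the Xeon host and the Xeon Phi coprocessor.

    /* symmetric_check.c -- a minimal sketch, not part of the GROMACS package.
     * Each rank prints its processor name so you can confirm that a symmetric
     * launch really placed ranks on both the Xeon host and the Xeon Phi card.
     *
     * Illustrative build/launch lines (wrappers, host names, and rank counts
     * are placeholders; depending on the Intel MPI version you may also need
     * to set I_MPI_MIC=1):
     *   mpiicc       symmetric_check.c -o check.host
     *   mpiicc -mmic symmetric_check.c -o check.mic
     *   mpirun -host node0      -n 4 ./check.host : \
     *          -host node0-mic0 -n 8 ./check.mic
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Get_processor_name(name, &len);

        printf("rank %d of %d runs on %s\n", rank, size, name);

        MPI_Finalize();
        return 0;
    }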

  • Developers
  • Partners
  • Students
  • Linux*
  • Server
  • C/C++
  • Intermediate
  • Intel® Parallel Studio XE Cluster Edition
  • symmetric MPI
  • native MPI
  • cmake
  • heterogeneous clusters
  • Intel® Many Integrated Core (Intel® MIC) Architecture
  • Message Passing Interface
  • OpenMP*
  • Research
  • Cluster computing
  • Intel® Core™ processors
  • Intel® Many Integrated Core architecture
  • Optimization
  • Parallel computing
  • Porting
  • Threading
  • GROMACS recipe for symmetric Intel® MPI using PME workloads

    Objectives

    This package (scripts plus instructions) delivers a build and run environment for symmetric Intel® MPI runs; this file is the package's README. Symmetric means that an Intel® Xeon® executable and an Intel® Xeon Phi™ executable run together, exchanging MPI messages and collective data via Intel MPI.

    An existing GROMACS recipe for symmetric Intel MPI is available at https://software.intel.com/zh-cn/articles/gromacs-for-intel-xeon-phi-coprocessor, but that recipe emphasizes the so-called RF data set and cannot take full advantage of specific Particle Mesh Ewald (PME) configuration options.

  • Best Known Methods for Setting Locked Memory Size

    If you use the Direct Access Programming Library (DAPL) fabric when running your Message Passing Interface (MPI) application, and your application fails with an error message like this:

    ...
    [0] MPI startup(): RLIMIT_MEMLOCK too small
    [0] MPI startup(): RLIMIT_MEMLOCK too small
    libibverbs: Warning: RLIMIT_MEMLOCK is 0 bytes.
    ...
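
    A common remedy is to raise the locked-memory limit on every node, for example through memlock entries in /etc/security/limits.conf or with ulimit -l unlimited in the shell that launches mpirun. The small diagnostic below is my own sketch, not from the article; it only reports the limit that DAPL and libibverbs will see, which makes it easy to check each node before starting the MPI job.

        /* memlock_check.c -- a diagnostic sketch (not from the article): report
         * the locked-memory limit that DAPL/libibverbs will see. A soft limit of
         * 0 reproduces the "RLIMIT_MEMLOCK too small" startup warning above.
         */
        #include <stdio.h>
        #include <sys/resource.h>

        int main(void)
        {
            struct rlimit rl;

            if (getrlimit(RLIMIT_MEMLOCK, &rl) != 0) {
                perror("getrlimit(RLIMIT_MEMLOCK)");
                return 1;
            }

            if (rl.rlim_cur == RLIM_INFINITY)
                printf("RLIMIT_MEMLOCK (soft): unlimited\n");
            else
                printf("RLIMIT_MEMLOCK (soft): %llu bytes\n",
                       (unsigned long long)rl.rlim_cur);

            if (rl.rlim_max == RLIM_INFINITY)
                printf("RLIMIT_MEMLOCK (hard): unlimited\n");
            else
                printf("RLIMIT_MEMLOCK (hard): %llu bytes\n",
                       (unsigned long long)rl.rlim_max);

            return 0;
        }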

    Improving MPI Communication between the Intel® Xeon® Host and Intel® Xeon Phi™

    MPI Symmetric Mode is widely used in systems equipped with Intel® Xeon Phi™ coprocessors. In a system where one or more coprocessors are installed on an Intel® Xeon® host, the Transmission Control Protocol (TCP) is used for MPI messages sent between the host and the coprocessors, or between coprocessors on that same host. For some critical applications, this MPI communication may not be fast enough.
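
    To put a number on "fast enough", a simple ping-pong between one rank on the host and one rank on a coprocessor shows the effective latency and bandwidth of whatever path Intel MPI chose for that traffic. The code below is an illustrative micro-benchmark of my own, not taken from the article; the message size and repetition count are arbitrary.

        /* pingpong.c -- illustrative micro-benchmark (not from the article).
         * Launch with one rank on the host and one on the coprocessor
         * (symmetric mode) and compare the result across fabric settings.
         */
        #include <mpi.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(int argc, char **argv)
        {
            const int reps = 100;
            const int nbytes = 1 << 20;          /* 1 MiB messages */
            int rank, i;
            char *buf;
            double t0, t1;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            buf = (char *)malloc(nbytes);

            MPI_Barrier(MPI_COMM_WORLD);
            t0 = MPI_Wtime();
            for (i = 0; i < reps; ++i) {
                if (rank == 0) {
                    MPI_Send(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                    MPI_Recv(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                             MPI_STATUS_IGNORE);
                } else if (rank == 1) {
                    MPI_Recv(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                             MPI_STATUS_IGNORE);
                    MPI_Send(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
                }
            }
            t1 = MPI_Wtime();

            if (rank == 0)
                printf("avg round trip: %.3f ms, ~%.1f MB/s per direction\n",
                       (t1 - t0) * 1000.0 / reps,
                       2.0 * nbytes * reps / (t1 - t0) / 1e6);

            free(buf);
            MPI_Finalize();
            return 0;
        }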

    Compiling, Configuring and running Lustre on Intel® Xeon Phi™ Coprocessor

    This article enumerates the recommended steps to enable Lustre on the Intel® Xeon Phi™ coprocessor. The steps in this document are validated against the Intel® Manycore Platform Software Stack (Intel® MPSS) versions 3.3.x and 3.4.x and with Lustre versions 2.5.x and 2.6.0.

    Intel® MPSS - changes in release cadence and support

    Until now, Intel has released its Manycore Platform Software Stack (Intel® MPSS) on a quarterly cadence, with each release supported for one year from the date it was issued.

    Beginning in October 2014, the release timing and the support lifetime of Intel® MPSS are changing to support divergent community needs:

    Working with Mellanox* InfiniBand Adapter on System with Intel® Xeon Phi™ Coprocessors

    InfiniBand* is a network communications protocol commonly used in HPC because it offers very high throughput. Intel and Mellanox* are among the most popular InfiniBand* adapter manufacturers. In this blog, I will share my experience installing and testing Mellanox* InfiniBand* adapter cards on systems containing Intel® Xeon Phi™ coprocessors, with three different versions of OFED* (Open Fabrics Enterprise Distribution): OpenFabrics OFED-1.5.4.1, OpenFabrics OFED-3.5.2-mic, and Mellanox* OFED 2.1.

    Configuring a Mellanox* InfiniBand Adapter on a System with Intel® Xeon Phi™ Coprocessors

    InfiniBand* is a network communications protocol commonly used in HPC because it offers very high throughput. Intel and Mellanox* are among the most popular InfiniBand* adapter manufacturers. In this blog, I will describe how to install and test Mellanox* InfiniBand* adapter cards on systems containing Intel® Xeon Phi™ coprocessors, using three versions of OFED* (Open Fabrics Enterprise Distribution): OpenFabrics OFED-1.5.4.1, OpenFabrics OFED-3.5.2-mic, and Mellanox* OFED 2.1.

    To allow native applications on the coprocessor to communicate with the Mellanox* InfiniBand adapter, the Coprocessor Communication Link (CCL) must be enabled. All three OFED stacks mentioned above support CCL when a Mellanox* InfiniBand adapter is used.

    1. Hardware installation

    Two systems were used, each equipped with an Intel® Xeon® E5-2670 2.60 GHz processor and two Intel® Xeon Phi™ coprocessors. Both systems run RHEL 6.3. They use Gigabit Ethernet adapters and are connected through a Gigabit Ethernet router.
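
    To check that the adapter is actually reachable from the coprocessor once CCL is enabled, a small libibverbs query can be built natively for the card. The sketch below is mine, not from the blog; the build line in the comment assumes the Intel compiler with a native -mmic target, and the program simply lists the devices that the verbs layer can see.

        /* list_hca.c -- a verification sketch (not from the blog): list the
         * InfiniBand devices visible through libibverbs. Built natively for
         * the coprocessor, it should show the Mellanox HCA once CCL is up.
         * Illustrative build line (flags are assumptions):
         *   icc -mmic list_hca.c -o list_hca.mic -libverbs
         */
        #include <stdio.h>
        #include <infiniband/verbs.h>

        int main(void)
        {
            int i, num = 0;
            struct ibv_device **devs = ibv_get_device_list(&num);

            if (devs == NULL || num == 0) {
                fprintf(stderr, "no verbs devices found (is CCL/OFED running?)\n");
                return 1;
            }

            for (i = 0; i < num; ++i)
                printf("device %d: %s\n", i, ibv_get_device_name(devs[i]));

            ibv_free_device_list(devs);
            return 0;
        }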

    Prominent features of the Intel® Manycore Platform Software Stack (Intel® MPSS) version 3.3

    Intel MPSS 3.3 release features

    The Intel® Manycore Platform Software Stack (Intel® MPSS) version 3.3 was released on 14 July 2014. This page lists the prominent features in this release.

  • Developers
  • Linux*
  • Microsoft Windows* (XP, Vista, 7)
  • Microsoft Windows* 8.x
  • Server
  • Beginner
  • Intermediate
  • Intel® Many Integrated Core (Intel® MIC) Architecture
  • Intel® MPSS