ofed

Working with Mellanox* InfiniBand Adapter on Systems with Intel® Xeon Phi™ Coprocessors

InfiniBand* is a network communications protocol commonly used in the HPC area because the protocol offers very high throughput. Intel and Mellanox* are among the most popular InfiniBand* adapter manufacturers. In this blog, I will share my experience of installing and testing Mellanox* InfiniBand* adapter cards with three different versions of OFED* (Open Fabrics Enterprise Distribution), OpenFabrics OFED-1.5.4.1, OpenFabrics OFED-3.5.2-mic, and Mellanox* OFED 2.1, on systems containing Intel® Xeon Phi™ coprocessors.

To allow native applications on the coprocessor to communicate with the Mellanox* InfiniBand adapter, the Coprocessor Communication Link (CCL) must be enabled. All three OFED stacks mentioned above support CCL when used with Mellanox* InfiniBand adapters.
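As a quick sanity check after installing any of these stacks, the adapter should be visible through the verbs interface. The following is a minimal C sketch (assuming libibverbs from the installed OFED stack; compile with something like gcc check_hca.c -libverbs) that lists the detected devices:

    /* List the InfiniBand devices visible through the verbs API. */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices = 0;
        struct ibv_device **devices = ibv_get_device_list(&num_devices);

        if (!devices || num_devices == 0) {
            fprintf(stderr, "No InfiniBand devices found; check the OFED installation.\n");
            return 1;
        }

        for (int i = 0; i < num_devices; ++i)
            printf("Found device: %s\n", ibv_get_device_name(devices[i]));

        ibv_free_device_list(devices);
        return 0;
    }

On a correctly configured system, the Mellanox* adapter should show up with a name such as mlx4_0.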

1. Hardware Installation

Two systems were used, each equipped with one Intel® Xeon® processor E5-2670 (2.60 GHz) and two Intel® Xeon Phi™ coprocessors. Both systems run RHEL 6.3. They use Gigabit Ethernet adapters and are connected through a Gigabit Ethernet router.

Compiling, Configuring, and Running Lustre on the Intel® Xeon Phi™ Coprocessor

This article enumerates the recommended steps to enable Lustre on the Intel® Xeon Phi™ coprocessor. The steps in this document are validated against the Intel® Manycore Platform Software Stack (Intel® MPSS) versions 3.3.x and 3.4.x, and with Lustre versions 2.5.x and 2.6.0.


Intel® Cluster Ready Architecture Specification version 1.3.1 Summary

The Intel® Cluster Ready architecture specification version 1.3.1 was officially released in July 2014. This is a minor update from version 1.3; most of the changes between the versions relate to the following:

  • removal of library or tool requirements based on analysis of Intel® Cluster Ready registered applications
  • updated/refreshed required versions of key libraries and tools

Details of the updates to the architecture requirements:

4.2 Base Software Requirements

  • Developers
  • Partners
  • Students
  • Linux*
  • Advanced
  • Beginner
  • Intermediate
  • Intel® Cluster Ready
  • HPC
  • cluster administration
  • cluster tools
  • MPI
  • ofed
  • Troubleshooting OFED* for Use with the Intel® MPSS

    This is a list of hopefully helpful hints for troubleshooting problems with OFED* and InfiniBand* when used with the Intel® Xeon Phi™ coprocessor.

    1) Finding Documentation

    The primary sources for information on OFED for the Intel Xeon Phi coprocessor are:

  • Symmetric Mode MPI Performance without InfiniBand*

    Symptom

    Slow host-coprocessor MPI communications in systems with no InfiniBand* HCA.  If running with I_MPI_DEBUG=2 or higher, you will see one of the following messages indicating that the TCP fabric has been selected:

    [0] MPI startup(): tcp data transfer mode
    [0] MPI startup(): shm and tcp data transfer modes
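    To reproduce this check, any small MPI program will do; the sketch below (hypothetical file name fabric_check.c, compiled with the Intel® MPI Library's mpiicc wrapper) simply reports where each rank runs. Launching it with I_MPI_DEBUG=2 prints startup messages like those above, revealing which fabric was selected:

        /* Minimal MPI program for observing fabric selection at startup.
         * Example launch (assumed): I_MPI_DEBUG=2 mpirun -n 2 ./fabric_check */
        #include <stdio.h>
        #include <mpi.h>

        int main(int argc, char **argv)
        {
            int rank = 0, len = 0;
            char name[MPI_MAX_PROCESSOR_NAME];

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Get_processor_name(name, &len);
            printf("rank %d running on %s\n", rank, name);
            MPI_Finalize();
            return 0;
        }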
  • Linux*
  • Server
  • Intermediate
  • Intel® MPI Library
  • tips and tricks
  • Configuring Intel® Xeon Phi™ coprocessors inside a cluster
  • Intel® MPSS
  • ofed
  • Message Passing Interface
  • Intel® Many Integrated Core Architecture
  • Understanding the InfiniBand Subnet Manager

    The InfiniBand subnet manager (OpenSM) assigns a Local IDentifier (LID) to each port connected to the InfiniBand fabric and builds a routing table based on the assigned LIDs.

    There are two types of subnet managers: software based and hardware based. Hardware-based subnet managers are typically part of the firmware of the attached InfiniBand switch. A software subnet manager is not necessary if a hardware-based subnet manager is active.
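    For illustration, the LID that the subnet manager assigned to a port can be read back through the verbs API. This is a minimal C sketch (assuming libibverbs and that port 1 of the first device is connected to the fabric):

        /* Print the LID the subnet manager assigned to port 1 of the
         * first verbs device (sketch; error handling kept minimal). */
        #include <stdio.h>
        #include <infiniband/verbs.h>

        int main(void)
        {
            struct ibv_device **devices = ibv_get_device_list(NULL);
            if (!devices || !devices[0]) {
                fprintf(stderr, "no verbs devices found\n");
                return 1;
            }

            struct ibv_context *ctx = ibv_open_device(devices[0]);
            struct ibv_port_attr attr;

            if (ctx && ibv_query_port(ctx, 1, &attr) == 0)
                printf("port 1 LID: 0x%x\n", attr.lid);

            if (ctx)
                ibv_close_device(ctx);
            ibv_free_device_list(devices);
            return 0;
        }

    If no subnet manager is running, the port typically stays in the INIT state and the LID remains 0.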

  • Developers
  • Linux*
  • InfiniBand
  • Subnet Manager
  • OpenSM
  • ofed
  • Intel® Cluster Ready