Intel® Many Integrated Core (Intel® MIC) Architecture

Best Known Methods for Setting Locked Memory Size

If you use the Direct Access Programming Library (DAPL) fabric when running your Message Passing Interface (MPI) application, and your application fails with an error message like this:

[0] MPI startup(): RLIMIT_MEMLOCK too small
[0] MPI startup(): RLIMIT_MEMLOCK too small
libibverbs: Warning: RLIMIT_MEMLOCK is 0 bytes.
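This warning means the per-process locked-memory limit (RLIMIT_MEMLOCK) is too small for DAPL to register pinned memory. A minimal sketch of the usual remedy, assuming a Linux host with PAM-based limits (the values and the `*` wildcard are illustrative):

```shell
# Show the current locked-memory limit (in kilobytes, or "unlimited")
ulimit -l

# Raise it for the current shell before launching the MPI job
# (allowed only up to the hard limit unless you are root):
#   ulimit -l unlimited

# To make the change persistent for all users, add entries like these
# to /etc/security/limits.conf, then log in again:
#   *    soft    memlock    unlimited
#   *    hard    memlock    unlimited
```

Note that the new limit takes effect only in sessions started after the change, so the MPI launcher must be restarted from a fresh login.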

Configuring a Mellanox* InfiniBand Adapter on a System with Intel® Xeon Phi™ Coprocessors

InfiniBand* is a network communications protocol commonly used in the HPC area because it offers very high throughput. Intel and Mellanox* are among the most popular InfiniBand* adapter manufacturers. In this blog, I will describe how to install and test a Mellanox* InfiniBand* adapter card on systems with Intel® Xeon Phi™ coprocessors using three versions of OFED* (Open Fabrics Enterprise Distribution): OpenFabrics OFED-, OpenFabrics OFED-3.5.2-mic, and Mellanox* OFED 2.1.

To allow native applications on the coprocessor to communicate with the Mellanox* InfiniBand adapter, the Coprocessor Communication Link (CCL) must be enabled. All three OFED stacks mentioned above support CCL when a Mellanox* InfiniBand adapter is used.

1. Hardware Installation

Two systems are used, each equipped with one Intel® Xeon® E5-2670 2.60 GHz processor and two Intel® Xeon Phi™ coprocessors. Both systems run RHEL 6.3. They are connected through a gigabit Ethernet router using gigabit Ethernet adapters.
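Once the cards are seated, a quick sanity check (a hedged sketch; the grep patterns are illustrative and the exact PCI device strings vary by adapter and coprocessor model) is to confirm that both the InfiniBand adapter and the coprocessors are visible on the PCI bus of each host:

```shell
# Confirm the Mellanox* adapter is visible on the PCI bus
lspci | grep -i "mellanox" || echo "No Mellanox adapter found on the PCI bus"

# Confirm the Intel Xeon Phi coprocessors are visible as well
lspci | grep -i "co-processor" || echo "No coprocessor found on the PCI bus"
```

If either device is missing, reseat the card and check the BIOS settings before installing any OFED stack.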

Improving MPI Communication between the Intel® Xeon® Host and Intel® Xeon Phi™ Coprocessor

MPI Symmetric Mode is widely used in systems equipped with Intel® Xeon Phi™ coprocessors. In a system where one or more coprocessors are installed on an Intel® Xeon® host, Transmission Control Protocol (TCP) is used for MPI messages sent between the host and coprocessors or between coprocessors on that same host. For some critical applications this MPI communication may not be fast enough.
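One common way to move off TCP, sketched here under the assumption that an Intel MPI Library installation and a DAPL provider are available, is to select a faster fabric via the I_MPI_FABRICS environment variable (the hostnames below are illustrative):

```shell
# Use shared memory inside a node and DAPL between the host and coprocessors
# instead of the default TCP path
export I_MPI_FABRICS=shm:dapl

# Then launch the symmetric-mode job as usual, e.g.:
#   mpirun -n 4 -hosts node0,node0-mic0 ./my_app
echo "I_MPI_FABRICS=$I_MPI_FABRICS"
```

Whether this helps depends on the OFED stack installed; the InfiniBand configuration articles in this series cover the prerequisites.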

Compiling, Configuring and running Lustre on Intel® Xeon Phi™ Coprocessor

This article enumerates the recommended steps to enable Lustre on the Intel® Xeon Phi™ coprocessor. The steps in this document are validated against the Intel® Manycore Platform Software Stack (Intel® MPSS) versions 3.3.x and 3.4.x and with Lustre versions 2.5.x and 2.6.0.

Intel® MPSS - changes in release cadence and support

Up until now, Intel has been releasing its Manycore Platform Software Stack (Intel® MPSS) on a quarterly cadence, with each release being supported for 1 year from the date it was issued.

Beginning October 2014, the release timing and support lifetime of Intel® MPSS are changing to support divergent community needs:


Prominent features of the Intel® Manycore Platform Software Stack (Intel® MPSS) version 3.3

Intel MPSS 3.3 release features

The Intel® Manycore Platform Software Stack (Intel® MPSS) version 3.3 was released on July 14, 2014. This page lists the prominent features in this release.

Seven-Part Intel® Xeon Phi™ Coprocessor Compiler Video Series

Just Released! This two-day webinar series introduces you to the world of multicore and manycore computing with Intel® Xeon® processors and Intel® Xeon Phi™ coprocessors. Expert technical teams at Intel discuss development tools, programming models, vectorization, and execution models that will power up your development efforts and get the best out of your applications and platforms.

