Intel® Direct Ethernet Transport

Product Overview

The Intel® Direct Ethernet Transport (Intel® DET) project provides two components for faster message passing on commodity Ethernet fabrics.  The first is a Linux kernel driver and user-mode library providing RDMA/IPC semantics similar to InfiniBand® and iWARP technologies.  The second is a uDAPL 1.2 provider library that presents a standardized interface to third-party software such as Intel® MPI.  In addition to delivering superior message passing performance compared to a traditional TCP/IP socket stack, Intel® DET gives cluster software developers the opportunity to work with RDMA semantics without investing in a specific RDMA technology.
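
To give a feel for that standardized interface, here is a minimal sketch, assuming a C toolchain and the DAT registry library (linked with -ldat), that opens a uDAPL 1.2 interface adapter (IA) and closes it again.  The IA name "det0" is hypothetical; the actual name is whatever the DET provider registers in /etc/dat.conf.

    #include <stdio.h>
    #include <dat/udat.h>

    int main(void)
    {
        DAT_EVD_HANDLE async_evd = DAT_HANDLE_NULL; /* let the provider create it */
        DAT_IA_HANDLE  ia;
        DAT_RETURN     ret;

        /* "det0" is a placeholder IA name for illustration; the real name
         * comes from the provider's entry in /etc/dat.conf. */
        ret = dat_ia_open("det0", 8 /* async EVD queue length */, &async_evd, &ia);
        if (ret != DAT_SUCCESS) {
            fprintf(stderr, "dat_ia_open failed: 0x%x\n", (unsigned)ret);
            return 1;
        }

        /* ... create a protection zone, EVDs, and endpoints (queue pairs) ... */

        dat_ia_close(ia, DAT_CLOSE_GRACEFUL_FLAG);
        return 0;
    }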

Features and Benefits

With zero-copy transmit, a lightweight protocol, and asynchronous queue pair interfaces, message latencies can be significantly reduced compared to the traditional TCP/IP socket interface on the same Ethernet fabric.  These benefits can be exploited by cluster application writers and by message passing libraries such as Intel® MPI; a sketch of the asynchronous post/completion model follows the feature list below.

  • uDAPL 1.2 compatible provider library
  • Thoroughly tested with Intel® MPI, demonstrating message latency improvements of as much as 30% over the smm/socket device
  • Superior scaling compared to TCP/IP
  • Compatible with any Ethernet device using the Linux net_device interface
  • IEEE-registered EtherType, allowing coexistence with TCP/IP over the same Ethernet interface
  • Zero copy transmit, single copy receive
  • Application development support through manual pages and header files
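
As a taste of the asynchronous queue pair model, the fragment below is a hedged sketch against the uDAPL 1.2 API: a send is posted without blocking, and its completion is reaped later from an event dispatcher (EVD).  The function name is ours, and the connected endpoint, registered buffer (LMR), and DTO event dispatcher are assumed to have been set up already; that setup is elided here.

    #include <stdint.h>
    #include <dat/udat.h>

    DAT_RETURN send_and_wait(DAT_EP_HANDLE ep, DAT_EVD_HANDLE dto_evd,
                             DAT_LMR_CONTEXT lmr_ctx, void *buf, DAT_VLEN len)
    {
        DAT_LMR_TRIPLET iov;
        DAT_DTO_COOKIE  cookie;
        DAT_EVENT       event;
        DAT_COUNT       nmore;
        DAT_RETURN      ret;

        iov.lmr_context     = lmr_ctx;
        iov.pad             = 0;
        iov.virtual_address = (DAT_VADDR)(uintptr_t)buf;
        iov.segment_length  = len;
        cookie.as_64        = 0;

        /* The post returns immediately; the transfer proceeds asynchronously. */
        ret = dat_ep_post_send(ep, 1, &iov, cookie, DAT_COMPLETION_DEFAULT_FLAG);
        if (ret != DAT_SUCCESS)
            return ret;

        /* Reap the data-transfer completion from the event dispatcher. */
        return dat_evd_wait(dto_evd, DAT_TIMEOUT_INFINITE, 1, &event, &nmore);
    }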


Technical Requirements

Intel® DET kernel driver:
  • The kernel driver and user-mode library are delivered as source compatible with Linux kernel versions 2.6.9 and above.
  • The build process produces RPM packages for easy distribution across a cluster.
Intel® DET uDAPL Provider Library:
  • 64-bit Linux distribution (x86_64)
  • For this technology preview, the uDAPL provider requires a genuine Intel processor.  Attempts to run on a non-Intel processor print an error message to standard error and exit the application; a sketch of how such a vendor check is commonly written follows this list.
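
This is not the provider's actual code, but a minimal sketch, assuming GCC on an x86 system with <cpuid.h>, of how a "GenuineIntel" vendor test is typically implemented:

    #include <stdio.h>
    #include <string.h>
    #include <cpuid.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;
        char vendor[13];

        if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
            return 1;

        /* CPUID leaf 0 returns the vendor string in EBX, EDX, ECX. */
        memcpy(vendor + 0, &ebx, 4);
        memcpy(vendor + 4, &edx, 4);
        memcpy(vendor + 8, &ecx, 4);
        vendor[12] = '\0';

        if (strcmp(vendor, "GenuineIntel") != 0) {
            fprintf(stderr, "provider requires a genuine Intel processor\n");
            return 1;
        }
        return 0;
    }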

Frequently Asked Questions

Q: Do the Intel® DET kernel driver and protocol coexist with TCP/IP over the same Ethernet interface?

A: Yes.  Intel® DET attaches to the standard Linux Ethernet driver interface and uses an IEEE-registered EtherType, allowing the kernel to segregate TCP/IP and Direct Ethernet Transport packets.
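
For illustration, this condensed sketch shows how a kernel module claims a private EtherType through the standard net_device packet interface.  The value 0x88B5 (the IEEE local-experimental EtherType) and all names here are stand-ins, not DET's actual sources, and the receive-handler signature varies slightly across 2.6 kernel versions.

    #include <linux/module.h>
    #include <linux/netdevice.h>
    #include <linux/skbuff.h>
    #include <linux/if_ether.h>

    #define ETH_P_EXAMPLE 0x88B5  /* illustrative; DET registers its own value */

    static int example_rcv(struct sk_buff *skb, struct net_device *dev,
                           struct packet_type *pt, struct net_device *orig_dev)
    {
        /* Protocol-specific receive processing would happen here. */
        kfree_skb(skb);
        return 0;
    }

    static struct packet_type example_pt = {
        .func = example_rcv,
    };

    static int __init example_init(void)
    {
        example_pt.type = htons(ETH_P_EXAMPLE);
        dev_add_pack(&example_pt);  /* kernel now demuxes this EtherType to us */
        return 0;
    }

    static void __exit example_exit(void)
    {
        dev_remove_pack(&example_pt);
    }

    module_init(example_init);
    module_exit(example_exit);
    MODULE_LICENSE("GPL");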

Q: Is the Intel® DET protocol routable?

A: No, Intel® DET is a layer 2 protocol intended for small- to medium-sized clusters connected to a common layer 2 subnet.

Q: Are there any scaling limits?

A: In theory, no.  In practice, cluster size is limited by the application's communication patterns and the speed of the fabric.  We have run workload benchmarks over a 1 GigE fabric employing 128 processes across 64 nodes.  In these runs, Intel® DET showed superior scaling to TCP/IP.

Q: Will Intel® DET work with any Ethernet NIC?

A: Yes, Intel® DET uses the standard Linux Ethernet driver interface.  However, some NIC drivers support interrupt coalescing, which can defer interrupts.  For the best performance, the NIC/driver should be configured to generate an interrupt for each received packet.
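
Coalescing is usually tuned with the ethtool utility; the sketch below drives the same SIOCETHTOOL ioctl directly to request one interrupt per received frame (roughly equivalent to "ethtool -C eth0 rx-usecs 0 rx-frames 1").  It assumes a Linux system, requires root, and not every driver honors these fields.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <linux/ethtool.h>
    #include <linux/sockios.h>

    int main(int argc, char **argv)
    {
        const char *ifname = argc > 1 ? argv[1] : "eth0";
        struct ethtool_coalesce ec;
        struct ifreq ifr;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        if (fd < 0) { perror("socket"); return 1; }

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
        ifr.ifr_data = (char *)&ec;

        memset(&ec, 0, sizeof(ec));
        ec.cmd = ETHTOOL_GCOALESCE;          /* read current settings */
        if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) { perror("ETHTOOL_GCOALESCE"); return 1; }

        ec.rx_coalesce_usecs       = 0;      /* no interrupt delay */
        ec.rx_max_coalesced_frames = 1;      /* one interrupt per frame */
        ec.cmd = ETHTOOL_SCOALESCE;          /* write settings back */
        if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) { perror("ETHTOOL_SCOALESCE"); return 1; }

        close(fd);
        return 0;
    }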

Q: Is the Intel® DET uDAPL provider compatible with OpenFabrics distributions?

A: Yes, although the RPM installation will complain about a conflicting package name.  Refer to the release notes for instructions on installing the provider in an OpenFabrics environment.

Q: Can I use Intel® DET with message passing libraries?

A: Yes, the Intel® DET uDAPL provider can be used with any application or message passing library that is compatible with uDAPL version 1.2.  We have done extensive testing with Intel® MPI.  Refer to the release notes on how to configure the Intel® MPI environment to run with Intel® DET.


Please visit the Intel® Direct Ethernet Transport Forum and share your thoughts.

Primary Technical Contacts

Roy Larsen is a software engineer in the Cluster Software Technology Group.  Since joining Intel in 1988, Roy has worked on networking protocols from OSI to the message passing software of the Intel Paragon supercomputers, as well as the management network architecture of the world's first teraflop computer.  His research interests are in RDMA and direct data placement techniques in cluster environments.

Jerrie Coffman is a software engineer in the Cluster Software and Technology Group.  Jerrie joined Intel in 1982; his background includes system test, firmware, diagnostics, and device driver development for Intel's family of supercomputer systems.  In recent years, Jerrie's research has included the design and implementation of high-performance I/O and communication protocols, with emphasis on scalable server technologies.


For more complete information about compiler optimizations, see our Optimization Notice.