User Space Networking Fuels NFV Performance

It is an exciting time to be a software developer in the networking space, and as the role of the engineer changes, so too do the rules.

For 15 years, the traditional thinking behind high-performance networking has been to push as much packet processing functionality as possible into the kernel. That model has been changing as the cost of crossing the divide between kernel and user space, context switching on interrupts to service packets, and copying data has limited the performance of packet processing applications.

Many of these lessons have been discussed and implemented by projects like mTCP. The techniques from the mTCP project have since been adopted by other similar projects, and include replacing expensive system calls with shared memory access between trusted threads on the same CPU core, efficient flow-level event aggregation, and batch packet processing to achieve higher I/O efficiency.

Using these principles, mTCP claims to improve the performance of various popular applications by 33% (SSLShader) to 320% (lighttpd) compared with the native Linux stack.

Scaling for performance on multicore systems has led to new approaches to architecting network software. For example, all of the functionality and processing the kernel has been doing, including the network drivers, is now being placed directly into the user space application, and the application assumes direct control of NUMA placement, core affinity, and parallelism. Keeping all of the kernel and user space network processing in the same execution context keeps the cache fresh and avoids the latency penalty of other designs. These high-performance user space network stacks dramatically reduce latency and CPU utilization while increasing message rate and bandwidth. Additionally, a run-to-completion model can be replicated across the available cores to independently process similar workloads.

With this evolution in network development, we are now seeing a flurry of open source projects that create full network stacks in user space. Some stacks, like mTCP, are built from the ground up, while other projects port FreeBSD's network stack, mainly due to its reputation as a more robust network stack. Moreover, storage solutions have in large part adopted FreeBSD as their core operating system. But Linux enthusiasts also have user space options.

At the heart of the rush to user space, these stacks use DPDK to create an interrupt-free, run-to-completion model for packet processing, adding further performance improvements by mapping the NIC packet buffers directly into user space. In turn, DPDK leverages the features available in these network stacks to manage TCP when the interfaces are unbound from the kernel.

DPDK also enables a few notable vSwitch accelerators. These vSwitches include full implementations of OpenFlow 1.3, and some integrate with OpenStack Neutron.

Below, I have gathered some of the open source projects I found. Whether you decide to use a vSwitch or a full network stack, network developers have a lot of options for bringing their applications to user space to scale performance on multicore systems.

DPDK-Enabled vSwitch

| Project | Description | Getting Started | DPDK | OpenFlow | OpenStack |
|---------|-------------|-----------------|------|----------|-----------|
| Lagopus | Lagopus vSwitch that provides high-performance packet processing | https://github.com/lagopus/lagopus/blob/master/QUICKSTART.md | 1.8.0 | 1.3 | |
| OVS | Open vSwitch is a multilayer software switch platform that supports standard management interfaces and opens the forwarding functions to programmatic extension and control | https://github.com/openvswitch/ovs/blob/master/README.md | 2.0.0 | 1.3 | Icehouse |
| Snabb | Snabb Switch is a simple and fast packet networking toolkit | https://github.com/SnabbCo/snabbswitch/blob/master/README.md | 1.7.1 | | Proposed |
| xDPd | The eXtensible DataPath daemon (xDPd) is a multi-platform, multi-OF-version, open-source datapath built with a focus on performance and extensibility | https://github.com/bisdn/xdpd/blob/stable/README | 1.7.1 | 1.3 | |

User Space Network Stacks Developed From Scratch

| Project | Description | Getting Started | DPDK |
|---------|-------------|-----------------|------|
| mTCP | A highly scalable user-level TCP stack for multicore systems | https://github.com/eunyoung14/mtcp/blob/master/README | 2.0.0 |
| Mirage | OCaml TCP/IP stack for user space | https://github.com/mirage/mirage-tcpip | |
| lwIP | Tiny TCP/IP implementation with a reduced RAM footprint | http://git.savannah.gnu.org/cgit/lwip.git/tree/README | |

User Space Network Stacks Forked or Ported

| Project | Description | Getting Started | Derived From | DPDK |
|---------|-------------|-----------------|--------------|------|
| Arrakis | User space OS for multicore systems | https://github.com/UWNetworksLab/arrakis/blob/master/README_ARRAKIS | lwIP | |
| libuinet | A user-space port of the FreeBSD TCP/IP stack | https://github.com/pkelsey/libuinet/blob/master/README | FreeBSD | |
| NUSE (libos) | A library operating system version of the Linux kernel | https://github.com/libos-nuse/net-next-nuse/wiki/Quick-Start | Linux | 1.8.0 |
| OpenDP | Open data plane on DPDK; a TCP/IP stack for DPDK | https://github.com/opendp/dpdk-odp/wiki | FreeBSD | 2.0.0 |
| OpenOnload | A high-performance user-level network stack | http://www.openonload.org/download/openonload-201205-README.txt | lwIP | |
| OSv | A new open-source operating system for virtual machines | https://github.com/cloudius-systems/osv/blob/master/README.md | FreeBSD | |
| Sandstorm | An open source platform for personal servers | https://github.com/sandstorm-io/sandstorm/blob/master/README.md | FreeBSD | |
