Data Plane Development Kit: Get Started


Packet processing in the fast path involves looking up bit patterns and deciding, at line rate, on an action based on the patterns present. These bit patterns can belong to any of the headers in a packet, at any of several layers: for example Ethernet, VLAN, IP, MPLS, or TCP/UDP. The actions they trigger range from simply forwarding the packet to another port to complex rewrites that map a packet's headers from one set of protocols to another. Add traffic management and policing functions, firewalls, VPNs, and so on, and the complexity of the operations to be performed per packet explodes.

To put the expectations of the fast path in perspective: at a 10 Gig line rate with minimum-size 64-byte packets (84 bytes on the wire once the preamble and inter-frame gap are counted), a processor has to handle 14.88 million packets per second. General-purpose hardware used to be too slow to process packets at this rate, so ASICs and NPUs have performed the data path processing in most production network systems. The obvious problems with this approach are inflexibility, cost, long development cycles, and dependency on a particular vendor. However, with the availability of faster, cheaper CPUs and software accelerations such as the Data Plane Development Kit (DPDK), it is now possible to move these functions onto commodity hardware.
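The arithmetic behind those figures can be checked directly. This is a sketch: the 84-byte figure is the 64-byte minimum Ethernet frame plus the preamble, start-of-frame delimiter, and inter-frame gap that each packet occupies on the wire.

```python
# Line-rate packet math for 10 GbE with minimum-size frames.
# On the wire, each 64-byte frame also carries 7 bytes preamble,
# 1 byte start-of-frame delimiter, and 12 bytes inter-frame gap.
LINE_RATE_BPS = 10_000_000_000           # 10 Gig line rate
WIRE_BYTES_PER_PACKET = 64 + 7 + 1 + 12  # = 84 bytes per packet

pps = LINE_RATE_BPS / (WIRE_BYTES_PER_PACKET * 8)
print(f"{pps / 1e6:.2f} million packets per second")  # 14.88
```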

What is the Data Plane Development Kit?

The DPDK is a set of libraries and drivers for fast packet processing. You can convert a general-purpose processor into your own packet forwarder without having to use expensive custom switches and routers.

The DPDK runs mostly in Linux* user-land, though a FreeBSD* port is available for a subset of DPDK features. DPDK is an open source BSD licensed project. The most recent patches and enhancements, provided by the community, are available in the master branch.

DPDK is not a networking stack and does not provide functions such as Layer-3 forwarding, IPsec, firewalling, and so on. Within the tree, however, various application examples are included to help develop such features.

Some support and services are provided by several companies, including Intel.

The DPDK can:

  • Receive and send packets within the minimum number of CPU cycles (usually fewer than 80 cycles)
  • Develop fast packet capture algorithms (tcpdump-like)
  • Run third-party fast path stacks

Some packet processing functions have been benchmarked up to hundreds of millions of frames per second, using 64-byte packets with a PCIe* NIC.
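To see why the sub-80-cycle figure above matters, divide a core's clock rate by the line-rate packet rate to get the per-packet cycle budget. The core clock below is an assumed figure for illustration only:

```python
# Per-packet cycle budget at 10 GbE line rate (illustrative numbers).
CORE_HZ = 3_000_000_000  # assumption: a 3 GHz core
PPS = 14_880_952         # 64-byte packets at 10 Gig line rate

budget = CORE_HZ / PPS
print(f"~{budget:.0f} cycles available per packet")  # ~202
```

Receiving and sending a packet in under 80 cycles leaves the remaining budget for the application's own lookups and rewrites.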

Using the Data Plane Development Kit

To get started with the DPDK, follow these steps:

  1. If you do not have Linux, download VirtualBox* and set up a Linux virtual machine.
  2. Download the latest DPDK release from dpdk.org, or install it with the package manager for your Linux distribution, for example:
    sudo apt-get install dpdk
    sudo yum install dpdk
  3. Untar the DPDK zip file.
    tar zxvf dpdk-2.1.0.tar.gz
  4. Explore the source code.
    cd dpdk-2.1.0
    To look at what is in the directories, see the video Chapter 1: DPDK Directory Structure and Scripts and Configuring DPDK in the Network Builder University’s DPDK Intro course.

  5. Check the CPU configurations.
    cd tools
    ./cpu_layout.py
  6. Check the NIC configurations.

    [stack@nde01 tools]$ ./dpdk_nic_bind.py --status
    Network devices using DPDK-compatible driver
    0000:03:00.0 'Ethernet Connection X552/X557-AT 10GBASE-T' drv=igb_uio unused=
    0000:03:00.1 'Ethernet Connection X552/X557-AT 10GBASE-T' drv=igb_uio unused=

    Network devices using kernel driver
    0000:05:00.0 'Ethernet Controller XL710 for 40GbE QSFP+' if=ens2f0 drv=i40e unused=igb_uio
    0000:05:00.1 'Ethernet Controller XL710 for 40GbE QSFP+' if=ens2f1 drv=i40e unused=igb_uio
    0000:07:00.0 'I350 Gigabit Network Connection' if=eno1 drv=igb unused=igb_uio *Active*
    0000:07:00.1 'I350 Gigabit Network Connection' if=eno2 drv=igb unused=igb_uio

    Other network devices
  7. Set up the DPDK. The setup.sh script is a useful utility that guides you through compiling the DPDK and configuring your system. To run it you need to be root; from the tools directory simply type:

    ./setup.sh

    The sample output looks like this:

    [stack@nde01 tools]$ ./setup.sh
    RTE_SDK exported as /admin/software_installfiles/dpdk-2.1.0
    Step 1: Select the DPDK environment to build
    [1] i686-native-linuxapp-gcc
    [2] i686-native-linuxapp-icc
    [3] ppc_64-power8-linuxapp-gcc
    [4] tile-tilegx-linuxapp-gcc
    [5] x86_64-ivshmem-linuxapp-gcc
    [6] x86_64-ivshmem-linuxapp-icc
    [7] x86_64-native-bsdapp-clang
    [8] x86_64-native-bsdapp-gcc
    [9] x86_64-native-linuxapp-clang
    [10] x86_64-native-linuxapp-gcc
    [11] x86_64-native-linuxapp-icc
    [12] x86_x32-native-linuxapp-gcc

    Step 2: Setup linuxapp environment
    [13] Insert IGB UIO module
    [14] Insert VFIO module
    [15] Insert KNI module
    [16] Setup hugepage mappings for non-NUMA systems
    [17] Setup hugepage mappings for NUMA systems
    [18] Display current Ethernet device settings
    [19] Bind Ethernet device to IGB UIO module
    [20] Bind Ethernet device to VFIO module
    [21] Setup VFIO permissions

    Step 3: Run test application for linuxapp environment
    [22] Run test application ($RTE_TARGET/app/test)
    [23] Run testpmd application in interactive mode ($RTE_TARGET/app/testpmd)

    Step 4: Other tools
    [24] List hugepage info from /proc/meminfo
    Step 5: Uninstall and system cleanup
    [25] Uninstall all targets
    [26] Unbind NICs from IGB UIO or VFIO driver
    [27] Remove IGB UIO module
    [28] Remove VFIO module
    [29] Remove KNI module
    [30] Remove hugepage mappings

    [31] Exit Script

    You need to select an option and configure it.

    1. Step 1 requires you to select the DPDK environment to build. On a 64-bit Intel platform running Linux, select x86_64-native-linuxapp-gcc, which is option 10.
    2. In Step 2 you set up the Linux app environment. Option 13 loads and compiles the latest IGB UIO module; IGB UIO is a DPDK kernel module that handles PCI enumeration and link-status interrupts in user mode instead of in the kernel. You also need to set up hugepage mappings, e.g. 2 MB hugepages for a NUMA system (option 17). Option 18 displays the current Ethernet device settings, just as in step 6 above. Using option 19, unbind the desired network card from the Linux kernel driver and bind it to the IGB UIO module installed with option 13.

    Caution: Do not bind the network card used for external connectivity to DPDK as this will result in you losing connection to your device.

    For details on the setup, see the video Chapter 2: Configuring DPDK, which is part of the Network Builder University’s DPDK Intro course.
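Option 24 in the menu lists hugepage info from /proc/meminfo. A minimal sketch of what those counters mean, using illustrative sample text rather than output from a real machine:

```python
# Parse the hugepage counters that option 24 reads from /proc/meminfo.
# The sample text below is illustrative, not from a real machine.
sample_meminfo = """\
HugePages_Total:    1024
HugePages_Free:      896
Hugepagesize:       2048 kB
"""

def hugepage_info(meminfo_text):
    info = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        if key.startswith("HugePages") or key == "Hugepagesize":
            info[key] = int(rest.split()[0])  # first token is the number
    return info

info = hugepage_info(sample_meminfo)
# Total reserved memory = page count * page size (Hugepagesize is in kB)
reserved_mb = info["HugePages_Total"] * info["Hugepagesize"] // 1024
print(info)
print(f"{reserved_mb} MB reserved as hugepages")  # 1024 pages of 2 MB
```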

  8. Compile the sample application l2fwd. This is a Layer 2 forwarding application, which forwards packets based on MAC addresses rather than IP addresses.

    export RTE_SDK=<dpdk install directory>
    export RTE_TARGET=x86_64-native-linuxapp-gcc
    cd examples/l2fwd
    make
  9. Run your sample application.

    Usage: ./build/l2fwd -c COREMASK | -l CORELIST -n CHANNELS [options]

    ./build/l2fwd -c 0x3 -n 4 -- -p 0x3

    -c (hexadecimal bit mask of the cores to run on)
    e.g. -c 0x3 means run on cores 0 and 1, because binary 11 = 0x3

    -n (number of memory channels)
    e.g. -n 4 means use all four memory channels available on the Intel Xeon processor

    -p (port mask, an application option given after the -- separator)
    -p 0x3 means use both ports bound to the DPDK, because binary 11 = 0x3
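The masks above are plain bit masks: bit i set means core (or port) i is used. A small sketch that decodes them:

```python
# Decode the hexadecimal core mask / port mask used on the l2fwd
# command line: bit i set selects core (or port) i.
def mask_to_ids(mask_hex):
    mask = int(mask_hex, 16)
    return [i for i in range(mask.bit_length()) if mask & (1 << i)]

print(mask_to_ids("0x3"))  # [0, 1] -> cores (or ports) 0 and 1
print(mask_to_ids("0xa"))  # [1, 3]
```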

    For the purpose of our test, this sample application forwards any packet arriving on port 0 to port 1, and vice versa. Once started, it enters a polling loop in which it polls both ports and prints statistics, such as packets received and sent, every 10 seconds (seen in the last screen shot below). The first screen shot shows the EAL (Environment Abstraction Layer) enumerating the logical cores that map to physical cores. The second screen shot shows which PCI devices/network cards are attached to the DPDK (bound to the IGB UIO driver in step 7.2 above) and therefore use the rte_ixgbe_pmd poll mode driver, and which are still managed by the kernel. Screen shot 3 shows one logical core dedicated to each physical port.
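The port 0 ↔ port 1 pairing described above can be sketched as follows; this mirrors l2fwd's behavior of pairing consecutive enabled ports, assuming an even number of ports:

```python
# Sketch of how l2fwd pairs ports: consecutive enabled ports forward
# to each other (0 <-> 1, 2 <-> 3, ...). Assumes an even port count.
def build_dst_ports(enabled_ports):
    dst = {}
    for a, b in zip(enabled_ports[::2], enabled_ports[1::2]):
        dst[a], dst[b] = b, a  # each port of the pair sends to the other
    return dst

print(build_dst_ports([0, 1]))        # {0: 1, 1: 0}
print(build_dst_ports([0, 1, 2, 3]))  # {0: 1, 1: 0, 2: 3, 3: 2}
```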

For details on execution options, see the video Chapter 3: Compiling and Running a Sample Application in the Network Builder University’s DPDK Intro course.

The sample application shows basic Layer 2 forwarding functionality. If you want to benchmark DPDK and see how it performs, see the Intel Network Builder University course Using DPPD PROX, where Luc Provoost, engineering manager in Intel’s Network Platforms Group, benchmarks a virtual network function using the DPPD (Data Plane Performance Demonstrator) Prox to help software developers understand and use these tools.


DPDK is a software accelerator that runs in user space, bypassing the Linux kernel and giving packet processing applications direct access to NICs, CPUs, and memory. In this article we walked through downloading DPDK 2.1.0 on a Linux platform, compiling and configuring it, and running a sample application.

To understand the terms referenced in the videos above, and for details on the DPDK’s features and programs, refer to the following:

  • DPDK 101. In this Intel Network Builders University course, Andrew Duignan, platform applications engineer at Intel, provides an overview of the DPDK (based on version 2.1), covering licensing, packet processing concepts, DPDK component libraries, Intel® architecture memory issues, and DPDK memory setup.
  • DPDK 201. In this Intel Network Builders University course, MJay, lead platform engineer at Intel, provides an overview and design philosophy of the DPDK (based on version 2.1), the key features and the reasoning behind those key features, and then reviews how sample applications are developed.

Have a question? The SDN/NFV Forum is the perfect place to ask.

About the Author

Sujata Tibrewala is a Networking Community Manager and Developer Evangelist at Intel. She has worked on network software development in the industry and today enjoys working with technologies that optimize commodity hardware for networking functions using SDN and NFV.

For more complete information about compiler optimizations, see our Optimization Notice.