Testing DPDK Performance and Features with TestPMD

This article describes the Data Plane Development Kit (DPDK) TestPMD application. It shows you how to build and configure TestPMD, and how to use it to check the performance and features of different network devices using DPDK.

TestPMD is one of the reference applications distributed with the DPDK package. Its main purpose is to forward packets between Ethernet ports on a network interface. In addition, it allows the user to try out some of the features of the different drivers such as RSS, filters, and Intel® Ethernet Flow Director.

We will also look at the TestPMD runtime command line, which can be used to configure packet forwarding between ports and other features supported by the network interface. TestPMD ships with every release of DPDK, although some command details may differ between versions.

Sample Setups for TestPMD

To demonstrate the use of TestPMD we will consider two typical hardware setups.

In the first setup, shown in Figure 1, the TestPMD application is used with two Ethernet ports connected to an external traffic generator. This allows the user to test throughput and features under different network workloads.

Figure 1. Setup 1 – With an external traffic generator.

In the second setup the TestPMD application is used with two Ethernet ports in a loopback mode. This allows the user to check the reception and transmission functionality of the network device without the need for an external traffic generator.


Figure 2. Setup 2 – TestPMD in loopback mode.

Forwarding Modes

TestPMD has different forwarding modes that can be used within the application.

  • Input/output mode: This mode is generally referred to as IO mode. It is the most common forwarding mode and is the default mode when TestPMD is started. In IO mode a CPU core receives packets from one port (Rx) and transmits them to another port (Tx). The same port can be used for reception and transmission if required.
  • Rx-only mode: In this mode the application polls packets from the Rx ports and frees them without transmitting them. In this way it acts as a packet sink.
  • Tx-only mode: In this mode the application generates 64-byte IP packets and transmits them from the Tx ports. It doesn’t handle the reception of packets and as such acts as a packet source.

These latter two modes (Rx-only and Tx-only) are useful for checking packet reception and transmission separately.

Apart from these three modes there are other forwarding modes that are explained in the TestPMD documentation.
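The forwarding mode is selected at run time with the set fwd command. For example (io, rxonly, and txonly are the three modes described above; mac, which rewrites the MAC addresses while forwarding, is one of the additional modes listed in the TestPMD documentation):

```
testpmd> set fwd io
testpmd> set fwd rxonly
testpmd> set fwd txonly
testpmd> set fwd mac
```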

Compiling and Preparing TestPMD

The following steps are used to compile and set up the TestPMD application:

  1. Compile DPDK from the source directory. This also compiles the TestPMD application:

    $ make config T=x86_64-native-linuxapp-gcc
    $ make
  2. Load the uio kernel module:

    $ sudo modprobe uio
  3. Insert the kernel module igb_uio:

    $ sudo insmod ./build/kmod/igb_uio.ko
  4. Reserve hugepage memory for use by the DPDK TestPMD application. The easiest way to do this is by using the dpdk-setup.sh tool that comes with DPDK (refer to the DPDK Getting Started Guide for more information on this):

    $ sudo ./usertools/dpdk-setup.sh
  5. Bind the network interface ports to igb_uio. For this example we will assume that the ports to be used have PCI addresses of 0000:83:00.1 and 0000:87:00.1:

    $ sudo ./usertools/dpdk-devbind.py -b igb_uio 0000:83:00.1 0000:87:00.1
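Before launching TestPMD it is worth confirming that the modules are loaded and that the hugepage reservation took effect; a quick sanity check (a sketch, not part of the required setup):

```shell
# Check that the uio modules are loaded (prints nothing if they are not).
lsmod | grep uio || echo "uio modules not loaded"

# Check the hugepage reservation; HugePages_Total should be non-zero,
# and HugePages_Free shows how many pages are still available.
grep -i '^HugePages' /proc/meminfo
```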

Running TestPMD

TestPMD can be run in non-interactive mode using a series of command-line parameters. It can also be run in interactive mode, using the -i option, to get a runtime command line. The runtime command line allows dynamic configuration of TestPMD:

$ sudo ./build/app/testpmd -l 12,13,14 -n 4 -- -i

In this example the -l option specifies the logical cores. Core 12 is used to manage the command line and cores 13 and 14 will be used to forward packets. The -n option is used to specify the number of memory channels for the system. The double dash separates the DPDK Environment Abstraction Layer (EAL) commands from the TestPMD application commands, in this case the -i option for interactive mode. When the application is run you will see some output like the following:

$ sudo ./build/app/testpmd -l 12,13,14 -n 4 -- -i

EAL: Detected 40 lcore(s)
EAL: Probing VFIO support...
EAL: PCI device 0000:83:00.0 on NUMA socket 1
EAL:   probe driver: 8086:10fb net_ixgbe
EAL: PCI device 0000:83:00.1 on NUMA socket 1
EAL:   probe driver: 8086:10fb net_ixgbe
EAL: PCI device 0000:87:00.0 on NUMA socket 1
EAL:   probe driver: 8086:10fb net_ixgbe
EAL: PCI device 0000:87:00.1 on NUMA socket 1
EAL:   probe driver: 8086:10fb net_ixgbe
Interactive-mode selected
USER1: create a new mbuf pool <mbuf_pool_socket_0>:
       n=163456, size=2176, socket=0
Configuring Port 0 (socket 0)
Port 0: 00:1B:21:B3:44:51
Configuring Port 1 (socket 0)
Port 1: 00:1B:21:57:EE:71
Checking link statuses...
Port 0 Link Up - speed 10000 Mbps - full-duplex
Port 1 Link Up - speed 10000 Mbps - full-duplex
Done
testpmd>

The testpmd> prompt allows the user to input commands. This is referred to as the runtime command line. For example, we can use this to check the forwarding configuration:

testpmd> show config fwd
io packet forwarding - ports=2 - cores=1 - streams=2 
   - NUMA support disabled, MP over anonymous pages disabled
Logical Core 13 (socket 1) forwards packets on 2 streams:
  RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

This shows that TestPMD is using the default io forwarding mode, as described above. It also shows that core 13 (the second of the cores that we enabled on the command line) will poll packets from port 0, forward them to port 1, and vice versa. The first core on the command line, core 12, is used to handle the runtime command line itself.

To start forwarding, just type in start:

testpmd> start

Then, to check that traffic is being forwarded between the ports, run the following command to show the statistics for all the ports that the application is using:

testpmd> show port stats all
 
################### NIC statistics for port 0  ######################
  RX-packets: 8480274    RX-missed: 0          RX-bytes:  508816632
  RX-errors:  0
  RX-nombuf:  0
  TX-packets: 5763344    TX-errors: 0          TX-bytes:  345800320

  Throughput (since last show)
  Rx-pps:      1488117
  Tx-pps:      1488116
#####################################################################

################### NIC statistics for port 1  ######################
  RX-packets: 5763454    RX-missed: 0          RX-bytes:  345807432
  RX-errors:  0
  RX-nombuf:  0
  TX-packets: 8480551    TX-errors: 0          TX-bytes:  508832612

  Throughput (since last show)
  Rx-pps:      1488085
  Tx-pps:      1488084
 ####################################################################

This output shows the total number of packets handled by the application since the start of packet forwarding, with the number of packets received and transmitted by each port. The traffic rate is displayed in packets per second (pps). In this example, all the traffic received at the ports is being forwarded at 14.88 million pps, the theoretical line rate for 64-byte packets on a 10-Gigabit interface. The line rate is the maximum packet rate for a given packet size and interface speed.
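The 14.88 million pps figure can be reproduced with a quick calculation; a minimal sketch (the 20 bytes of per-frame overhead are the 7-byte preamble, 1-byte start-of-frame delimiter, and 12-byte inter-frame gap that every Ethernet frame occupies on the wire):

```shell
# Theoretical line rate (packets per second) for 64-byte frames on a
# 10 Gbps link: divide the link bit rate by the number of bits each
# frame occupies on the wire, including the 20-byte framing overhead.
link_bps=10000000000
frame_bytes=64
overhead_bytes=20
echo $(( link_bps / ((frame_bytes + overhead_bytes) * 8) ))   # 14880952, i.e. ~14.88 Mpps
```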

To stop forwarding, simply type in stop. This stops the forwarding and displays the accumulated statistics for both ports, as well as an overall summary.

testpmd> stop

Telling cores to stop...
Waiting for lcores to finish...

------------------ Forward statistics for port 0  ----------------------
RX-packets: 136718750      RX-dropped: 0             RX-total: 136718750
TX-packets: 136718750      TX-dropped: 0             TX-total: 136718750
------------------------------------------------------------------------

------------------ Forward statistics for port 1  ----------------------
RX-packets: 136718750      RX-dropped: 0             RX-total: 136718750
TX-packets: 136718750      TX-dropped: 0             TX-total: 136718750
------------------------------------------------------------------------

+++++++++++ Accumulated forward statistics for all ports +++++++++++++++
RX-packets: 273437500      RX-dropped: 0             RX-total: 273437500
TX-packets: 273437500      TX-dropped: 0             TX-total: 273437500
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Using Multiple Cores

For cases where a single core is not enough to forward all of the incoming traffic, multiple cores can be used to handle packets from different ports.

In the previous example, cores 13 and 14 were available for forwarding packets, but only core 13 was used. To enable the other core we can use the following command:

testpmd> set nbcore 2

testpmd> show config fwd

io packet forwarding - ports=2 - cores=2 - streams=2 
   - NUMA support disabled, MP over anonymous pages disabled
Logical Core 13 (socket 1) forwards packets on 1 streams:
  RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
Logical Core 14 (socket 1) forwards packets on 1 streams:
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

Now core 13 will receive packets from port 0 and transmit them on port 1, and core 14 will receive packets from port 1 and transmit them on port 0.

Changing the Forwarding Mode

As described above, TestPMD has different forwarding modes. To change the forwarding mode to Rx-only we can use the set fwd command:

testpmd> set fwd rxonly
testpmd> start

Now if we look at the port statistics we see that only the received packets are shown. Since there are no transmitted packets the Tx statistics remain at 0:

testpmd> show port stats all
################### NIC statistics for port 0  ######################
RX-packets: 524182888  RX-missed: 0          RX-bytes:  31450974816
RX-errors:  0
RX-nombuf:  0
TX-packets: 0          TX-errors: 0          TX-bytes:  0

Throughput (since last show)
Rx-pps:     14880770
Tx-pps:            0
#####################################################################

################### NIC statistics for port 1  ######################
RX-packets: 486924876  RX-missed: 0          RX-bytes:  29215494352
RX-errors:  0
RX-nombuf:  0
TX-packets: 0          TX-errors: 0          TX-bytes:  0

Throughput (since last show)
Rx-pps:     14880788
Tx-pps:            0
#####################################################################
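When collecting these statistics over time, it can be handy to pull the throughput counters out of captured output with a small script; a sketch (the stats.txt file holding the text printed by show port stats all is an assumption, with sample data inlined here so the snippet is self-contained):

```shell
# Extract the Rx-pps value reported for each port from a captured
# "show port stats all" dump.
cat > stats.txt <<'EOF'
Rx-pps:     14880770
Tx-pps:            0
Rx-pps:     14880788
Tx-pps:            0
EOF
awk '/Rx-pps/ {print $2}' stats.txt   # prints 14880770 then 14880788
```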

Getting Help in TestPMD

TestPMD has online help for the commands that are available at run time. The commands are divided into sections, and help for each section can be accessed using the help command.

testpmd> help

Help is available for the following sections:

help control    : Start and stop forwarding.
help display    : Displaying port, stats and config information.
help config     : Configuration information.
help ports      : Configuring ports.
help registers  : Reading and setting port registers.
help filters    : Filters configuration help.
help all        : All of the above sections.

For example, to get help on the commands that display statistics and other information:

testpmd> help display
Display:
--------

show port (info|stats|xstats|fdir|stat_qmap|dcb_tc|cap) (port_id|all)
    Display information for port_id, or all.

show port X rss reta (size) (mask0,mask1,...)
    Display the rss redirection table entry indicated by masks on port X. size is used to indicate the hardware supported reta size

show port rss-hash ipv4|ipv4-frag|ipv4-tcp|ipv4-udp|ipv4-sctp|ipv4-other|ipv6|ipv6-frag|ipv6-tcp|ipv6-udp|ipv6-sctp|ipv6-other|l2-payload|ipv6-ex|ipv6-tcp-ex|ipv6-udp-ex [key]
    Display the RSS hash functions and RSS hash key of port X

clear port (info|stats|xstats|fdir|stat_qmap) (port_id|all)
    Clear information for port_id, or all.

show (rxq|txq) info (port_id) (queue_id)
    Display information for configured RX/TX queue.

show config (rxtx|cores|fwd|txpkts)
    Display the given configuration.

read rxd (port_id) (queue_id) (rxd_id)
    Display an RX descriptor of a port RX queue.

read txd (port_id) (queue_id) (txd_id)
    Display a TX descriptor of a port TX queue.

Conclusion

In this article we have looked at how to compile, set up, and run TestPMD and how to configure it through the runtime command line.

Additional Information

For more information on DPDK see the general DPDK documentation, and for more information on TestPMD itself, see the DPDK TestPMD Application User Guide.

See a video that covers the information in this article at Intel® Network Builders, in the DPDK Training course Testing DPDK Performance and Features with TestPMD. You may have to become a member of Intel Network Builders, which will give you access to a wide range of courses covering DPDK and other Networking topics.

About the Author

Pablo de Lara Guarch is a network software engineer with Intel. His work is primarily focused on development of data plane functions and libraries for DPDK. His contributions include hash algorithm enhancements and new crypto drivers. He also maintains the DPDK crypto subtree.


Comments
Kenvish B.:

Hi Pablo,

I am new to DPDK and need help with a blocker issue related to testpmd.

Issue: testpmd crashes with the logs below:

./testpmd 
traps: testpmd-cc[4954] trap invalid opcode ip:47409e sp:7ffe7466de68 error:0 in testpmd-cc[400000+67f000]
Illegal instruction (core dumped)

I am trying to cross-compile and run DPDK for my target arch, x86_64.

DPDK version : 18.05

Compilation steps:

1) make config CROSS=/opt/wios/gcc-5.3.0-glibc-2.21-4.14.30/x86_64-wios-linux-gnu/bin/x86_64-wios-linux-gnu- T=x86_64-native-linuxapp-gcc RTE_KERNELDIR=/home/symbol/kenvish/wing7_vpp/wing/obj/nuxi-bare_metal-4.14.30/src/kernel/4.14.30-ws-symbol/linux-4.14.30 V=2

2) make install CROSS=/opt/wios/gcc-5.3.0-glibc-2.21-4.14.30/x86_64-wios-linux-gnu/bin/x86_64-wios-linux-gnu- T=x86_64-native-linuxapp-gcc RTE_KERNELDIR=/home/symbol/kenvish/wing7_vpp/wing/obj/nuxi-bare_metal-4.14.30/src/kernel/4.14.30-ws-symbol/linux-4.14.30 V=2

Target Details:

$ uname -a
Linux nx9000-6C883E 4.14.30-ws-symbol #1 SMP PREEMPT Thu Sep 20 15:04:33 IST 2018 x86_64 GNU/Linux

Testpmd binary details:

$ file testpmd
testpmd: ELF 64-bit LSB  executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, not stripped

DPDK init script run before testpmd (all the commands in the script get executed without any error):

#!/bin/sh

/sbin/insmod /lib/modules/4.14.30-ws-symbol/kernel/drivers/net/ethernet/intel/igb/igb.ko.lzma

/sbin/insmod /lib/modules/4.14.30-ws-symbol/kernel/drivers/uio/uio.ko.lzma
/sbin/insmod /lib/modules/4.14.30-ws-symbol/kernel/drivers/uio/uio_pci_generic.ko.lzma
/sbin/insmod /lib/modules/4.14.30-ws-symbol/extra/dpdk/igb_uio.ko.lzma

/share/dpdk/usertools/dpdk-devbind.py --status
/share/dpdk/usertools/dpdk-devbind.py --bind=igb_uio eth0
/share/dpdk/usertools/dpdk-devbind.py --bind=igb_uio eth1
/share/dpdk/usertools/dpdk-devbind.py --status

echo 64 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge

Let me know if some more information is required.

Thanks in advance,

--Kenvish

Mheni M.:

Hey Pablo,

I'm working with Khalid on this project. Thank you for your response; the links you provided are very interesting.

I just want to point out that the hardware we have has the following characteristics:

     -  48 CPUs across 2 sockets
     -  512 GB RAM
     -  10-Gigabit network interfaces

Our goal is to saturate the link from VMs, while keeping the number of VMs as low as we can. For now, the maximum we can send from a VM is 1 Gbit of traffic, and that doesn't seem to be affected by the number of cores we give to the VMs.

We will dig more into this and the links you have sent, and keep you informed of how it goes.

Thank you again, that was really helpful.

- Mheni

 

Pablo D.:

Hi Khalid,

To maximize performance, make sure that you are following the steps in this guide: http://dpdk.org/doc/guides/linux_gsg/nic_perf_intel_platform.html (even though it is focused on a dual-socket system).

Regarding the VM question, I don't think you need more than one thread to saturate the ports, as the ports are only 1G. Anyway, you could also use multiple queues, using the PF, to be able to use multiple cores on the same port. I don't have any more documentation about KVM/Xen (apart from http://dpdk.org/doc/guides/xen/pkt_switch.html), but feel free to ask more questions about this on users@dpdk.org to reach a broader audience.

Pablo

 

KHALID H.:

Hi Pablo, thanks for your response.

I have recently used the NetGate SG2220 DPDK box. It comes with 2 DPDK-capable ports and runs a lightweight CentOS with testpmd installed.

I connected 2 machines to the NetGate SG2220 box; both machines run pktgen. In the configuration of the pktgen machines I have set the destination MAC address.

I have been able to send packets from one machine to the other through the NetGate SG2220 box running testpmd in io forwarding mode, and it works as expected :)

Do you have any suggestions or methods to maximize throughput? Also, I am willing to use multiple VMs running on top of KVM to saturate my links and make the best use of DPDK performance. Are there any documents for a similar experiment using KVM or Xen?

Thanks. 

Pablo D.:

Hi Khalid. I don't understand why you are not receiving packets in TestPMD if you were receiving them when using rxonly. The RX functionality should be the same.

What I do expect is packets not being transmitted (and therefore dropped packets at RX) if you are using VMs. The io mode doesn't change the source or destination MAC addresses; it just forwards the packets as they are (so the --eth-peer parameter is useless here).

I think for your case you would need to use the "macfwd" mode, which sets the source MAC address to the TX port's MAC address and the destination MAC address to the one that you set with --eth-peer.

Pablo

KHALID H.:

Thank you Pablo for your responsiveness. I tried the rxonly mode and it worked as you mentioned.

I am currently trying to test the io forwarding mode.

My setup is composed of 2 VMs; one runs testpmd and the other runs pktgen.

They are connected using VirtualBox bridged network interfaces and can ping each other. This is the same setup I used to test the rxonly forwarding mode, and it worked just fine there, but the io forwarding mode is not working for some reason.

I used the following command to test the io forwarding mode on the testpmd side:

sudo ./testpmd -c 0x3 -n 2 -- -i -a --eth-peer=0,08:00:27:0e:2b:f9 --eth-peer=1,08:00:27:28:f6:70 --forward-mode=io

The MAC addresses used in the --eth-peer options are the pktgen ports' MAC addresses.

Showing the stats, I couldn't see any packets being received at the testpmd level or at the pktgen level.

Am I missing something?

Thank you again.

Shepard S.:

Pablo, this is an excellent "gentle" introduction to using TestPMD. Thanks for sharing, as this kind of explanation is great to help explain DPDK. -Shep

Pablo D.:

Hi Khalid. Thanks for your question. The packets may come from an external traffic generator (as in this case) or from another port in the same machine (using a software traffic generator, such as DPDK pktgen).

KHALID H.:

Hi, I have a question concerning the rxonly mode. We can see that both ports receive packets, but where do these packets originate? Thank you.
