How to ping traffic via DPDK bound interface


./dpdk_nic_bind.py --status

Network devices using DPDK-compatible driver
============================================
0000:00:09.0 'Virtio network device' drv=igb_uio unused=vfio-pci
0000:00:0a.0 'Virtio network device' drv=igb_uio unused=vfio-pci

Network devices using kernel driver
===================================
0000:00:03.0 'Virtio network device' if=ens3 drv=virtio-pci unused=igb_uio,vfio-pci *Active*
0000:00:07.0 'Virtio network device' if=ens7 drv=virtio-pci unused=igb_uio,vfio-pci
0000:00:08.0 'Virtio network device' if=ens8 drv=virtio-pci unused=igb_uio,vfio-pci

Other network devices
=====================
<none>

 

I have two Ethernet interfaces bound to DPDK. How can I find out whether these interfaces are connected to the external world? How are they expected to be bound to the physical interface?

 

I'm looking for something equivalent to "ping -I <interface> <destination id>".


Once you unbind those NICs from the kernel driver and bind them to the igb_uio driver, they no longer have access to the Linux kernel stack tools. They still have a MAC address that you can use to address them at layer 2, but they don't have an IP address assigned to them at layer 3.

Here are a few links that might be helpful:

http://dpdk.org/doc/api/: Documentation on how to roll your own layer 3 (and upward) functionality. Because you no longer have access to the nifty Linux networking-stack tools, you have to roll your own. ;-)

http://dpdk.org/doc/guides/prog_guide/kernel_nic_interface.html: This is probably more what you are looking for. It allows you to use Linux networking tools like ifconfig and ethtool on DPDK-enabled interfaces (a rough sketch of the KNI data path follows below).
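As a rough illustration of the KNI data path, here is a minimal sketch. The names setup_kni, forward_loop, "vEth0", port_id and mbuf_pool are placeholders, the port still has to be configured and started with the usual ethdev calls, the rte_kni kernel module must be loaded, and KNI has been removed from the newest DPDK releases, so treat this as an outline rather than a recipe:

#include <stdio.h>
#include <string.h>
#include <rte_ethdev.h>
#include <rte_kni.h>

/* Create one KNI interface ("vEth0") backed by a DPDK port. Once it exists,
 * ifconfig/ethtool/ping on vEth0 work through the kernel as usual. */
static struct rte_kni *
setup_kni(uint16_t port_id, struct rte_mempool *mbuf_pool)
{
    struct rte_kni_conf conf;

    rte_kni_init(1);                          /* at most one KNI interface */
    memset(&conf, 0, sizeof(conf));
    snprintf(conf.name, RTE_KNI_NAMESIZE, "vEth0");
    conf.group_id = port_id;
    conf.mbuf_size = 2048;
    return rte_kni_alloc(mbuf_pool, &conf, NULL);
}

/* Shuttle packets between the DPDK port and the kernel side of the KNI.
 * Error handling and freeing of undelivered mbufs are omitted. */
static void
forward_loop(uint16_t port_id, struct rte_kni *kni)
{
    struct rte_mbuf *pkts[32];
    unsigned int n;

    for (;;) {
        n = rte_eth_rx_burst(port_id, 0, pkts, 32);   /* NIC -> kernel */
        if (n > 0)
            rte_kni_tx_burst(kni, pkts, n);

        n = rte_kni_rx_burst(kni, pkts, 32);          /* kernel -> NIC */
        if (n > 0)
            rte_eth_tx_burst(port_id, 0, pkts, n);

        rte_kni_handle_request(kni);   /* service ifconfig up/down, MTU changes */
    }
}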

If you want the two DPDK-enabled interfaces to talk to each other, you can use tools like pktgen and testpmd that address the interfaces by their Ethernet MAC addresses. Once those interfaces have been bound to the igb_uio driver, though, it can be tricky to get at the MAC addresses. (To solve that problem, I've found it easiest to write down the MAC addresses of the interfaces BEFORE you unbind them from the kernel driver and bind them to igb_uio. There is probably a way to do it via the DPDK tools after you've bound the NICs to igb_uio, but I haven't figured that out yet...)
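For what it's worth, the ethdev API can report the MAC address even after a NIC has been bound to igb_uio. A minimal sketch, using the type names from recent DPDK releases (older ones spell it ether_addr):

#include <stdio.h>
#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_ether.h>

/* Print the MAC address of every port that EAL has probed,
 * even though the ports are bound to igb_uio. */
int main(int argc, char **argv)
{
    uint16_t port_id;

    if (rte_eal_init(argc, argv) < 0)
        return -1;

    RTE_ETH_FOREACH_DEV(port_id) {
        struct rte_ether_addr mac;

        rte_eth_macaddr_get(port_id, &mac);
        printf("port %u: %02x:%02x:%02x:%02x:%02x:%02x\n", port_id,
               mac.addr_bytes[0], mac.addr_bytes[1], mac.addr_bytes[2],
               mac.addr_bytes[3], mac.addr_bytes[4], mac.addr_bytes[5]);
    }
    return 0;
}

Alternatively, running testpmd and issuing "show port info all" on its command line prints the MAC address and link status of each bound port, which also speaks to the original question of whether the ports have a live link to the outside world.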

Hope this helps!

\\"Perhaps travel cannot prevent bigotry, but by demonstrating that all peoples cry, laugh, eat, worry, and die, it can introduce the idea that if we try and understand each other, we may even become friends.\\" Maya Angelou

We are using DPDK with Mellanox NICs, and in the end, instead of using bifurcation or KNI, we just manually respond to PING and ARP in our application.

The packet formats of PING and ARP are very simple, and it's easy enough to respond appropriately.
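To illustrate how little is involved, here is a rough sketch of turning a received ARP request into a reply in place. The helper name and the my_mac/my_ip_be parameters are placeholders for whatever address the application claims, and the struct and field names follow recent DPDK releases (older ones use ether_hdr/arp_hdr and d_addr/s_addr):

#include <rte_arp.h>
#include <rte_byteorder.h>
#include <rte_ether.h>
#include <rte_mbuf.h>

/* Rewrite an ARP request addressed to my_ip_be (network byte order) into an
 * ARP reply in the same mbuf; the caller transmits it back out the port. */
static int
arp_request_to_reply(struct rte_mbuf *m, const struct rte_ether_addr *my_mac,
                     uint32_t my_ip_be)
{
    struct rte_ether_hdr *eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
    struct rte_arp_hdr *arp;

    if (eth->ether_type != rte_cpu_to_be_16(RTE_ETHER_TYPE_ARP))
        return -1;

    arp = (struct rte_arp_hdr *)(eth + 1);
    if (arp->arp_opcode != rte_cpu_to_be_16(RTE_ARP_OP_REQUEST) ||
        arp->arp_data.arp_tip != my_ip_be)
        return -1;

    /* Ethernet header: reply goes back to the requester, from our MAC. */
    rte_ether_addr_copy(&eth->src_addr, &eth->dst_addr);
    rte_ether_addr_copy(my_mac, &eth->src_addr);

    /* ARP payload: swap sender/target and insert our MAC/IP as the sender. */
    arp->arp_opcode = rte_cpu_to_be_16(RTE_ARP_OP_REPLY);
    arp->arp_data.arp_tha = arp->arp_data.arp_sha;
    arp->arp_data.arp_tip = arp->arp_data.arp_sip;
    rte_ether_addr_copy(my_mac, &arp->arp_data.arp_sha);
    arp->arp_data.arp_sip = my_ip_be;

    return 0;
}

ICMP echo works much the same way: swap the MAC and IP addresses, change the ICMP type from echo request to echo reply, and adjust the checksum before sending the mbuf back out.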

This area is one of the things that confuses me about DPDK: the interaction with other applications and the kernel.

For instance, when our DPDK application is running, we have found that other programs, such as ptpdv2, can successfully receive multicast packets over the ports currently being used for DPDK, but can't receive any unicast traffic.

I also found a presentation that said the Mellanox PMDs bifurcate automatically and all kernel sockets in other apps should work fine, but if we don't manually handle PING, ARP and IGMP, then these services fail.

Hi Richard,

I found some of the same content that you must have regarding the Mellanox PMD and how it automatically bifurcates non-app-directed traffic. The item I'm looking at even specifically calls out ping and ARP as protocols that are passed through to the kernel network stack automatically.

I have two suggestions:

1) Could there be asymmetric routing or filtering issues in your kernel netfilter that prevent the return path of the ping/ARP reply? Perhaps some judicious sniffing might find something, although obviously, if your app is receiving these packets, they probably aren't arriving at the kernel at all anyhow.

2) I would definitely suggest directing your question to both Mellanox directly and to the DPDK usage mailing list.  You can register here:  https://dpdk.org/ml/listinfo/users and search the archives here: http://dpdk.org/ml

Cheers,

Jim Chamings, Intel DRD Datacenter Scale Engineering

 

Hi James,

Yes, I suspect it's something to do with the Flow Director being used to filter traffic and somehow catching other packets that we aren't interested in.

I will give the mailing list a try.

thanks,

Richard.

Before asking the list, I did find this on the DPDK site, which explains it.

This capability allows the PMD to coexist with kernel network interfaces which remain functional, although they stop receiving unicast packets as long as they share the same MAC address. 

from http://dpdk.org/doc/guides/nics/mlx5.html 
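For completeness, the bifurcated model can also be narrowed with flow isolation, so the PMD only claims traffic matching explicit rules and everything else, unicast included, keeps flowing to the kernel netdev. A rough sketch follows; the function name and the UDP destination port 5000 match are just placeholders for whatever traffic the application actually wants, and whether and when a given PMD supports isolation is device specific, so check the mlx5 guide and the rte_flow documentation:

#include <rte_byteorder.h>
#include <rte_ethdev.h>
#include <rte_flow.h>

/* Enable isolated mode and steer only UDP dst port 5000 to queue 0 of the
 * DPDK port; non-matching traffic stays with the kernel interface. */
static struct rte_flow *
claim_only_udp_5000(uint16_t port_id)
{
    struct rte_flow_error err;
    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_item_udp udp_spec = { .hdr.dst_port = RTE_BE16(5000) };
    struct rte_flow_item_udp udp_mask = { .hdr.dst_port = RTE_BE16(0xffff) };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
        { .type = RTE_FLOW_ITEM_TYPE_UDP, .spec = &udp_spec, .mask = &udp_mask },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action_queue queue = { .index = 0 };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    /* Isolated mode has to be requested before the port is started. */
    if (rte_flow_isolate(port_id, 1, &err) != 0)
        return NULL;

    return rte_flow_create(port_id, &attr, pattern, actions, &err);
}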
