Fortville DPDK PMD Driver Performance

We have the i40e driver on the host and the i40e DPDK PMD in the VM guest, running on a dual-port 25G Fortville NIC.

We are seeing performance degradation when we use both ports on the NIC: we can only do 20G when both ports are in use, but can push 23G when only one port on the NIC is used.

The Intel DPDK documentation says to use the DPDK build config option CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC if we need to drive both ports on a NIC. The problem is that there is no way to enable the 16-byte descriptor on the guest PMD and the host i40e driver simultaneously, so we could not use the 16-byte descriptors.
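For reference, on the guest side this appears to be a compile-time option in DPDK's make-based build system; roughly (a sketch assuming a make-based DPDK release, e.g. 17.x, and the usual Linux x86_64 target):

# set in config/common_base before generating the build config:
CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=y

# then rebuild DPDK and the application:
make config T=x86_64-native-linuxapp-gcc
make

But we could not find any matching setting for the i40e kernel driver on the host side.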

Is there a way to do this?

Hello,

I'm wondering why you don't use a DPDK PMD on the host (with DPDK-enabled OpenVSwitch or similar) instead of running the PMD in the guest?

I think the native kernel driver will likely get in the way if you are using it on the host, unless you were to use SR-IOV to bypass it.
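If you do try the host-PMD route, the rough flow is to unbind the ports from the kernel i40e driver and bind them to a DPDK-compatible driver, for example (the PCI addresses below are just placeholders for your two Fortville ports, and the script lives under tools/ or usertools/ depending on the DPDK version):

modprobe vfio-pci
./usertools/dpdk-devbind.py --status
./usertools/dpdk-devbind.py --bind=vfio-pci 0000:03:00.0 0000:03:00.1

That keeps the native kernel driver out of the datapath entirely.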

Cheers,

Jim Chamings
Intel DRD

We are using i40e SR-IOV. We have the i40e PF driver on the host, with 8 VFs created and passed through to the guest. The guest is running the i40e DPDK PMD.
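For context, the VFs are created on the host in the standard sysfs way, along the lines of the following (the interface names are just examples for the two Fortville ports):

echo 8 > /sys/class/net/enp59s0f0/device/sriov_numvfs
echo 8 > /sys/class/net/enp59s0f1/device/sriov_numvfs
lspci | grep -i "virtual function"    # confirm the VFs are visible on the host

The guest then runs the DPDK PMD on top of those VFs.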
 

Ah, yes that does change things!  I'll have a look and see what I can dig up on this.

 

Thanks,

-J

Would your 8 VFs all be coming from the same PF?  Or are they split between multiple PFs?

-J

 

We currently have the dual-port 25G NIC. We instantiate 8 VFs per port (PF) on both ports. We usually end up using 4 VFs, 2 on each PF. For the performance test we will have 2 VFs per port on both ports:

PF 1: VF1 would carry 24G RX/TX; VF2 only minor heartbeat traffic

PF 2: VF1 would carry 24G RX/TX; VF2 only minor heartbeat traffic

Any luck with getting an answer on this?
