How to Set Up Intel® Ethernet Flow Director


Intel® Ethernet Flow Director (Intel® Ethernet FD) directs Ethernet packets to the core where the packet consuming process, application, container, or microservice is running. It is a step beyond receive side scaling (RSS) in which packets are sent to different cores for interrupt processing, and then subsequently forwarded to cores on which the consuming process is running.

Intel Ethernet FD supports advanced filters that direct received packets to different queues, and enables tight control over flows in the platform. It matches flows to the CPU cores where the processing application is running (flow affinity), and supports multiple parameters for flexible flow classification and load balancing. When operating in Application Targeting Routing (ATR) mode, Intel Ethernet FD is essentially a hardware-offloaded version of the Receive Flow Steering feature available on Linux* systems; when running in this mode, Receive Packet Steering and Receive Flow Steering are disabled.

It provides the most benefit in bare-metal Linux usage (that is, without virtual machines (VMs)) where packets are small and traffic is heavy. And because packet processing is offloaded to the network interface card (NIC), Intel Ethernet FD can also be used to help avert denial-of-service attacks.

Supported Devices

Intel Ethernet FD is supported on devices that use the ixgbe driver, including the following:

  • Intel® Ethernet Converged Network Adapter X520
  • Intel® Ethernet Converged Network Adapter X540
  • Intel® Ethernet Controller 10 Gigabit 82599 family

It is also supported on devices that use the i40e driver:

  • Intel® Ethernet Controller X710 family
  • Intel® Ethernet Controller XL710 family

The Data Plane Development Kit (DPDK) includes support for Intel Ethernet FD on the devices listed above. See the DPDK documentation for how to use DPDK and testpmd with Intel Ethernet FD.

To determine whether your device supports Intel Ethernet FD, use the ethtool command with the --show-features or -k parameter on the network interface you want to use:

# ethtool --show-features <interface name> | grep ntuple

Screenshot of using the ethtool command to detect Intel Ethernet FD support.

If the ntuple-filters feature is followed by off or on, Intel Ethernet FD is supported on your Ethernet adapter. However, if the ntuple-filters feature is followed by off [fixed], Intel Ethernet FD is not supported on your network interface.
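This check can be scripted. The sketch below defines fd_supported, a hypothetical helper (not part of ethtool or any driver) that classifies a single ntuple-filters feature line using the rule above:

```shell
#!/bin/sh
# Hypothetical helper: decide whether Intel Ethernet FD (ntuple filters)
# is available based on one line of `ethtool --show-features` output.
#   "off" or "on"  -> supported (the feature can be toggled)
#   "off [fixed]"  -> not supported (the feature is fixed off)
fd_supported() {
    case "$1" in
        *"ntuple-filters:"*"[fixed]"*)                  echo "not supported" ;;
        *"ntuple-filters: on"*|*"ntuple-filters: off"*) echo "supported"     ;;
        *)                                              echo "unknown"       ;;
    esac
}

# Example against literal feature lines:
fd_supported "ntuple-filters: off [fixed]"   # not supported
fd_supported "ntuple-filters: on"            # supported
```

On a live system you would feed it the real line, for example: `fd_supported "$(ethtool --show-features eth0 | grep ntuple)"`.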

Enabling Intel® Ethernet Flow Director

Driver Parameters for Devices Supported by the ixgbe Driver

On devices that are supported by the ixgbe driver, there are two parameters that can be passed in when the driver is loaded into the kernel that affect Intel Ethernet FD:

  • FdirPballoc
  • AtrSampleRate 


FdirPballoc

The FdirPballoc parameter specifies the packet buffer size allocated to Intel Ethernet FD. The valid range is 1–3, where 1 specifies a 64k packet buffer, 2 specifies a 128k packet buffer, and 3 specifies a 256k packet buffer. If this parameter is not explicitly passed to the driver when it is loaded into the kernel, the default value is 1 (a 64k packet buffer).
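As a quick sketch of the value-to-size mapping, the hypothetical helper below (not part of the driver) translates an FdirPballoc value into the buffer size it selects:

```shell
#!/bin/sh
# Hypothetical helper mapping an FdirPballoc value (valid range 1-3)
# to the Intel Ethernet FD packet buffer size it selects.
fdir_pballoc_size() {
    case "$1" in
        1) echo "64k"  ;;   # default when the parameter is omitted
        2) echo "128k" ;;
        3) echo "256k" ;;
        *) echo "invalid (valid range is 1-3)" ;;
    esac
}

fdir_pballoc_size 3    # 256k
```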


AtrSampleRate

The AtrSampleRate parameter indicates how many Tx packets are skipped before a sample is taken. The valid range is 0–255. If the parameter is not passed to the driver when it is loaded into the kernel, the default value is 20, meaning that every 20th packet is sampled to determine whether a new flow should be created. Passing a value of 0 disables ATR mode, and no samples are taken from the Tx queues.

The above driver parameters are not supported on devices that use the i40e driver.

To enable these parameters, first unload the ixgbe module from the kernel. Note that if you are connected to the system over ssh, this may disconnect your session:

# rmmod ixgbe

Then re-load the ixgbe driver into the kernel with the desired parameters listed above:

# modprobe ixgbe FdirPballoc=3,2,2,3 AtrSampleRate=31,63,127,255

Note that, in this example, each parameter has four values. This is because on my test system I have two network adapters that use the ixgbe driver (an Intel Ethernet Controller 10 Gigabit 82599 and an Intel® Ethernet Controller 10 Gigabit X540), each of which has two ports. The parameters are applied in PCI Bus/Device/Function (BDF) order. To determine the PCI BDF order on your system, use the following command:

# lshw -c network -businfo

Screenshot of lshw command showing PCI Bus, Device Function information for NICs

Based on this system configuration, using the modprobe command above, the Intel Ethernet Controller 10 Gigabit X540-AT2 port at PCI address 00:03.0 is assigned FdirPballoc and AtrSampleRate values of 3 and 31, respectively, and the Intel Ethernet Controller 10 Gigabit 82599 port at PCI address 81:00.1 is assigned FdirPballoc and AtrSampleRate values of 3 and 255, respectively.
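The BDF ordering itself can be derived by sorting the lshw output. The sketch below uses a made-up lshw sample in a here-document; on a live system you would pipe the real `lshw -c network -businfo` output into the sort_bdf helper (a hypothetical name) instead:

```shell
#!/bin/sh
# Sketch: list network ports in PCI Bus/Device/Function order -- the
# order in which per-port ixgbe module parameters are applied.
# sort_bdf is a hypothetical helper, and the lshw lines are invented.
sort_bdf() {
    awk '/^pci@/ { print $1, $2 }' | sort
}

# Prints ports in BDF order: 00:03.0, 00:03.1, 81:00.0, 81:00.1
sort_bdf <<'EOF'
pci@0000:81:00.1 enp129s0f1 network 82599ES 10-Gigabit SFI/SFP+
pci@0000:00:03.0 enp3s0f0   network Ethernet Controller 10-Gigabit X540-AT2
pci@0000:81:00.0 enp129s0f0 network 82599ES 10-Gigabit SFI/SFP+
pci@0000:00:03.1 enp3s0f1   network Ethernet Controller 10-Gigabit X540-AT2
EOF
```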

Once you have determined that your Intel-branded server network adapter supports Intel Ethernet FD and you have loaded the desired parameters into the driver (on supported models), execute the following command to enable Intel Ethernet FD:

# ethtool --features enp4s0f0 ntuple on

Screenshot of using ethtool command to turn Intel Flow Director on

Because the commands below only indicate which Rx queue a matched packet should be sent to, ideally an additional step should be taken to pin both Rx queues and the process, application, or container that is consuming the network traffic to the same CPU. Pinning an application/process/container to a CPU is beyond the scope of this document, but it can be done using the taskset command. Pinning IRQs to a CPU can be done using the set_irq_affinity script that is included with the freely available sources of the i40e and ixgbe drivers. See Intel Support: Drivers and Software for the latest versions of these drivers. See also the IRQ Affinity section in this tuning guide for how to set IRQ affinity.
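As a small illustration of the pinning step, the sketch below computes the hexadecimal bitmask that the /proc/irq/&lt;N&gt;/smp_affinity interface expects for a given CPU number, and shows the matching taskset invocation. cpu_mask is a hypothetical helper, and the IRQ number and process name are invented examples:

```shell
#!/bin/sh
# Sketch, assuming single-CPU pinning: CPU n corresponds to bit n of
# the smp_affinity bitmask, so the mask for CPU n is (1 << n) in hex.
cpu_mask() {
    printf '%x\n' $(( 1 << $1 ))
}

cpu_mask 0    # 1
cpu_mask 4    # 10

# Illustrative use (requires root; IRQ 55 and the process are made up):
#   echo "$(cpu_mask 4)" > /proc/irq/55/smp_affinity
#   taskset -c 4 ./my_packet_consumer
```

In practice the set_irq_affinity script shipped with the driver sources automates this for all of a NIC's queues.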

Using Intel Ethernet Flow Director

Intel Ethernet FD can run in one of two modes: externally programmed (EP) mode, and ATR mode. Once Intel Ethernet FD is enabled as shown above, ATR mode is the default mode, provided that the driver is in multiple Tx queue mode. When running in EP mode, the user or management/orchestration software can manually set how flows are handled. In either mode, fields are intelligently selected from the packets in the Rx queues to index into the Perfect-Match filter table. For more information on how Intel Ethernet FD works, see this whitepaper.

Application Targeting Routing

In ATR mode, Intel Ethernet FD uses fields from the outgoing packets in the Tx queues to populate the 8K-entry Perfect-Match filter table. The fields that are selected depend on the packet type; for example, fields used to filter TCP traffic differ from those used to filter user datagram protocol (UDP) traffic. Intel Ethernet FD then uses the Perfect-Match filter table to intelligently route incoming traffic to the Rx queues.

To disable ATR mode and switch to EP mode, simply use the ethtool command shown under Adding Filters to manually add a filter, and the driver will automatically enter EP mode. To automatically re-enable ATR mode, use the ethtool command under Removing Filters until the Perfect-Match filter table is empty.

Externally Programmed Mode

When Intel Ethernet FD runs in EP mode, flows are manually entered by an administrator or by management/orchestration software (for example, OpenFlow*). As mentioned above, once enabled, Intel Ethernet FD automatically enters EP mode when a flow is manually entered using the ethtool command listed under Adding Filters.

Adding Filters

The following commands illustrate how to add flows/filters to Intel Ethernet FD using the -U, -N, or --config-ntuple switch to ethtool.

To specify that all IPv4 TCP traffic from a given source IP to a given destination IP be placed in queue 4, issue this command (substitute your interface name and addresses for the placeholders):

# ethtool --config-ntuple <interface name> flow-type tcp4 src-ip <src IP> dst-ip <dest IP> action 4

Note: Without the ‘loc’ parameter, the rule is placed at position 1 of the Perfect-Match filter table. If a rule is already in that position, it is overwritten.

Forwards to queue 2 all IPv4 TCP traffic from <src IP> port 2000 that is going to <dest IP> port 2001, placing the filter at position 33 of the Perfect-Match filter table (and overwriting any rule currently in that position):

# ethtool --config-ntuple <interface name> flow-type tcp4 src-ip <src IP> dst-ip <dest IP> src-port 2000 dst-port 2001 action 2 loc 33

Drops all IPv4 UDP packets from <src IP> (an action of -1 drops the packet):

# ethtool --config-ntuple <interface name> flow-type udp4 src-ip <src IP> action -1

Note: The VLAN field is not a supported filter with the i40e driver (Intel Ethernet Controller XL710 and Intel Ethernet Controller X710 NICs).

For more information and options, see the ethtool man page documentation on the -U, -N, or --config-ntuple option.

Note: The Intel Ethernet Controller XL710 and the Intel Ethernet Controller X710, of the Intel® Ethernet Adapter family, provide extended cloud filter flow support for more complex cloud networks. For more information on this feature, please see the Cloud Filter Support section in this ReadMe document, or in the ReadMe document in the root folder of the i40e driver sources.

Removing Filters

In EP mode, to remove a filter from the Perfect-Match filter table, execute the following command against the appropriate interface, where N is the numeric position in the table of the rule you want to delete:

# ethtool --config-ntuple <interface name> delete N

Listing Filters

To list the filters that have been manually entered in EP mode, execute the following command against the desired interface:

# ethtool --show-ntuple <interface name>
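The listing can also be parsed to drive cleanup, for example to collect every filter position so each can then be removed with the delete command shown above. The sketch below uses invented --show-ntuple output in a here-document, and filter_locations is a hypothetical helper name:

```shell
#!/bin/sh
# Sketch: extract the Perfect-Match table positions of installed
# filters from `ethtool --show-ntuple` style output. Each position
# could then be removed with:
#   ethtool --config-ntuple <interface name> delete N
filter_locations() {
    awk '/^Filter:/ { print $2 }'
}

# Made-up sample output; pipe the real command on a live system.
filter_locations <<'EOF'
Filter: 1
        Rule Type: TCP over IPv4
        Action: Direct to queue 4
Filter: 33
        Rule Type: TCP over IPv4
        Action: Direct to queue 2
EOF
```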

Disabling Intel Ethernet Flow Director

Disabling Intel Ethernet FD is done with this command:

# ethtool --features enp4s0f0 ntuple off

This flushes all entries from the Perfect-Match filter table.


Summary

Intel Ethernet FD directs Ethernet packets to the core where the packet consuming process, application, container, or microservice is running. This functionality is a step beyond RSS, in which packets are simply sent to different cores for interrupt processing, and then subsequently forwarded to cores on which the consuming process is running. It can be explicitly programmed by administrators and control plane management software, or it can intelligently sample outgoing traffic and automatically create Perfect-Match filters for incoming packets. When operating in automatic ATR mode, Intel Ethernet FD is essentially the hardware offloaded version of Receive Flow Steering available on Linux systems.

Intel Ethernet FD can provide additional performance benefit, particularly in workloads where packets are small and traffic is heavy (for example, in Telco environments). And because it can be used to filter and drop packets at the network interface card (NIC), it could be used to avert denial-of-service attacks.



Additional Resources

Intel® 82599 10 GbE Controller Datasheet

ixgbe Linux* Base Driver for Intel(R) Ethernet Network Connection

i40e Linux* Base Driver for the Intel(R) XL710 Ethernet Controller Family

Linux* Base Driver for the Intel(R) Ethernet 10 Gigabit PCI Express Family of

Flow Bifurcation How-to Guide

SR-IOV Configuration Guide - Intel® Ethernet CNA X710 & XL710 on Red Hat* Enterprise Linux 7*

Creating Virtual Functions Using SR-IOV

Also, view the ReadMe file found in the root directory of both the i40e and ixgbe driver sources.



Eugene S.:

I have a problem using Ethernet Flow Director on an i350 NIC with a recent kernel (under Ubuntu) and a recent driver. I always get this error: 

ethtool -N enp1s0f0 flow-type udp4 src-ip action -1
rmgr: Cannot insert RX class rule: Invalid argument

Also, your example above is missing <interface name>; it should be: ethtool --config-ntuple <interface name> flow-type udp4 src-ip action -1

More details from my system:

ifconfig enp1s0f0

enp1s0f0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::ec4:7aff:fe13:2834  prefixlen 64  scopeid 0x20<link>
        ether 0c:c4:7a:13:28:34  txqueuelen 1000  (Ethernet)
        RX packets 67  bytes 22110 (22.1 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 146  bytes 29380 (29.3 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device memory 0xf7300000-f73fffff  

root@ubuntu:~# ethtool -K enp1s0f0 ntuple on

root@ubuntu:~# ethtool -k enp1s0f0 
Features for enp1s0f0:
rx-checksumming: on
tx-checksumming: on
        tx-checksum-ipv4: off [fixed]
        tx-checksum-ip-generic: on
        tx-checksum-ipv6: off [fixed]
        tx-checksum-fcoe-crc: off [fixed]
        tx-checksum-sctp: on
scatter-gather: on
        tx-scatter-gather: on
        tx-scatter-gather-fraglist: off [fixed]
tcp-segmentation-offload: on
        tx-tcp-segmentation: on
        tx-tcp-ecn-segmentation: off [fixed]
        tx-tcp-mangleid-segmentation: off
        tx-tcp6-segmentation: on
udp-fragmentation-offload: off
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off [fixed]
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: on
receive-hashing: on
highdma: on [fixed]
rx-vlan-filter: on [fixed]
vlan-challenged: off [fixed]
tx-lockless: off [fixed]
netns-local: off [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: on
tx-gre-csum-segmentation: on
tx-ipxip4-segmentation: on
tx-ipxip6-segmentation: on
tx-udp_tnl-segmentation: on
tx-udp_tnl-csum-segmentation: on
tx-gso-partial: on
tx-sctp-segmentation: off [fixed]
tx-esp-segmentation: off [fixed]
tx-udp-segmentation: off [fixed]
fcoe-mtu: off [fixed]
tx-nocache-copy: off
loopback: off [fixed]
rx-fcs: off [fixed]
rx-all: off
tx-vlan-stag-hw-insert: off [fixed]
rx-vlan-stag-hw-parse: off [fixed]
rx-vlan-stag-filter: off [fixed]
l2-fwd-offload: off [fixed]
hw-tc-offload: on
esp-hw-offload: off [fixed]
esp-tx-csum-hw-offload: off [fixed]
rx-udp_tunnel-port-offload: off [fixed]
tls-hw-tx-offload: off [fixed]
tls-hw-rx-offload: off [fixed]
rx-gro-hw: off [fixed]
tls-hw-record: off [fixed]
# ethtool -i enp1s0f0
driver: igb
version: 5.4.0-k
firmware-version: 1.59, 0x800008f8
bus-info: 0000:01:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes

# modinfo igb
filename:       /lib/modules/4.19.0-999-generic/updates/drivers/net/ethernet/intel/igb/igb.ko
license:        GPL
description:    Intel(R) Gigabit Ethernet Linux Driver
author:         Intel Corporation, <>
srcversion:     EA776187819037095B690C0
alias:          pci:v00008086d000010D6sv*sd*bc*sc*i*
alias:          pci:v00008086d000010A9sv*sd*bc*sc*i*
alias:          pci:v00008086d000010A7sv*sd*bc*sc*i*
alias:          pci:v00008086d000010E8sv*sd*bc*sc*i*
alias:          pci:v00008086d00001526sv*sd*bc*sc*i*
alias:          pci:v00008086d0000150Dsv*sd*bc*sc*i*
alias:          pci:v00008086d000010E7sv*sd*bc*sc*i*
alias:          pci:v00008086d000010E6sv*sd*bc*sc*i*
alias:          pci:v00008086d00001518sv*sd*bc*sc*i*
alias:          pci:v00008086d0000150Asv*sd*bc*sc*i*
alias:          pci:v00008086d000010C9sv*sd*bc*sc*i*
alias:          pci:v00008086d00000440sv*sd*bc*sc*i*
alias:          pci:v00008086d0000043Csv*sd*bc*sc*i*
alias:          pci:v00008086d0000043Asv*sd*bc*sc*i*
alias:          pci:v00008086d00000438sv*sd*bc*sc*i*
alias:          pci:v00008086d00001516sv*sd*bc*sc*i*
alias:          pci:v00008086d00001511sv*sd*bc*sc*i*
alias:          pci:v00008086d00001510sv*sd*bc*sc*i*
alias:          pci:v00008086d00001527sv*sd*bc*sc*i*
alias:          pci:v00008086d0000150Fsv*sd*bc*sc*i*
alias:          pci:v00008086d0000150Esv*sd*bc*sc*i*
alias:          pci:v00008086d00001524sv*sd*bc*sc*i*
alias:          pci:v00008086d00001523sv*sd*bc*sc*i*
alias:          pci:v00008086d00001522sv*sd*bc*sc*i*
alias:          pci:v00008086d00001521sv*sd*bc*sc*i*
alias:          pci:v00008086d00001539sv*sd*bc*sc*i*
alias:          pci:v00008086d0000157Csv*sd*bc*sc*i*
alias:          pci:v00008086d0000157Bsv*sd*bc*sc*i*
alias:          pci:v00008086d00001538sv*sd*bc*sc*i*
alias:          pci:v00008086d00001537sv*sd*bc*sc*i*
alias:          pci:v00008086d00001536sv*sd*bc*sc*i*
alias:          pci:v00008086d00001533sv*sd*bc*sc*i*
alias:          pci:v00008086d00001F45sv*sd*bc*sc*i*
alias:          pci:v00008086d00001F41sv*sd*bc*sc*i*
alias:          pci:v00008086d00001F40sv*sd*bc*sc*i*
depends:        dca
retpoline:      Y
name:           igb
vermagic:       4.19.0-999-generic SMP mod_unload 
parm:           InterruptThrottleRate:Maximum interrupts per second, per vector, (max 100000), default 3=adaptive (array of int)
parm:           IntMode:Change Interrupt Mode (0=Legacy, 1=MSI, 2=MSI-X), default 2 (array of int)
parm:           Node:set the starting node to allocate memory on, default -1 (array of int)
parm:           LLIPort:Low Latency Interrupt TCP Port (0-65535), default 0=off (array of int)
parm:           LLIPush:Low Latency Interrupt on TCP Push flag (0,1), default 0=off (array of int)
parm:           LLISize:Low Latency Interrupt on Packet Size (0-1500), default 0=off (array of int)
parm:           RSS:Number of Receive-Side Scaling Descriptor Queues (0-8), default 1, 0=number of cpus (array of int)
parm:           VMDQ:Number of Virtual Machine Device Queues: 0-1 = disable, 2-8 enable, default 0 (array of int)
parm:           max_vfs:Number of Virtual Functions: 0 = disable, 1-7 enable, default 0 (array of int)
parm:           MDD:Malicious Driver Detection (0/1), default 1 = enabled. Only available when max_vfs is greater than 0 (array of int)
parm:           QueuePairs:Enable Tx/Rx queue pairs for interrupt handling (0,1), default 1=on (array of int)
parm:           EEE:Enable/disable on parts that support the feature (array of int)
parm:           DMAC:Disable or set latency for DMA Coalescing ((0=off, 1000-10000(msec), 250, 500 (usec)) (array of int)
parm:           LRO:Large Receive Offload (0,1), default 0=off (array of int)
parm:           debug:Debug level (0=none, ..., 16=all) (int)





M, KANNAN:

Is it possible to drop all packets except those with TCP destination port 80 (that is, drop everything other than HTTP traffic)?

I am able to drop only the port 80 traffic. 
