By James E Chamings
Published: 12/27/2016, last updated: 12/27/2016
This tutorial describes how to set up a demonstration or test cluster for Open vSwitch (OvS) and Data Plane Development Kit (DPDK) to run together on OpenStack, using DevStack as the deployment tool and the Neutron ML2/GRE Tunnel plugin.
While the learnings presented here could be used to inform a production deployment with all of these pieces in play, actual production deployments are beyond the scope of this document.
The primary source for most of the details presented here is the documentation provided in the following git repository:
https://github.com/openstack/networking-ovs-dpdk
The doc/source/getstarted/devstack/ directory of this repository contains instructions for installing DevStack + OvS + DPDK on multiple operating systems. This tutorial uses the Ubuntu* instructions provided there and expands upon them to present an end-to-end deployment guide for that operating system. Many of the lessons learned in the creation of this document can be applied to the CentOS* and Fedora* instructions also provided in the repository.
Anyone using this tutorial should, at least, understand how to install and configure Linux*, especially for multi-homed networking across multiple network interfaces.
Knowledge of Open vSwitch and OpenStack is not necessarily required, but would be exceedingly helpful.
Install the hwloc package so that you can inspect each node's hardware topology:
sudo apt-get install -y hwloc
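hwloc provides the lstopo tool, which is useful for confirming which cores, memory, and PCIe devices (including your DPDK NIC) share a NUMA node before you assign hugepages and PMD cores. A quick check might look like this (output varies by machine; a text-only variant, lstopo-no-graphics, may also be available):
# Print the CPU, cache, memory, and PCIe topology of this node
lstopo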
Once your systems are set up with appropriate BIOS configuration, operating systems, and network configurations, you can begin preparing each node for the DevStack installation. Perform these actions on BOTH nodes.
DevStack must be run as a non-root user with passwordless sudo; this guide assumes that user is named 'stack'. The required sudoers entry looks like this:
stack ALL=(ALL) NOPASSWD: ALL
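As a sketch, one way to create that user and grant the access (the sudoers.d filename is arbitrary):
# Create the 'stack' user with a normal shell and home directory
sudo useradd -s /bin/bash -m stack
# Give it passwordless sudo
echo "stack ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/stack
Perform the remaining steps in this section as that user.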
If your environment requires a proxy to reach the Internet, install socat and create a git proxy wrapper so that git can tunnel through it:
sudo apt-get install -y socat
Create the file /home/stack/git-proxy-wrapper with the following contents, substituting your proxy host and port:
#!/bin/sh
_proxy=<PROXY>
_proxyport=<PROXY PORT NUMBER>
exec socat STDIO SOCKS4:$_proxy:$1:$2,socksport=$_proxyport
Make the wrapper executable:
chmod +x git-proxy-wrapper
Then set the environment variable GIT_PROXY_COMMAND=/home/stack/git-proxy-wrapper (assuming the non-root user is 'stack'). Add this export to your ~/.bashrc, alongside your other proxy variables, so that it is always available.
Finally, install git and clone the DevStack repository:
sudo apt-get install -y git
git clone https://github.com/openstack-dev/devstack.git
The systems are now prepared for DevStack installation.
In its default state, DevStack will make assumptions about the OpenStack services to install, and their configuration. We will create a local.conf file to change those assumptions, where pertinent, to ensure use of DPDK and OvS.
If you wish to clone the networking-ovs-dpdk repository (the first link in this article, in the Introduction) and use the sample files included in it, you will find them at doc/source/_downloads/local.conf.*. However, this guide presents pared-down versions of those files, ready-made for this installation.
cd devstack
Create a file named local.conf in this directory with the following contents, filling in the environment-specific values (interface names, IP addresses, and bridge mappings) where indicated:
[[local|localrc]]
#HOST_IP_IFACE=<device name of NIC for public/API network, e.g. 'eth0'>
#Example:
#HOST_IP_IFACE=eth0
HOST_IP_IFACE=
#HOST_IP=<static IPv4 address of public/API network NIC, e.g. '192.168.10.1'>
#Example:
#HOST_IP=192.168.10.1
HOST_IP=
HOST_NAME=$(hostname)
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
ADMIN_PASSWORD=password
SERVICE_PASSWORD=password
HORIZON_PASSWORD=password
SERVICE_TOKEN=tokentoken
enable_plugin networking-ovs-dpdk https://github.com/openstack/networking-ovs-dpdk master
OVS_DPDK_MODE=controller_ovs_dpdk
disable_service n-net
disable_service n-cpu
enable_service neutron
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
DEST=/opt/stack
SCREEN_LOGDIR=$DEST/logs/screen
LOGFILE=${SCREEN_LOGDIR}/xstack.sh.log
LOGDAYS=1
Q_ML2_TENANT_NETWORK_TYPE=gre
ENABLE_TENANT_VLANS=False
ENABLE_TENANT_TUNNELS=True
#OVS_TUNNEL_CIDR_MAPPING=br-<device name of NIC for private network, e.g. 'eth1'>:<CIDR of private NIC, e.g. 192.168.20.1/24>
#Example:
#OVS_TUNNEL_CIDR_MAPPING=br-eth1:192.168.20.1/24
OVS_TUNNEL_CIDR_MAPPING=
Q_ML2_PLUGIN_GRE_TYPE_OPTIONS=(tunnel_id_ranges=400:500)
OVS_NUM_HUGEPAGES=3072
OVS_DATAPATH_TYPE=netdev
OVS_LOG_DIR=/opt/stack/logs
#OVS_BRIDGE_MAPPINGS="default:br-<device name of NIC for private network, e.g. 'eth1'>"
#Example:
#OVS_BRIDGE_MAPPINGS="default:br-eth1"
OVS_BRIDGE_MAPPINGS=
MULTI_HOST=1
[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
With local.conf in place, run the installation as the stack user:
./stack.sh
If the run fails, /opt/stack/logs/screen/xstack.sh.log will likely be the best source of useful debug information.
The setup for the compute node is very similar to that of the controller, but its local.conf looks a little different. The same instructions you used above apply here. Do not attempt the compute node installation until your controller has installed successfully.
cd devstack
Create the local.conf file as above, but use the following text block and values:
[[local|localrc]]
#HOST_IP_IFACE=<device name of NIC for public/API network, e.g. 'eth0'>
#Example:
#HOST_IP_IFACE=eth0
HOST_IP_IFACE=
#HOST_IP=<static IPv4 address of public/API network NIC, e.g. '192.168.10.2'>
#Example:
#HOST_IP=192.168.10.2
HOST_IP=
HOST_NAME=$(hostname)
#SERVICE_HOST=<IP address of public NIC on controller>
#Example:
#SERVICE_HOST=192.168.10.1
SERVICE_HOST=
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOST=$SERVICE_HOST
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
ADMIN_PASSWORD=password
SERVICE_PASSWORD=password
HORIZON_PASSWORD=password
SERVICE_TOKEN=tokentoken
enable_plugin networking-ovs-dpdk https://github.com/openstack/networking-ovs-dpdk master
OVS_DPDK_MODE=compute
disable_all_services
enable_service n-cpu
enable_service q-agt
DEST=/opt/stack
SCREEN_LOGDIR=$DEST/logs/screen
LOGFILE=${SCREEN_LOGDIR}/xstack.sh.log
LOGDAYS=1
Q_ML2_TENANT_NETWORK_TYPE=gre
ENABLE_TENANT_VLANS=False
ENABLE_TENANT_TUNNELS=True
#OVS_TUNNEL_CIDR_MAPPING=br-<device name of NIC for private network, e.g. 'eth1'>:<CIDR of private NIC, e.g. 192.168.20.2/24>
#Example:
#OVS_TUNNEL_CIDR_MAPPING=br-eth1:192.168.20.2/24
OVS_TUNNEL_CIDR_MAPPING=
Q_ML2_PLUGIN_GRE_TYPE_OPTIONS=(tunnel_id_ranges=400:500)
OVS_NUM_HUGEPAGES=3072
MULTI_HOST=1
[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
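As on the controller, start the installation from the devstack directory once this local.conf is in place:
./stack.sh
If it fails, the same log location (/opt/stack/logs/screen/xstack.sh.log) is the place to look.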
At this point, you should have a working OpenStack installation. You can reach the Horizon dashboard at the public IP address of the controller node, on port 80. But before launching an instance, we need to make at least one 'flavor' in the Nova service aware of DPDK, and specifically the use of hugepages, so that it can appropriately use the DPDK-enabled private interface.
cd devstack
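The nova client needs credentials loaded into the environment; with DevStack these usually come from sourcing the generated openrc file (the admin/admin arguments reflect DevStack defaults and are an assumption about your setup):
. openrc admin admin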
nova flavor-key m1.small set hw:mem_page_size=large
We need to make sure that ICMP and TCP traffic can reach any spawned VM.
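One way to do that is to loosen the default security group; a minimal, permissive sketch using the legacy nova CLI of this era (suitable for a test cluster only):
# Allow all ICMP (ping) and all inbound TCP to instances in the 'default' group
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 1 65535 0.0.0.0/0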
Now we are ready to launch a test instance.
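You can launch it from Horizon or from the CLI. As an example from the CLI (the image name and network ID below are placeholders; DevStack normally registers a CirrOS image and a private tenant network, which you can list with 'glance image-list' and 'neutron net-list'):
# Boot a small test VM on the private network, using the hugepage-aware flavor
nova boot --flavor m1.small --image <CirrOS image name> --nic net-id=<private network UUID> test-vm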
Eventually, you should see your instance become available. Make note of the private IP address it has been given.
The VM is up and running with a private IP address assigned to it. You can connect to this private IP, but only if you are in the same network namespace as the virtual router that the VM is connected to.
The following command opens a shell inside the router's network namespace so that you can access the VM:
sudo ip netns exec `ip netns | grep qrouter` /bin/bash
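From the shell that opens inside the namespace you should be able to reach the instance's private address; for example (use the private IP you noted earlier; CirrOS images accept SSH logins as the 'cirros' user):
# Substitute the private IP of your instance
ping -c 3 <private IP>
ssh cirros@<private IP>
# Type 'exit' to leave the namespace shell when you are done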
This completes a demonstration installation of DPDK with Open vSwitch and Neutron in DevStack.
From here, you can examine the configurations that DevStack generated to learn how to apply them in production instances of OpenStack. You will find the service configurations under the /opt/stack/ directory and in their respective locations in /etc (e.g. /etc/nova/nova.conf). Of particular note for our purposes are /etc/neutron/neutron.conf, which defines the use of the ML2 plugin by Neutron, and /etc/neutron/plugins/ml2_conf.ini, which specifies how Open vSwitch is to be configured and used by the Neutron agents.
For reference, here is the sample bridge structure that shows up on a lab system that was used to test this tutorial. This is from the compute node. On this system, ens786f3 is the private/data network interface designation. There are two running VMs; their interfaces can be seen on br-int.
$ sudo ovs-vsctl show
3c8cd45e-9285-45b2-b57f-5c4febd53e3f
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "int-br-ens786f3"
            Interface "int-br-ens786f3"
                type: patch
                options: {peer="phy-br-ens786f3"}
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port "vhufb2e2855-70"
            tag: 1
            Interface "vhufb2e2855-70"
                type: dpdkvhostuser
        Port "vhu53d18db8-b5"
            tag: 1
            Interface "vhu53d18db8-b5"
                type: dpdkvhostuser
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "gre-c0a81401"
            Interface "gre-c0a81401"
                type: gre
                options: {df_default="true", in_key=flow, local_ip="192.168.20.2", out_key=flow, remote_ip="192.168.20.1"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge "br-ens786f3"
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "br-ens786f3"
            Interface "br-ens786f3"
                type: internal
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
        Port "phy-br-ens786f3"
            Interface "phy-br-ens786f3"
                type: patch
                options: {peer="int-br-ens786f3"}
This tutorial set up the ML2/GRE tunnel plugin to Neutron, since it is the most likely plugin to work without additional setup for a specific network buildout. It is also possible to use the ML2/VXLAN plugin or the ML2/VLAN plugin. Examples for each of these plugins are given in the local.conf files in the networking-ovs-dpdk repository mentioned above.
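As a rough sketch, switching these local.conf files from GRE to VXLAN mostly comes down to changing the tenant network type and the type-driver options; consult the repository's sample local.conf files for the authoritative variable set:
Q_ML2_TENANT_NETWORK_TYPE=vxlan
Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS=(vni_ranges=400:500)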
While a deep dive into multi-socket NUMA arrangements is beyond the scope of this tutorial, it is important to understand that CPU pinning, memory locality, and PCIe device placement all interact with DPDK and OvS, and a mismatch can cause silent failures. If you are having connectivity issues, ensure that your CPU cores, memory allocations, and PCIe devices all reside on the same NUMA node.
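One quick way to check a NIC's locality is through sysfs (the PCI address below is a placeholder; substitute your NIC's address, which you can find with lspci):
# Prints the device's NUMA node; -1 means the platform did not report one
cat /sys/bus/pci/devices/0000:05:00.3/numa_node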
OVS_PMD_CORE_MASK is an option that can be added to local.conf to pin DPDK's PMD threads to specific CPU cores. The default value of '0x4' means that the PMD thread is pinned to the third CPU (and its hyperthread sibling, if Hyper-Threading is enabled). If your system spans multiple NUMA nodes, work out a bitmask that assigns one PMD thread/CPU per node. You will see these CPUs spike to 100% utilization once DPDK is enabled, as they begin polling.
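For example, on a hypothetical two-socket machine where core 2 sits on NUMA node 0 and core 16 on NUMA node 1 (check your own layout with lstopo), the local.conf entry might be:
# bit 2 (0x4) + bit 16 (0x10000): one PMD core per NUMA node
OVS_PMD_CORE_MASK=0x10004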
You can find other interesting and useful OVS DPDK settings and their default values in devstack/settings in the networking-ovs-dpdk repository.
Jim Chamings is a Sr. Software Engineer at Intel Corporation, who focuses on enabling cloud technology for Intel’s Developer Relations Division. He’d be happy to hear from you about this article at: jim.chamings@intel.com.