
Fwd: DPDK+OVS with OpenStack

On Sat, 2020-10-10 at 18:25 -0400, Mark Wittling wrote:
> Looking for someone who knows OpenStack with OpenVSwitch, and in
> addition
> to that, DPDK with OpenStack and OVS. I am using OpenStack Queens,
> with
> OpenVSwitch. The architecture I am using is documented here:
> OVS I am using on the Compute Node, is compiled with DPDK, and I have
> enabled the datapath to netdev (DPDK) on br-prv (provider network
> bridge),
> and br-tun (tunneling bridge). But these two bridges, br-tun and br-
> prv,
> are patched into another OpenStack bridge, called br-int. I wasn't
> actually sure about whether to tinker with this bridge, and wondered
> what datapath it was using. Then, I realized there is a parameter in
> the openvswitch_agent.ini file, which I will list here:
> # OVS datapath to use. 'system' is the default value and corresponds
> to the
> # kernel datapath. To enable the userspace datapath set this value to
> 'netdev'.
> # (string value)
> # Possible values:
> # system - <No description provided>
> # netdev - <No description provided>
> #datapath_type = system
> datapath_type = netdev
> So in tinkering with this, what I realized is that when you set this
> datapath_type to system or netdev, it will adjust the br-int bridge to
> that datapath type. So here is my question: how can I launch a
> non-DPDK VM if all of the bridges are using the netdev datapath type?

You can't. We intentionally do not support mixing the kernel datapath
and the DPDK datapath on the same host.

Incidentally, patch ports only function between bridges of the same
datapath type, so br-int, br-tun and br-prv should all be set to
netdev. Also, if you want DPDK to process your tunnel traffic, you need
to assign the tunnel local endpoint IP to br-prv, assuming that is
where the DPDK physical interface is.

If you do not do this, tunnel traffic (e.g. vxlan) will be processed by
the OVS main thread without any DPDK or kernel acceleration.
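Concretely, the bridge and tunnel-endpoint setup described above might
look like the sketch below. The bridge names match this thread, but the
address 172.24.4.10/24 is a placeholder for your actual tunnel local
endpoint IP:

```shell
# Patch ports only work between bridges of the same datapath type,
# so switch all three bridges to the userspace (netdev) datapath.
ovs-vsctl set bridge br-int datapath_type=netdev
ovs-vsctl set bridge br-tun datapath_type=netdev
ovs-vsctl set bridge br-prv datapath_type=netdev

# Assign the tunnel local endpoint IP to br-prv (the bridge holding the
# DPDK physical interface) so vxlan traffic is handled by the DPDK PMD
# threads rather than the OVS main thread.
ip addr add 172.24.4.10/24 dev br-prv   # placeholder address
ip link set br-prv up
```

The address should match the local_ip configured for the OVS agent in
openvswitch_agent.ini.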

> Here is another question. What if one of the flavors doesn't have the
> largepages property set on them?
They will not get network connectivity.

vhost-user requires mmapped shared memory with an open file descriptor
that is pre-mapped and contiguous.

In nova you can only get this in one of two ways:
1) use hugepages, or 2) use file-backed memory.

The second approach, while it should work, has never actually been
tested in nova with ovs-dpdk. It was added to libvirt to support
ovs-dpdk without hugepages, and was added to nova for security tools
that scan VM memory externally looking for active viruses and other
threats.
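As a sketch of those two options (the flavor name m1.dpdk is a
placeholder; hw:mem_page_size and [libvirt]/file_backed_memory are the
relevant knobs):

```shell
# Option 1: request hugepage-backed guest memory via a flavor extra spec.
# 'm1.dpdk' is a hypothetical flavor name.
openstack flavor set m1.dpdk --property hw:mem_page_size=large

# Option 2: file-backed memory, enabled host-wide on the compute node in
# nova.conf (the value is the total MiB of file-backed memory to expose):
#   [libvirt]
#   file_backed_memory = 1048576
```

Note that option 2 applies to the whole host, not per flavor, and as
said above it is untested with ovs-dpdk.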

>  I assumed OpenStack would revert to a system datapath and not use
> DPDK for those VM interfaces.
No, that would break the operation of all the patch ports, so it can't.

>  Well, I found out in testing, that is not the case. If you set all
> your bridges up for netdev, and you don't set the property on the
> Flavor of the VM (largepages), the VM will launch, but it simply won't
> work.

Yes. Without file-backed memory that is created as I said above, DPDK
will not be able to map the virtio rings from the guest into its
process space to tx/rx packets.
> Is there no way, on a specific Compute Host, to support both DPDK
> (netdev datapaths) and non-DPDK (system datapaths)? Either on a VM
> interface level (VM has one interface that is netdev DPDK and another
> that is system datapath non-DPDK)?

Correct, there is no way to support both on the same host with
OpenStack. Until relatively recently it was not supported by the OVS
community either: it could be configured, but it was not tested,
supported, or recommended by the OVS community, and it is not supported
with OpenStack.

> Or on a VM by VM basis (VM 1 has 1 or more netdev datapath interfaces
> and VM 2 has 1 or more system datapath interfaces)? Am I right here?
> Once you set up a Compute Host for DPDK, it's DPDK or nothing on that
> Compute Host? (edited)
Yes, you can mix in the same cloud, just not on the same host.
For example, if you are not using DVR, we generally recommend DPDK on
only the compute hosts, and kernel OVS on the controller/networking
nodes where the L3 agents are running.
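One way to sketch that split within a single cloud is with host
aggregates, so DPDK flavors only land on the DPDK computes. The names
below are hypothetical, and this assumes the
AggregateInstanceExtraSpecsFilter is enabled in the nova scheduler:

```shell
# Group the DPDK-enabled computes into an aggregate with a label.
openstack aggregate create dpdk-hosts
openstack aggregate set --property dpdk=true dpdk-hosts
openstack aggregate add host dpdk-hosts compute-dpdk-0   # placeholder host

# DPDK flavors request hugepages and are matched to that aggregate.
openstack flavor set m1.dpdk \
  --property hw:mem_page_size=large \
  --property aggregate_instance_extra_specs:dpdk=true
```

Non-DPDK flavors simply omit both properties and schedule onto the
kernel-OVS hosts.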