
[Neutron] NUMA aware VxLAN


On Tue, 2019-07-09 at 00:17 +0000, Fox, Kevin M wrote:
> I'm curious. A lot of network cards support offloaded vxlan traffic these days so the processor isn't doing much work.
> Is this issue really a problem?
The issue is not really the CPU overhead of tunnel decapsulation; it is the cross-NUMA latency incurred
when crossing the QPI bus, so even with hardware acceleration on the NIC there is a performance degradation if you go
across NUMA nodes. The optimal solution is to have multiple NICs, one per NUMA node, bond them, and then affinitize the
VM's MAC to the NUMA-local bond member so that no traffic has to traverse NUMA nodes, but that is non-trivial to do and
is not supported by OpenStack natively. You would have to have an agent or cron job that actually did the tuning after
a VM is spawned, but it could be an interesting experiment if someone wanted to code it up.
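
As a rough illustration of what such an agent could start from: the kernel already exposes each PCI NIC's NUMA node via sysfs (`/sys/class/net/<ifname>/device/numa_node`), so discovering the NUMA-local bond member is straightforward. This is a hypothetical sketch, not existing OpenStack code; the function names and the `candidates` list are made up for illustration:

```python
# Hypothetical sketch for a NUMA-affinity agent: discover which NUMA node a
# NIC sits on via sysfs, then pick the bond member local to the VM's node.
from pathlib import Path
from typing import Optional


def nic_numa_node(ifname: str, sysfs_root: str = "/sys/class/net") -> int:
    """Return the NUMA node of a physical NIC, or -1 if unknown.

    The kernel exposes the PCI device's node in
    /sys/class/net/<ifname>/device/numa_node (it reads -1 on
    single-node systems or for virtual interfaces).
    """
    path = Path(sysfs_root) / ifname / "device" / "numa_node"
    try:
        return int(path.read_text().strip())
    except (FileNotFoundError, ValueError):
        return -1


def numa_local_nic(vm_numa_node: int, candidates: list) -> Optional[str]:
    """Pick the first candidate NIC on the same NUMA node as the VM."""
    for nic in candidates:
        if nic_numa_node(nic) == vm_numa_node:
            return nic
    return None
```

The remaining (harder) part, which this sketch does not attempt, would be steering the VM's MAC toward that bond member after the VM is spawned.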


> 
> Thanks,
> Kevin
> ________________________________
> From: Guo, Ruijing [ruijing.guo at intel.com]
> Sent: Sunday, July 07, 2019 5:47 PM
> To: openstack-dev at lists.openstack.org; openstack at lists.openstack.org
> Subject: [Neutron] NUMA aware VxLAN
> 
> Hi,
> 
> Existing Neutron ML2 supports one VxLAN endpoint for the tenant network. In the NUMA case, VM 0 can be created on
> node 0 and VM 1 on node 1, while the VxLAN endpoint is on node 0.
> 
> VM 1's traffic needs to cross nodes, which causes some performance degradation. Does someone have this performance
> issue? Does the Neutron community have a plan to enhance it?
Nova has a spec called NUMA-aware vSwitches:

https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/numa-aware-vswitches.html

which allows you to declare the NUMA affinity of tunnels and physnets on a host.

This will not allow you to have multiple tunnel endpoint IPs, but it will allow you to force instances
with a NUMA topology to be colocated on the same NUMA node as the network backend.
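
For reference, the feature is driven purely by nova.conf on each compute host. A minimal sketch might look like the following (the physnet name and node numbers are illustrative values for one host):

```ini
# nova.conf on the compute host (values are illustrative)
[neutron]
# physnets whose NUMA affinity is declared below
physnets = physnet0

[neutron_physnet_physnet0]
# NICs backing physnet0 are attached to NUMA node 0
numa_nodes = 0

[neutron_tunnel]
# the tunnel endpoint (VTEP) interface lives on NUMA node 1
numa_nodes = 1
```

With this in place, the scheduler tries to place instances that have a NUMA topology on the node declared for their network backend.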

> Thanks,
> -Ruijing