
Commit 0419393

enable blocked VDPA move operations

This change adds functional tests for operations on servers with vDPA
devices that are expected to work but are currently blocked due to a
lack of testing or QEMU bugs. Cold migrate, resize, evacuate, and
shelve are enabled and tested by this patch.

Conflicts:
    nova/tests/functional/libvirt/test_pci_sriov_servers.py

Closes-Bug: #1970467
Change-Id: I6e220cf3231670d156632e075fcf7701df744773
(cherry picked from commit 95f96ed)

1 parent f98858a

5 files changed, +385 -26 lines changed

doc/source/admin/index.rst (+1)

@@ -198,6 +198,7 @@ instance for these kind of workloads.
 virtual-gpu
 file-backed-memory
 ports-with-resource-requests
+vdpa
 virtual-persistent-memory
 emulated-tpm
 uefi

doc/source/admin/vdpa.rst (new file, +92)

============================
Using ports vnic_type='vdpa'
============================

.. versionadded:: 23.0.0 (Wallaby)

   Introduced support for vDPA.

.. important::

   The functionality described below is only supported by the
   libvirt/KVM virt driver.

The kernel vDPA (virtio Data Path Acceleration) framework provides a
vendor-independent way to offload data-plane processing to software or
hardware virtio device backends. While the kernel vDPA framework supports
many types of vDPA devices, at this time nova only supports ``virtio-net``
devices using the ``vhost-vdpa`` front-end driver. Support for
``virtio-blk`` or ``virtio-gpu`` may be added in the future but is not
currently planned for any specific release.

vDPA device tracking
~~~~~~~~~~~~~~~~~~~~

When implementing support for vDPA-based neutron ports, one of the first
decisions nova had to make was how to model the availability of vDPA
devices and the capability to virtualize them. As the initial use case for
this technology was to offload networking to hardware offloaded OVS via
neutron ports, the decision was made to extend the existing PCI tracker
that is used for SR-IOV and PCI passthrough to support vDPA devices. A
simplifying assumption was made that the parent device of a vDPA device is
an SR-IOV Virtual Function (VF); as a result, software-only vDPA devices
such as those created by the kernel ``vdpa-sim`` sample module are not
supported.

To make a vDPA device available to be scheduled to guests, the operator
should include the device in the PCI ``device_spec``, using either the PCI
address or the vendor ID and product ID of the parent VF.
See :nova-doc:`pci-passthrough <admin/pci-passthrough>` for details.
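
For illustration, a minimal ``nova.conf`` sketch; the PCI address and the
``physical_network`` tag are placeholders for your own environment, not
values taken from this change:

.. code-block:: ini

   [pci]
   # Expose the parent VF at this (hypothetical) address so the PCI
   # tracker can schedule its vDPA child device to guests.
   device_spec = { "address": "0000:65:00.2", "physical_network": "physnet1" }
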
Nova will not create the VFs or vDPA devices automatically. It is expected
that the operator will allocate them before starting the nova-compute
agent. While no specific mechanism is prescribed to do this, udev rules or
systemd service files are generally the recommended approach to ensure the
devices are created consistently across reboots.
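
As a sketch of what that provisioning might look like using the iproute2
``vdpa`` tool; the interface name and PCI address below are hypothetical:

.. code-block:: bash

   # Create 4 VFs on a (hypothetical) vDPA-capable NIC.
   echo 4 > /sys/class/net/enp101s0f0/device/sriov_numvfs

   # Create a vDPA device on top of one of the resulting VFs.
   vdpa dev add name vdpa0 mgmtdev pci/0000:65:00.2
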
.. note::

   As vDPA is an offload only for the data plane and not the control
   plane, a vDPA control plane is required to properly support vDPA device
   passthrough. At the time of writing, only hardware offloaded OVS is
   supported when using vDPA with nova. Because of this, vDPA devices
   cannot be requested using the PCI alias. While nova could allow vDPA
   devices to be requested by the flavor using a PCI alias, we would not
   be able to correctly configure the device, as there would be no
   suitable control plane. For this reason, vDPA devices are currently
   only consumable via neutron ports.

Virt driver support
~~~~~~~~~~~~~~~~~~~

Supporting neutron ports with ``vnic_type=vdpa`` depends on the capability
of the virt driver. At this time only the ``libvirt`` virt driver with KVM
is fully supported. QEMU may also work but is untested.

vDPA support requires kernel 5.7+, libvirt 6.9.0+, and QEMU 5.1+.
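
A quick way to verify those versions on a compute host; binary names may
differ by distribution:

.. code-block:: bash

   uname -r                       # kernel, needs 5.7+
   libvirtd --version             # libvirt, needs 6.9.0+
   qemu-system-x86_64 --version   # QEMU, needs 5.1+
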
vDPA lifecycle operations
~~~~~~~~~~~~~~~~~~~~~~~~~

At this time vDPA ports can only be added to a VM when it is first
created. To do this, the normal SR-IOV workflow is used, whereby the port
is first created in neutron and passed into nova as part of the server
create request.

.. code-block:: bash

   openstack port create --network <my network> --vnic-type vdpa vdpa-port
   openstack server create --flavor <my-flavor> --image <my-image> --port <vdpa-port uuid> vdpa-vm

When vDPA support was first introduced, no move operations were supported.
As this documentation was added in the change that enabled some move
operations, the following should be interpreted as both a retrospective
and forward-looking viewpoint, and treated as a living document which will
be updated as functionality evolves.

* 23.0.0: initial support is added for creating a VM with vDPA ports; move
  operations are blocked in the API but implemented in code.
* 26.0.0: support for all move operations except live migration is tested
  and the API blocks are removed.
* 25.x.y: (planned) API block removal backported to stable/Yoga.
* 24.x.y: (planned) API block removal backported to stable/Xena.
* 23.x.y: (planned) API block removal backported to stable/Wallaby.
* 26.0.0: (in progress) interface attach/detach, suspend/resume and
  hot-plug live migration are implemented to fully support all lifecycle
  operations on instances with vDPA ports.

.. note::

   The ``(planned)`` and ``(in progress)`` qualifiers will be removed when
   those items are completed. If your current version of the document
   contains those qualifiers, then those lifecycle operations are
   unsupported.
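
With the API blocks removed, the previously blocked move operations can be
exercised with the standard commands; the server name ``vdpa-vm`` follows
the creation example above, and the flavor is a placeholder:

.. code-block:: bash

   # Cold migrate, then confirm the resulting migration.
   openstack server migrate vdpa-vm
   openstack server resize confirm vdpa-vm

   # Resize to a new flavor, then confirm.
   openstack server resize --flavor <new-flavor> vdpa-vm
   openstack server resize confirm vdpa-vm

   # Shelve and unshelve.
   openstack server shelve vdpa-vm
   openstack server unshelve vdpa-vm
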

nova/compute/api.py (-8)

@@ -4096,9 +4096,6 @@ def _validate_host_for_cold_migrate(
     # finally split resize and cold migration into separate code paths
     @block_extended_resource_request
     @block_port_accelerators()
-    # FIXME(sean-k-mooney): Cold migrate and resize to different hosts
-    # probably works but they have not been tested so block them for now
-    @reject_vdpa_instances(instance_actions.RESIZE)
     @block_accelerators()
     @check_instance_lock
     @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.STOPPED])
@@ -4324,10 +4321,7 @@ def _allow_resize_to_same_host(self, cold_migrate, instance):
         allow_same_host = CONF.allow_resize_to_same_host
         return allow_same_host

-    # FIXME(sean-k-mooney): Shelve works but unshelve does not due to bug
-    # #1851545, so block it for now
     @block_port_accelerators()
-    @reject_vdpa_instances(instance_actions.SHELVE)
     @reject_vtpm_instances(instance_actions.SHELVE)
     @block_accelerators(until_service=54)
     @check_instance_lock
@@ -5469,8 +5463,6 @@ def live_migrate_abort(self, context, instance, migration_id,

     @block_extended_resource_request
     @block_port_accelerators()
-    # FIXME(sean-k-mooney): rebuild works but we have not tested evacuate yet
-    @reject_vdpa_instances(instance_actions.EVACUATE)
     @reject_vtpm_instances(instance_actions.EVACUATE)
     @block_accelerators(until_service=SUPPORT_ACCELERATOR_SERVICE_FOR_REBUILD)
     @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.STOPPED,
