============================
Using ports vnic_type='vdpa'
============================

.. versionadded:: 23.0.0 (Wallaby)

   Introduced support for vDPA.

.. important::

   The functionality described below is only supported by the
   libvirt/KVM virt driver.

The kernel vDPA (virtio Data Path Acceleration) framework
provides a vendor-independent way to offload data-plane
processing to software or hardware virtio device backends.
While the kernel vDPA framework supports many types of vDPA devices,
at this time nova only supports ``virtio-net`` devices
using the ``vhost-vdpa`` front-end driver. Support for ``virtio-blk`` or
``virtio-gpu`` may be added in the future but is not currently planned
for any specific release.

vDPA device tracking
~~~~~~~~~~~~~~~~~~~~

When implementing support for vDPA-based neutron ports, one of the first
decisions nova had to make was how to model the availability of vDPA
devices and the capability to virtualize them. As the initial use case
for this technology was offloading networking to hardware-offloaded OVS
via neutron ports, the decision was made to extend the existing PCI
tracker, which is used for SR-IOV and PCI passthrough, to support vDPA
devices. This introduced a simplifying assumption that the parent device
of a vDPA device is an SR-IOV Virtual Function (VF). As a result,
software-only vDPA devices such as those created by the kernel
``vdpa-sim`` sample module are not supported.

To make vDPA devices available to be scheduled to guests, the operator
should include the device, using either the PCI address or the vendor ID
and product ID of the parent VF, in the PCI ``device_spec``.
See :nova-doc:`pci-passthrough <admin/pci-passthrough>` for details.
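
For example, a minimal ``device_spec`` entry in ``nova.conf`` might look
like the following sketch; the vendor ID, product ID and physical network
name are illustrative and should be replaced with the values that match
the parent VFs on your hosts.

.. code-block:: ini

   [pci]
   # Match all VFs with this (example) vendor/product ID pair and
   # associate them with the given physical network.
   device_spec = {"vendor_id": "15b3", "product_id": "101e", "physical_network": "physnet1"}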

Nova will not create the VFs or vDPA devices automatically. It is
expected that the operator will allocate them before starting the
nova-compute agent. While no specific mechanism is prescribed for this,
udev rules or systemd service files are generally the recommended
approach to ensure the devices are created consistently across reboots;
a sketch of the underlying commands is shown below.
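
The following is a minimal sketch of that allocation using sysfs and the
iproute2 ``vdpa`` tool; the interface name, VF count and PCI address are
hypothetical examples, and any driver-specific setup (such as placing the
NIC in switchdev mode for hardware-offloaded OVS) is omitted.

.. code-block:: bash

   # Create 4 VFs on the parent PF (interface name is an example)
   echo 4 > /sys/class/net/enp6s0f0/device/sriov_numvfs

   # List the vDPA management devices exposed by the VFs
   vdpa mgmtdev show

   # Create a vDPA device on top of one VF's management device
   # (PCI address is an example)
   vdpa dev add name vdpa0 mgmtdev pci/0000:06:00.2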

.. note::

   As vDPA is an offload only for the data plane and not the control
   plane, a vDPA control plane is required to properly support vDPA
   device passthrough. At the time of writing only hardware-offloaded OVS
   is supported when using vDPA with nova. Because of this, vDPA devices
   cannot be requested using the PCI alias. While nova could allow vDPA
   devices to be requested by the flavor using a PCI alias, we would not
   be able to correctly configure the device as there would be no
   suitable control plane. For this reason vDPA devices are currently
   only consumable via neutron ports.

Virt driver support
~~~~~~~~~~~~~~~~~~~

Supporting neutron ports with ``vnic_type=vdpa`` depends on the
capability of the virt driver. At this time only the ``libvirt`` virt
driver with KVM is fully supported; QEMU without KVM may also work but is
untested.

vDPA support depends on kernel 5.7+, Libvirt 6.9.0+ and QEMU 5.1+.
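
To confirm that a host meets these minimum versions you can query each
component directly; the binary names may vary by distribution.

.. code-block:: bash

   uname -r                      # kernel, needs 5.7+
   libvirtd --version            # needs 6.9.0+
   qemu-system-x86_64 --version  # needs 5.1+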

vDPA lifecycle operations
~~~~~~~~~~~~~~~~~~~~~~~~~

At this time vDPA ports can only be added to a VM when it is first
created. To do this, the normal SR-IOV workflow is used whereby the port
is first created in neutron and passed into nova as part of the server
create request.

.. code-block:: bash

   openstack port create --network <my network> --vnic-type vdpa vdpa-port
   openstack server create --flavor <my-flavor> --image <my-image> --port <vdpa-port uuid> vdpa-vm
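
Once the server is active you can confirm how the port was bound; the
exact ``binding_vif_type`` reported depends on the neutron backend in
use.

.. code-block:: bash

   openstack port show vdpa-port -c binding_vnic_type -c binding_vif_type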

When vDPA support was first introduced no move operations were supported.
As this documentation was added in the change that enabled some move
operations, the following should be interpreted as both a retrospective
and a forward-looking viewpoint, and treated as a living document which
will be updated as functionality evolves.

- 23.0.0: initial support is added for creating a VM with vDPA ports;
  move operations are blocked in the API but implemented in code.
- 26.0.0: support for all move operations except live migration is
  tested and the API blocks are removed.
- 25.x.y: (planned) API block removal backported to stable/yoga.
- 24.x.y: (planned) API block removal backported to stable/xena.
- 23.x.y: (planned) API block removal backported to stable/wallaby.
- 26.0.0: (in progress) interface attach/detach, suspend/resume and
  hot plug live migration are implemented to fully support all lifecycle
  operations on instances with vDPA ports.

.. note::

   The ``(planned)`` and ``(in progress)`` qualifiers will be removed
   when those items are completed. If your current version of the
   document contains those qualifiers then those lifecycle operations are
   unsupported.