From mboxrd@z Thu Jan  1 00:00:00 1970
From: Kirti Wankhede
Subject: Re: [Qemu-devel] [PATCH v7 0/4] Add Mediated device support
Date: Sat, 3 Sep 2016 00:03:55 +0530
Message-ID:
References: <1472097235-6332-1-git-send-email-kwankhede@nvidia.com>
 <20160830101638.49df467d@t450s.home>
 <78fedd65-6d62-e849-ff3b-d5105b2da816@redhat.com>
 <20160901105948.62f750aa@t450s.home>
 <98bbdbbf-c388-9120-3306-64f0cfb820a7@nvidia.com>
 <8682faeb-0331-f014-c13e-03c20f3f2bdf@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Cc: "Song, Jike", "cjia@nvidia.com", "kvm@vger.kernel.org",
 "libvir-list@redhat.com", "Tian, Kevin", "qemu-devel@nongnu.org",
 "kraxel@redhat.com", Laine Stump, "bjsdjshi@linux.vnet.ibm.com"
To: Paolo Bonzini, Michal Privoznik, Alex Williamson
Return-path:
Received: from hqemgate16.nvidia.com ([216.228.121.65]:7091 "EHLO
 hqemgate16.nvidia.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with
 ESMTP id S1751680AbcIBSeP (ORCPT); Fri, 2 Sep 2016 14:34:15 -0400
In-Reply-To:
Sender: kvm-owner@vger.kernel.org
List-ID:

On 9/2/2016 10:55 PM, Paolo Bonzini wrote:
>
>
> On 02/09/2016 19:15, Kirti Wankhede wrote:
>> On 9/2/2016 3:35 PM, Paolo Bonzini wrote:
>>>
>>>    <device>
>>>      <name>my-vgpu</name>
>>>      <parent>pci_0000_86_00_0</parent>
>>>      <capability type='mdev'>
>>>        <type id='...'/>
>>>        <uuid>0695d332-7831-493f-9e71-1c85c8911a08</uuid>
>>>      </capability>
>>>    </device>
>>>
>>> After creating the vGPU, if required by the host driver, all the other
>>> type ids would disappear from "virsh nodedev-dumpxml pci_0000_86_00_0" too.
>>
>> Thanks Paolo for the details.
>> 'nodedev-create' parses the xml file and accordingly writes to the 'create'
>> file in sysfs to create the mdev device. Right?
>> At this moment, does libvirt know which VM this device would be
>> associated with?
>
> No, the VM will associate to the nodedev through the UUID.  The nodedev
> is created separately from the VM.
>
>>> When dumping the mdev with nodedev-dumpxml, it could show more complete
>>> info, again taken from sysfs:
>>>
>>>    <device>
>>>      <name>my-vgpu</name>
>>>      <parent>pci_0000_86_00_0</parent>
>>>      <capability type='mdev'>
>>>        <uuid>0695d332-7831-493f-9e71-1c85c8911a08</uuid>
>>>        <capability type='pci'>
>>>          ...
>>>          <vendor>NVIDIA</vendor>
>>>          ...
>>>        </capability>
>>>      </capability>
>>>    </device>
>>>
>>> Notice how the parent has mdev inside pci; the vGPU, if it has to have
>>> pci at all, would have it inside mdev.  This represents the difference
>>> between the mdev provider and the mdev device.
>>
>> The parent of an mdev device might not always be a PCI device. I think we
>> shouldn't consider it a PCI capability.
>
> The <capability type='pci'> in the vGPU means that it _will_ be exposed
> as a PCI device by VFIO.
>
> The <capability type='pci'> in the physical GPU means that the GPU is a
> PCI device.
>

Ok. Got that.

>>> Random proposal for the domain XML too:
>>>
>>>   <hostdev mode='subsystem' type='mdev'>
>>>     <source>
>>>       <uuid>0695d332-7831-493f-9e71-1c85c8911a08</uuid>
>>>     </source>
>>>     <address type='pci' .../>
>>>   </hostdev>
>>>
>>
>> When a user wants to assign two mdev devices to one VM, does the user
>> have to add two such entries, or group the two devices in one entry?
>
> Two entries, one per UUID, each with its own PCI address in the guest.
>
>> On the other mail thread with the same subject we are thinking of
>> creating a group of mdev devices to assign multiple mdev devices to one VM.
>
> What is the advantage in managing mdev groups?  (Sorry, didn't follow the
> other thread.)
>

When an mdev device is created, resources from the physical device are
assigned to this device, but the resources are committed only when the
device goes 'online' ('start' in the v6 patch).
In the case of multiple vGPUs in a VM for the NVIDIA vGPU solution,
resources for all vGPU devices in a VM are committed in one place, so we
need to know which vGPUs are assigned to a VM before QEMU starts.

Grouping would help here, as Alex suggested in that mail. Pulling only
that part of the discussion here:

> It seems then that the grouping needs to affect the iommu group so that
> you know that there's only a single owner for all the mdev devices
> within the group.  IIRC, the bus drivers don't have any visibility
> to opening and releasing of the group itself to trigger the
> online/offline, but they can track opening of the device file
> descriptors within the group.  Within the VFIO API the user cannot
> access the device without the device file descriptor, so a "first
> device opened" and "last device closed" trigger would provide the
> trigger points you need.  Some sort of new sysfs interface would need
> to be invented to allow this sort of manipulation.
>
> Also we should probably keep sight of whether we feel this is
> sufficiently necessary for the complexity.  If we can get by with only
> doing this grouping at creation time then we could define the "create"
> interface in various ways.  For example:
>
>   echo $UUID0 > create
>
> would create a single mdev named $UUID0 in its own group.
>
>   echo {$UUID0,$UUID1} > create
>
> could create mdev devices $UUID0 and $UUID1 grouped together.

I think this would create mdev devices of the same type on the same parent
device. We need to consider the case where multiple mdev devices of
different types and with different parents are grouped together.

> We could even do:
>
>   echo $UUID1:$GROUPA > create
>
> where $GROUPA is the group ID of a previously created mdev device into
> which $UUID1 is to be created and added to the same group.

I was thinking about:

  echo $UUID0 > create

which would create the mdev device, and

  echo $UUID0 > /sys/class/mdev/create_group

which would add the created device to a group.

For the multiple-device case:

  echo $UUID0 > create
  echo $UUID1 > create

would create the mdev devices, which could be of different types and have
different parents, and

  echo $UUID0,$UUID1 > /sys/class/mdev/create_group

would add the devices to one group.

The mdev core module would create a new group with a unique number. On
mdev device 'destroy', that mdev device would be removed from its group;
when there are no devices left in the group, the group would be deleted.
With this, the "first device opened" and "last device closed" triggers
can be used to commit resources.
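To put the proposed sequence in one place, a rough sketch (purely
illustrative: the per-parent 'create'/'destroy' files and the
/sys/class/mdev/create_group node are only what is being proposed above,
not an interface that exists today, and the exact sysfs paths are
placeholders):

  # create mdev devices, possibly of different types on different parents
  echo $UUID0 > /sys/devices/<parent0>/.../create
  echo $UUID1 > /sys/devices/<parent1>/.../create

  # group them; the mdev core module would allocate a unique group number
  echo $UUID0,$UUID1 > /sys/class/mdev/create_group

  # the vendor driver would commit resources when the first device of the
  # group is opened and release them when the last one is closed

  # destroying a device removes it from its group; the group itself goes
  # away when its last member is destroyed
  echo $UUID0 > /sys/devices/<parent0>/.../destroy
  echo $UUID1 > /sys/devices/<parent1>/.../destroy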
Then libvirt uses the mdev device path to pass as an argument to QEMU,
the same as it does for VFIO. Libvirt doesn't have to care about the
group number (see the P.S. below for a concrete example of the QEMU side).

Thanks,
Kirti
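P.S. To be concrete about what "mdev device path" would look like on the
QEMU command line: QEMU's vfio-pci device accepts a sysfs path through its
sysfsdev option, so, assuming the mdev device is registered on an mdev bus
and therefore visible under /sys/bus/mdev/devices/<uuid>, the invocation
could be something like:

  -device vfio-pci,sysfsdev=/sys/bus/mdev/devices/0695d332-7831-493f-9e71-1c85c8911a08

(The exact sysfs location is illustrative and depends on how the final
patches register the device.)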