From mboxrd@z Thu Jan 1 00:00:00 1970
From: Laine Stump
Subject: Re: [libvirt] [Qemu-devel] [PATCH v7 0/4] Add Mediated device support
Date: Fri, 2 Sep 2016 19:57:28 -0400
Message-ID:
References: <1472097235-6332-1-git-send-email-kwankhede@nvidia.com>
 <20160830101638.49df467d@t450s.home>
 <78fedd65-6d62-e849-ff3b-d5105b2da816@redhat.com>
 <20160901105948.62f750aa@t450s.home>
 <98bbdbbf-c388-9120-3306-64f0cfb820a7@nvidia.com>
 <8682faeb-0331-f014-c13e-03c20f3f2bdf@redhat.com>
 <22097a95-21c6-3aec-f0ff-717181a705f8@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Cc: Paolo Bonzini, John Ferlan, Kirti Wankhede, "Song, Jike",
 "cjia@nvidia.com", "kvm@vger.kernel.org", "Tian, Kevin",
 "qemu-devel@nongnu.org", "kraxel@redhat.com", "bjsdjshi@linux.vnet.ibm.com"
To: Michal Privoznik, Alex Williamson, "libvir-list@redhat.com"
Return-path:
Received: from mx1.redhat.com ([209.132.183.28]:53028 "EHLO mx1.redhat.com"
 rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751592AbcIBX5b
 (ORCPT); Fri, 2 Sep 2016 19:57:31 -0400
In-Reply-To: <22097a95-21c6-3aec-f0ff-717181a705f8@redhat.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

On 09/02/2016 05:44 PM, Paolo Bonzini wrote:
>
>
> On 02/09/2016 22:19, John Ferlan wrote:
>> We don't have such a pool for GPU's (yet) - although I suppose they
>> could just become a class of storage pools.
>>
>> The issue being nodedev device objects are not saved between reboots.
>> They are generated on the fly. Hence the "create-nodedev" API - notice
>> there's no "define-nodedev" API, although I suppose one could be
>> created. It's just more work to get this all to work properly.
>
> It can all be made transient to begin with. The VM can be defined but
> won't start unless the mdev(s) exist with the right UUIDs.
>
>>> After creating the vGPU, if required by the host driver, all the other
>>> type ids would disappear from "virsh nodedev-dumpxml pci_0000_86_00_0" too.
>>
>> Not wanting to make assumptions, but this reads as if I create one type
>> 11 vGPU, then I can create no others on the host. Maybe I'm reading it
>> wrong - it's been a long week.
>
> Correct, at least for NVIDIA.
>
>> PCI devices have the "managed='yes|no'" attribute as well. That's what
>> determines whether the device is to be detached from the host or not.
>> That's been something very painful to manage for vfio and, well, libvirt!
>
> mdevs do not exist on the host (they do not have a driver on the host
> because they are not PCI devices) so they don't need any management. At
> least I hope that's good news. :)

What's your definition of "management"? They don't need the same type of
management as a traditional hostdev, but they certainly don't just
appear by magic! :-)

For standard PCI devices, the managed attribute says whether or not the
device needs to be detached from the host driver and attached to
vfio-pci. For other kinds of hostdev devices, we could decide that it
meant something different. In this case, perhaps managed='yes' could
mean that the vGPU will be created as needed, and destroyed when the
guest is finished with it, and managed='no' could mean that we expect a
vGPU to already exist, and just need starting.

Or not. Maybe that's a pointless distinction in this case. Just pointing
out the option...
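To make that distinction concrete, a hostdev element for an mdev might look
something like the sketch below. This is purely illustrative - no such schema
exists in libvirt today, and the element/attribute names are my invention -
but it shows where a managed attribute would sit: managed='yes' would have
libvirt create the instance at guest startup and destroy it at shutdown,
while managed='no' would expect the UUID to already exist on the host.

```xml
<!-- Hypothetical XML, not an existing libvirt schema; the UUID is made up. -->
<hostdev mode='subsystem' type='mdev' managed='yes'>
  <source>
    <!-- UUID identifying the mediated device instance -->
    <address uuid='a297db4a-f4c2-11e6-90f6-d3b88d6c9525'/>
  </source>
</hostdev>
```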
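For reference, "created as needed, and destroyed when the guest is finished"
would map onto the sysfs interface this patch series adds. A rough sketch of
the two operations libvirt would perform under managed='yes' semantics - the
parent address, type id, and UUID below are all made up (the "type 11"
mentioned above would surface as a vendor-specific name such as "nvidia-11"),
and the script only prints the commands rather than running them, since they
only work on a host with an mdev-capable driver loaded:

```shell
#!/bin/sh
# All values below are illustrative, not taken from a real host.
PARENT=0000:86:00.0                        # mdev-capable parent PCI device
TYPE=nvidia-11                             # vendor-specific mdev type id
UUID=a297db4a-f4c2-11e6-90f6-d3b88d6c9525  # instance UUID chosen by the caller

CREATE=/sys/class/mdev_bus/$PARENT/mdev_supported_types/$TYPE/create
REMOVE=/sys/bus/mdev/devices/$UUID/remove

# At guest startup: create the mdev instance by writing its UUID.
echo "guest start:    echo $UUID > $CREATE"
# At guest shutdown: destroy the instance again.
echo "guest shutdown: echo 1 > $REMOVE"
```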