From: Jike Song
Subject: Re: [RFC PATCH v4 0/3] Add Mediated device support [was: Add vGPU support]
Date: Thu, 2 Jun 2016 10:11:36 +0800
Message-ID: <87c792ed-8bd6-0e3d-eb47-cdf09258e287@intel.com>
References: <1464119897-10844-1-git-send-email-kwankhede@nvidia.com>
 <20160525074356.52121ab8@ul30vt.home>
 <20160527085443.27f937eb@t450s.home>
 <20160528085630.0fb79cc7@ul30vt.home>
 <7195005e-6461-25fb-9ed9-ec5906b93bec@intel.com>
 <20160531082926.653ada83@ul30vt.home>
In-Reply-To: <20160531082926.653ada83@ul30vt.home>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
To: Alex Williamson
Cc: "Tian, Kevin", Kirti Wankhede, "pbonzini@redhat.com",
 "kraxel@redhat.com", "cjia@nvidia.com", "qemu-devel@nongnu.org",
 "kvm@vger.kernel.org", "Ruan, Shuai", "Lv, Zhiyuan",
 "bjsdjshi@linux.vnet.ibm.com"

On 05/31/2016 10:29 PM, Alex Williamson wrote:
> On Tue, 31 May 2016 10:29:10 +0800
> Jike Song wrote:
>
>> On 05/28/2016 10:56 PM, Alex Williamson wrote:
>>> On Fri, 27 May 2016 22:43:54 +0000
>>> "Tian, Kevin" wrote:
>>>
>>>>
>>>> My impression was that you don't like hypervisor specific things in VFIO,
>>>> which makes it a bit tricky to accomplish those tasks in the kernel. If we
>>>> can add Xen specific logic directly in VFIO (like the vfio-iommu-xen you
>>>> mentioned), the whole thing would be easier.
>>>
>>> If vfio is hosted in dom0, then Xen is the platform and we need to
>>> interact with the hypervisor to manage the iommu. That said, there are
>>> aspects of vfio that do not seem to map well to a hypervisor managed
>>> iommu or a Xen-like hypervisor. For instance, how does dom0 manage
>>> iommu groups, and what's the distinction between using vfio to manage a
>>> userspace driver in dom0 versus managing a device for another domain?
>>> In the case of kvm, vfio has no dependency on kvm; there is some minor
>>> interaction, but we're not running on kvm and it's not appropriate to
>>> use vfio as a gateway to interact with a hypervisor that may or may not
>>> exist. Thanks,
>>
>> Hi Alex,
>>
>> Beyond the iommu, are there other aspects where vfio needs to interact
>> with Xen? E.g. to pass through MMIO, one has to call hypercalls to
>> establish EPT mappings.
>
> If it's part of running on a Xen platform and not trying to interact
> with a VM in ways that are out of scope for vfio, I might be open to
> it; I'd need to see a proposal. This also goes back to my question of
> how vfio knows whether it's configuring a device for a guest driver
> or a guest VM; with kvm these are one and the same. Thanks,

Yes, this brings us back to Kevin's suggestion:

> I'm not sure whether VFIO can support this usage today. It is somewhat
> similar to channel I/O passthrough on s390, where we also rely on Qemu to
> mediate ccw commands to ensure isolation. Maybe just some slight
> extension is required (e.g. not assuming some API must be invoked). Of
> course the Qemu side vfio code also needs some change. If this can work,
> at least we can first put it as the enumeration interface for mediated
> devices in Xen. In the future it may be extended to cover normal Xen
> PCI assignment as well, instead of using sysfs to read PCI resources
> today.
>
> If the above works, then we have a sound plan to enable mediated devices
> based on VFIO first for KVM, and then extend it to Xen with reasonable
> effort.

We'll work on the proposal, thanks!
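To make the MMIO point above a bit more concrete, here is a rough,
untested sketch (purely illustrative, not part of any proposal yet) of
the dom0-side call that establishes such a mapping today, via libxc's
xc_domain_memory_mapping(); the domid and the BAR addresses/size below
are made-up values:

    /*
     * Sketch: dom0 asks Xen to map a BAR's machine frames into a
     * guest's p2m (EPT on VT-x).  This is the hypervisor-specific
     * step that a vfio-on-Xen backend would have to perform instead
     * of relying on KVM-style memory slots.
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <xenctrl.h>

    #define PAGE_SHIFT 12

    int main(void)
    {
        xc_interface *xch = xc_interface_open(NULL, NULL, 0);
        if (!xch)
            return 1;

        uint32_t domid    = 1;                /* hypothetical guest   */
        unsigned long gfn = 0xf0000000UL >> PAGE_SHIFT;  /* guest BAR */
        unsigned long mfn = 0xfb000000UL >> PAGE_SHIFT;  /* host BAR  */
        unsigned long nr  = 0x1000000UL  >> PAGE_SHIFT;  /* 16MB BAR  */

        /* Issues XEN_DOMCTL_memory_mapping under the hood. */
        int rc = xc_domain_memory_mapping(xch, domid, gfn, mfn, nr,
                                          DPCI_ADD_MAPPING);
        if (rc)
            fprintf(stderr, "memory_mapping failed: %d\n", rc);

        xc_interface_close(xch);
        return rc ? 1 : 0;
    }

Whatever the final vfio interface ends up looking like, something
equivalent to this call has to happen on a Xen host, so this is the
hypervisor-specific piece we need to find a proper home for.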
--
Thanks,
Jike