From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <1447963356.4697.184.camel@redhat.com>
Subject: Re: [Intel-gfx] [Announcement] 2015-Q3 release of XenGT - a Mediated Graphics Passthrough Solution from Intel
From: Alex Williamson
To: "Tian, Kevin"
Cc: "Song, Jike", "xen-devel@lists.xen.org", "igvt-g@ml01.01.org", "intel-gfx@lists.freedesktop.org", "linux-kernel@vger.kernel.org", "White, Michael L", "Dong, Eddie", "Li, Susie", "Cowperthwaite, David J", "Reddy, Raghuveer", "Zhu, Libo", "Zhou, Chao", "Wang, Hongbo", "Lv, Zhiyuan", qemu-devel, Paolo Bonzini, Gerd Hoffmann
Date: Thu, 19 Nov 2015 13:02:36 -0700
References: <53D215D3.50608@intel.com> <547FCAAD.2060406@intel.com> <54AF967B.3060503@intel.com> <5527CEC4.9080700@intel.com> <559B3E38.1080707@intel.com> <562F4311.9@intel.com> <1447870341.4697.92.camel@redhat.com>

Hi Kevin,

On Thu, 2015-11-19 at 04:06 +0000, Tian, Kevin wrote:
> > From: Alex Williamson [mailto:alex.williamson@redhat.com]
> > Sent: Thursday, November 19, 2015 2:12 AM
> >
> > [cc +qemu-devel, +paolo, +gerd]
> >
> > On Tue, 2015-10-27 at 17:25 +0800, Jike Song wrote:
> > > Hi all,
> > >
> > > We are pleased to announce another update of Intel GVT-g for Xen.
> > >
> > > Intel GVT-g is a full GPU virtualization solution with mediated pass-through, starting from 4th generation Intel Core(TM) processors with Intel Graphics processors. A virtual GPU instance is maintained for each VM, with part of the performance-critical resources directly assigned. The capability of running a native graphics driver inside a VM, without hypervisor intervention in performance-critical paths, achieves a good balance among performance, features, and sharing capability. Xen is currently supported on Intel Processor Graphics (a.k.a. XenGT); the core logic can be easily ported to other hypervisors.
> > >
> > > Repositories
> > >
> > >     Kernel: https://github.com/01org/igvtg-kernel (2015q3-3.18.0 branch)
> > >     Xen: https://github.com/01org/igvtg-xen (2015q3-4.5 branch)
> > >     Qemu: https://github.com/01org/igvtg-qemu (xengt_public2015q3 branch)
> > >
> > > This update consists of:
> > >
> > >     - XenGT is now merged with KVMGT in unified repositories (kernel and qemu), but currently with different branches for qemu. XenGT and KVMGT share the same iGVT-g core logic.
> >
> > Hi!
> >
> > At Red Hat we've been thinking about how to support vGPUs from multiple vendors in a common way within QEMU. We want to enable code sharing between vendors and give new vendors an easy path to add their own support. We also have the complication that not all vGPU vendors are as open source friendly as Intel, so being able to abstract the device mediation and access outside of QEMU is a big advantage.
> >
> > The proposal I'd like to make is that a vGPU, whether it is from Intel or another vendor, is predominantly a PCI(e) device. We already have an interface in QEMU for exposing arbitrary PCI devices, vfio-pci. Currently vfio-pci uses the VFIO API to interact with "physical" devices and system IOMMUs. I highlight /physical/ there because some of these physical devices are SR-IOV VFs, which is somewhat of a fuzzy concept, somewhere between fixed hardware and a virtual device implemented in software. That software just happens to be running on the physical endpoint.
>
> Agree.
>
> One clarification for the rest of the discussion: we're talking about the GVT-g vGPU here, which is a pure software GPU virtualization technique. GVT-d (which appears in places in the text) refers to passing through the whole GPU or a specific VF. GVT-d already falls into the existing VFIO APIs nicely (though there is some ongoing effort to remove Intel-specific platform stickiness from the gfx driver). :-)
>
> > vGPUs are similar, with the virtual device created at a different point, host software. They also rely on different IOMMU constructs, making use of the MMU capabilities of the GPU (GTTs and such), but really having similar requirements.
>
> One important difference between the system IOMMU and the GPU MMU here: the system IOMMU is very much about translation from a DMA target (IOVA on native, or GPA in the virtualization case) to HPA. The GPU's internal MMU, however, translates from a Graphics Memory Address (GMA) to a DMA target (HPA if the system IOMMU is disabled, or IOVA/GPA if it is enabled). GMA is an internal address space within the GPU, not exposed to QEMU and fully managed by the GVT-g device model. Since it's not a standard PCI-defined resource, we don't need to abstract this capability in the VFIO interface.
>
> > The proposal is therefore that GPU vendors can expose vGPUs to userspace, and thus to QEMU, using the VFIO API. For instance, vfio supports modular bus drivers and IOMMU drivers. An intel-vfio-gvt-d module (or extension of i915) can register as a vfio bus driver, create a struct device per vGPU, create an IOMMU group for that device, and register that device with the vfio-core. Since we don't rely on the system IOMMU for GVT-d vGPU assignment, another vGPU vendor driver (or extension of the same module) can register a "type1" compliant IOMMU driver into vfio-core. From the perspective of QEMU then, all of the existing vfio-pci code is re-used, QEMU remains largely unaware of any specifics of the vGPU being assigned, and the only necessary change so far is how QEMU traverses sysfs to find the device and thus the IOMMU group leading to the vfio group.
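(As a concrete illustration of that registration flow, here is a minimal sketch, hypothetical rather than actual GVT-g code. iommu_group_alloc(), iommu_group_add_device() and vfio_add_group_dev() are the existing kernel interfaces; "struct vgpu" and the vgpu_vfio_* callbacks are invented placeholders:)

/*
 * Minimal, hypothetical sketch of a vendor module registering one vGPU
 * with vfio-core.  The vgpu_vfio_* callbacks would mediate config
 * space, BARs, interrupts, etc. for a single vGPU instance.
 */
#include <linux/device.h>
#include <linux/iommu.h>
#include <linux/vfio.h>

struct vgpu {
	struct device dev;	/* one struct device per vGPU instance */
	/* vendor state: shadow GTTs, virtual config space, ... */
};

static int vgpu_vfio_open(void *device_data);
static void vgpu_vfio_release(void *device_data);
static ssize_t vgpu_vfio_read(void *device_data, char __user *buf,
			      size_t count, loff_t *ppos);
static ssize_t vgpu_vfio_write(void *device_data, const char __user *buf,
			       size_t count, loff_t *ppos);
static long vgpu_vfio_ioctl(void *device_data, unsigned int cmd,
			    unsigned long arg);
static int vgpu_vfio_mmap(void *device_data, struct vm_area_struct *vma);

static const struct vfio_device_ops vgpu_vfio_ops = {
	.name		= "vgpu-vfio",
	.open		= vgpu_vfio_open,
	.release	= vgpu_vfio_release,
	.read		= vgpu_vfio_read,
	.write		= vgpu_vfio_write,
	.ioctl		= vgpu_vfio_ioctl,
	.mmap		= vgpu_vfio_mmap,
};

static int vgpu_register_with_vfio(struct vgpu *vgpu)
{
	struct iommu_group *group;
	int ret;

	/* No system IOMMU behind a vGPU; allocate a group per instance */
	group = iommu_group_alloc();
	if (IS_ERR(group))
		return PTR_ERR(group);

	ret = iommu_group_add_device(group, &vgpu->dev);
	if (!ret)
		ret = vfio_add_group_dev(&vgpu->dev, &vgpu_vfio_ops, vgpu);

	iommu_group_put(group);	/* the group holds its own reference */
	return ret;
}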
> GVT-g requires pinning guest memory and querying GPA->HPA information, upon which the shadow GTTs will be updated accordingly from (GMA->GPA) to (GMA->HPA). So yes, a dummy or simple "type1" compliant IOMMU can be introduced here just for this requirement.
>
> However, there's one tricky point where I'm not sure whether the overall VFIO concept would be violated. GVT-g doesn't require the system IOMMU to function; however, the host system may enable the system IOMMU just for hardening purposes. This means two levels of translation exist (GMA->IOVA->HPA), so the dummy IOMMU driver has to request that the system IOMMU driver allocate IOVAs for VMs and then set up the IOVA->HPA mappings in the IOMMU page table. In this case, multiple VMs' translations are multiplexed in one IOMMU page table.
>
> We might need to create some group/sub-group or parent/child concepts among those IOMMUs for thorough permission control.

My thought here is that this is all abstracted through the vGPU IOMMU and device vfio backends. It's the GPU driver itself, or some vfio extension of that driver, mediating access to the device and deciding when to configure GPU MMU mappings. That driver has access to the GPA to HVA translations thanks to the type1 compliant IOMMU it implements and can pin pages as needed to create GPA to HPA mappings. That should give it all the pieces it needs to fully set up mappings for the vGPU. Whether or not there's a system IOMMU is simply an exercise for that driver. It needs to do a DMA mapping operation through the system IOMMU the same for a vGPU as if it was doing it for itself, because they are in fact one and the same. The GMA to IOVA mapping seems like an internal detail. I assume the IOVA is some sort of GPA, and the GMA is managed through mediation of the device.

> > There are a few areas where we know we'll need to extend the VFIO API to make this work, but it seems like they can all be done generically. One is that PCI BARs are described through the VFIO API as regions, and each region has a single flag describing whether mmap (i.e. direct mapping) of that region is possible. We expect that vGPUs likely need finer granularity, enabling some areas within a BAR to be trapped and forwarded as a read or write access for the vGPU-vfio-device module to emulate, while other regions, like framebuffers or texture regions, are directly mapped. I have prototype code to enable this already.
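(For reference, this is roughly what the current single-flag interface looks like from userspace, using the existing uapi in <linux/vfio.h>; the finer granularity described above would refine the all-or-nothing FLAG_MMAP check in this sketch:)

/*
 * Userspace sketch of querying BAR 0 through the existing VFIO API.
 * VFIO_DEVICE_GET_REGION_INFO and VFIO_REGION_INFO_FLAG_MMAP are the
 * current uapi; today one flag covers the whole BAR, so either all of
 * it may be mmap'd or every access is trapped through read()/write().
 */
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

static void *map_bar0(int device_fd)
{
	struct vfio_region_info info = {
		.argsz = sizeof(info),
		.index = VFIO_PCI_BAR0_REGION_INDEX,
	};

	if (ioctl(device_fd, VFIO_DEVICE_GET_REGION_INFO, &info))
		return MAP_FAILED;

	if (!(info.flags & VFIO_REGION_INFO_FLAG_MMAP))
		return MAP_FAILED;	/* trapped; use read()/write() */

	return mmap(NULL, info.size, PROT_READ | PROT_WRITE, MAP_SHARED,
		    device_fd, info.offset);
}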
> Yes, in GVT-g one BAR resource might be partitioned among multiple vGPUs. If VFIO can support such partial resource assignment, it'd be great. A similar parent/child concept might also be required here, so that any resource enumerated on a vGPU doesn't break limitations enforced on the physical device.

To be clear, I'm talking about partitioning of the BAR exposed to the guest. Partitioning of the physical BAR would be managed by the vGPU vfio device driver. For instance, when the guest mmap's a section of the virtual BAR, the vGPU device driver would map that to a portion of the physical device BAR.

> One unique requirement for GVT-g here, though, is that the vGPU device model needs to know the guest BAR configuration for proper emulation (e.g. to register an IO emulation handler with KVM). The same applies to the guest MSI vector for virtual interrupt injection. I'm not sure how this can fit into the common VFIO model. Does VFIO allow vendor-specific extensions today?

As a vfio device driver all config accesses and interrupt configuration would be forwarded to you, so I don't see this being a problem.

> > Another area is that we really don't want to proliferate each vGPU needing a new IOMMU type within vfio. The existing type1 IOMMU provides potentially the most simple mapping and unmapping interface possible. We'd therefore need to allow multiple "type1" IOMMU drivers for vfio, making type1 more of an interface specification rather than a single implementation. This is a trivial change to make within vfio and one that I believe is compatible with the existing API. Note that implementing a type1-compliant vfio IOMMU does not imply pinning and mapping every registered page. A vGPU, with mediated device access, may use this only to track the current HVA to GPA mappings for a VM. Only when a DMA is enabled for the vGPU instance is that HVA pinned and a GPA to HPA translation programmed into the GPU MMU.
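(That interface specification amounts to the handful of type1 ioctls QEMU already drives. A minimal userspace sketch against the existing uapi, assuming a container fd whose backend was selected with VFIO_SET_IOMMU; a mediated vGPU backend could merely record these ranges instead of pinning them up front:)

/*
 * Userspace sketch of the type1 mapping call that any "type1"
 * compliant vfio IOMMU backend would need to honor.  The ioctl and
 * structure are the existing uapi from <linux/vfio.h>.
 */
#include <sys/ioctl.h>
#include <linux/vfio.h>

static int map_guest_ram(int container_fd, void *hva, __u64 gpa,
			 __u64 size)
{
	struct vfio_iommu_type1_dma_map map = {
		.argsz = sizeof(map),
		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
		.vaddr = (__u64)(unsigned long)hva,	/* HVA */
		.iova  = gpa,	/* guest physical address used as IOVA */
		.size  = size,
	};

	/*
	 * A mediated backend may simply record the HVA<->GPA range here
	 * and pin lazily, once the vGPU actually enables a DMA.
	 */
	return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
}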
> > Another area of extension is how to expose a framebuffer to QEMU for seamless integration into a SPICE/VNC channel. For this I believe we could use a new region, much like we've done to expose VGA access through a vfio device file descriptor. An area within this new framebuffer region could be directly mappable in QEMU, while a non-mappable page, at a standard location with a standardized format, provides a description of the framebuffer and potentially even a communication channel to synchronize framebuffer captures. This would be new code for QEMU, but something we could share among all vGPU implementations.
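(Nothing defines that description page today, so purely as a strawman it could be a small versioned structure along these lines; every name below is invented for illustration:)

/*
 * Hypothetical layout for the proposed non-mappable description page
 * of a framebuffer region.  No such structure exists in the VFIO uapi;
 * all of these names are invented for illustration only.
 */
#include <linux/types.h>

struct vfio_vgpu_fb_desc {
	__u32 version;		/* layout version of this page */
	__u32 drm_format;	/* pixel format, e.g. a DRM fourcc */
	__u32 width;		/* visible width in pixels */
	__u32 height;		/* visible height in pixels */
	__u32 stride;		/* bytes per scanline */
	__u32 flags;		/* e.g. a "contents changed" bit */
	__u64 offset;		/* offset of the pixel data in the region */
	__u64 seq;		/* bumped whenever the framebuffer switches */
};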
> Now GVT-g already provides an interface to decode framebuffer information, with the assumption that the framebuffer will be further composited into OpenGL APIs, so the format is defined according to the OpenGL definition. Does that meet the SPICE requirement?
>
> Another thing to be added: framebuffers are frequently switched in reality, so either QEMU needs to poll or a notification mechanism is required. And since it's dynamic, having the framebuffer page directly exposed in the new region might be tricky. We could just expose the framebuffer information (including base, format, etc.) and let QEMU map it separately, outside of the VFIO interface.

Sure, we'll need to work out that interface, but it's also possible that the framebuffer region is simply remapped to another area of the device (i.e. multiple interfaces mapping the same thing) by the vfio device driver. Whether it's easier to do that or to make the framebuffer region reference another region is something we'll need to see.

> And... this works fine with the vGPU model, since software knows all the details about the framebuffer. However, in the pass-through case, who do you expect to provide that information? Is it OK to introduce vGPU-specific APIs in VFIO?

Yes, vGPU may have additional features, like a framebuffer area, that aren't present or are optional for direct assignment. Obviously we support direct assignment of GPUs for some vendors already without this feature.

> > Another obvious area to be standardized would be how to discover, create, and destroy vGPU instances. SR-IOV has a standard mechanism to create VFs in sysfs, and I would propose that vGPU vendors try to standardize on similar interfaces to enable libvirt to easily discover the vGPU capabilities of a given GPU and manage the lifecycle of a vGPU instance.
>
> Now there is no standard. We expose vGPU life-cycle management APIs through sysfs (under the i915 node), which is very Intel-specific. In reality, different vendors have quite different capabilities for their own vGPUs, so I'm not sure how standard a mechanism we can define. But this code should be minor to maintain in libvirt.

Every difference is a barrier. I imagine we can come up with some basic interfaces that everyone could use, even if they don't allow fine-tuning every detail specific to a vendor.

> > This is obviously a lot to digest, but I'd certainly be interested in hearing feedback on this proposal, as well as trying to clarify anything I've left out or misrepresented above. Another benefit of this mechanism is that direct GPU assignment and vGPU assignment use the same code within QEMU and the same API to the kernel, which should make debugging and code support between the two easier. I'd really like to start a discussion around this proposal, and of course the first open source implementation of this sort of model will really help to drive the direction it takes. Thanks!
>
> Thanks for starting this discussion. Intel will definitely work with the community on this work. Based on the earlier comments, I'm not sure whether we can use exactly the same code for direct GPU assignment and vGPU assignment, since even if we extend VFIO, some interfaces might be vGPU-specific. Does this way still achieve your end goal?

The backends will certainly be different for vGPU vs. direct assignment, but hopefully the QEMU code is almost entirely reused, modulo some features like framebuffers that are likely only to be seen on vGPU.

Thanks,

Alex