From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: <linux-kernel-owner@vger.kernel.org>
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1756094AbbBCQ6q (ORCPT ); Tue, 3 Feb 2015 11:58:46 -0500
Received: from pandora.arm.linux.org.uk ([78.32.30.218]:45695 "EHLO
	pandora.arm.linux.org.uk" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1753949AbbBCQ6n (ORCPT );
	Tue, 3 Feb 2015 11:58:43 -0500
Date: Tue, 3 Feb 2015 16:58:29 +0000
From: Russell King - ARM Linux <linux@arm.linux.org.uk>
To: Arnd Bergmann
Cc: linaro-mm-sig@lists.linaro.org, Linaro Kernel Mailman List,
	Robin Murphy, LKML, DRI mailing list, "linux-mm@kvack.org",
	Rob Clark, Daniel Vetter, Tomasz Stanislawski,
	linux-arm-kernel@lists.infradead.org, "linux-media@vger.kernel.org"
Subject: Re: [Linaro-mm-sig] [RFCv3 2/2] dma-buf: add helpers for sharing
	attacher constraints with dma-parms
Message-ID: <20150203165829.GW8656@n2100.arm.linux.org.uk>
References: <1422347154-15258-1-git-send-email-sumit.semwal@linaro.org>
	<3783167.LiVXgA35gN@wuerfel>
	<20150203155404.GV8656@n2100.arm.linux.org.uk>
	<6906596.JU5vQoa1jV@wuerfel>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <6906596.JU5vQoa1jV@wuerfel>
User-Agent: Mutt/1.5.23 (2014-03-12)
Sender: linux-kernel-owner@vger.kernel.org
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Feb 03, 2015 at 05:12:40PM +0100, Arnd Bergmann wrote:
> On Tuesday 03 February 2015 15:54:04 Russell King - ARM Linux wrote:
> > On Tue, Feb 03, 2015 at 04:31:13PM +0100, Arnd Bergmann wrote:
> > > The dma_map_* interfaces assign the virtual addresses internally,
> > > using typically either a global address space for all devices, or one
> > > address space per device.
> > 
> > We shouldn't be doing one address space per device for precisely this
> > reason.  We should be doing one address space per *bus*.  I did have
> > a nice diagram to illustrate the point in my previous email, but I
> > deleted it, I wish I hadn't... briefly:
> > 
> > Fig. 1.
> >                                                  +------------------+
> >                                                  |+-----+  device   |
> > CPU--L1cache--L2cache--Memory--SysMMU---<iobus>----IOMMU-->         |
> >                                                  |+-----+           |
> >                                                  +------------------+
> > 
> > Fig.1 represents what I'd call the "GPU" issue that we're talking about
> > in this thread.
> > 
> > Fig. 2.
> > CPU--L1cache--L2cache--Memory--SysMMU---<iobus>--IOMMU--device
> > 
> > The DMA API should be responsible (at the very least) for everything on
> > the left of "<iobus>", and should be providing a dma_addr_t which is
> > representative of what the device (in Fig.1) as a whole sees.  That's
> > the "system" part.
> > 
> > I believe this is the approach which is taken by x86 and similar
> > platforms, simply because they tend not to have an IOMMU on individual
> > devices (and if they did, eg, on a PCI card, it's clearly the
> > responsibility of the device driver.)
> > 
> > Whether the DMA API also handles the IOMMU in Fig.1 or 2 is questionable.
> > For fig.2, it is entirely possible that the same device could appear
> > without an IOMMU, and in that scenario, you would want the IOMMU to be
> > handled transparently.
> > 
> > However, by doing so for everything, you run into exactly the problem
> > which is being discussed here - the need to separate out the cache
> > coherency from the IOMMU aspects.  You probably also have a setup very
> > similar to fig.1 (which is certainly true of Vivante GPUs.)
> > 
> > If you have the need to separately control both, then using the DMA API
> > to encapsulate both does not make sense - at which point, the DMA API
> > should be responsible for the minimum only - in other words, everything
> > to the left of <iobus> (so including the system MMU.)  The control of
> > the device IOMMU should be the responsibility of the device driver in
> > this case.
> > 
> > So, dma_map_sg() would be responsible for dealing with the CPU cache
> > coherency issues, and setting up the system MMU.  dma_sync_*() would
> > be responsible for the CPU cache coherency issues, and dma_unmap_sg()
> > would (again) deal with the CPU cache and tear down the system MMU
> > mappings.
> > 
> > Meanwhile, the device driver has ultimate control over its IOMMU, the
> > creation and destruction of mappings and context switches at the
> > appropriate times.
> 
> I agree for the case you are describing here. From what I understood
> from Rob was that he is looking at something more like:
> 
> Fig 3
> CPU--L1cache--L2cache--Memory--IOMMU---<iobus>--device
> 
> where the IOMMU controls one or more contexts per device, and is
> shared across GPU and non-GPU devices. Here, we need to use the
> dma-mapping interface to set up the IO page table for any device
> that is unable to address all of system RAM, and we can use it
> for purposes like isolation of the devices. There are also cases
> where using the IOMMU is not optional.

Okay, but switching contexts is not something which the DMA API has
any knowledge of (so it can't know which context to associate with
which mapping.)  While it knows which device, it has no knowledge
(nor is there any way for it to gain knowledge) about contexts.

My personal view is that extending the DMA API in this way feels quite
dirty - it's a violation of the DMA API design, which is to (a) demarcate
the buffer ownership between CPU and DMA agent, and (b) to translate
buffer locations into a cookie which device drivers can use to instruct
their device to access that memory.  To see why, consider... that you
map a buffer to a device in context A, and then you switch to context B,
which means the dma_addr_t given previously is no longer valid.  You
then try to unmap it... which is normally done using the (now no longer
valid) dma_addr_t.

It seems to me that to support this at DMA API level, we would need to
completely revamp the DMA API, which IMHO isn't going to be nice.  (It
would mean that we end up with three APIs - the original PCI DMA API,
the existing DMA API, and some new DMA API.)

Do we have any views on how common this feature is?

-- 
FTTC broadband for 0.8mile line: currently at 10.5Mbps down 400kbps up
according to speedtest.net.
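
A minimal sketch, in kernel C, of the split described above: dma_map_sg()
covers the CPU cache maintenance and the system MMU (everything to the
left of <iobus>), after which the driver installs the resulting bus
addresses into a per-context IOMMU domain that it alone owns.
dma_map_sg(), dma_unmap_sg(), sg_dma_address(), sg_dma_len(), iommu_map()
and iommu_unmap() are the kernel's existing interfaces; struct gpu_ctx,
gpu_map_buffer() and the iova handling are hypothetical driver-private
details.

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/iommu.h>
#include <linux/scatterlist.h>

struct gpu_ctx {
	struct iommu_domain *domain;	/* one domain per GPU context */
};

static int gpu_map_buffer(struct device *dev, struct gpu_ctx *ctx,
			  struct scatterlist *sgl, int nents,
			  unsigned long iova)
{
	unsigned long base = iova;
	struct scatterlist *sg;
	int i, ret, mapped;

	/* DMA API: CPU cache maintenance plus the system MMU -
	 * everything to the left of <iobus>. */
	mapped = dma_map_sg(dev, sgl, nents, DMA_BIDIRECTIONAL);
	if (!mapped)
		return -ENOMEM;

	/* Driver: make this context's IOMMU emit the bus addresses the
	 * DMA API handed back.  iommu_map() nominally takes a
	 * phys_addr_t; feeding it a dma_addr_t is the Fig.1 layering,
	 * where the output side of the device IOMMU is <iobus>. */
	for_each_sg(sgl, sg, mapped, i) {
		ret = iommu_map(ctx->domain, iova, sg_dma_address(sg),
				sg_dma_len(sg), IOMMU_READ | IOMMU_WRITE);
		if (ret)
			goto undo;
		iova += sg_dma_len(sg);
	}
	return 0;

undo:
	if (iova != base)
		iommu_unmap(ctx->domain, base, iova - base);
	dma_unmap_sg(dev, sgl, nents, DMA_BIDIRECTIONAL);
	return ret;
}

The unmap path would mirror this: the driver tears down its own IOMMU
entries first, then dma_unmap_sg() deals with the CPU cache and the
system MMU - exactly the division of labour argued for above.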
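
The context-switch hazard can be sketched in the same terms.  Assuming
one iommu_domain per context and the kernel's iommu_attach_device() /
iommu_detach_device(), a hypothetical switch looks like:

/* Swap which per-context domain translates for the device.  Every
 * mapping set up while "old" was attached becomes unreachable by the
 * device, and the DMA API is never told.  If the DMA API itself had
 * been managing these per-context tables (Fig 3), any dma_addr_t it
 * returned for the old context would now be a stale cookie - the
 * unmap problem described above. */
static int gpu_switch_ctx(struct device *dev, struct gpu_ctx *old,
			  struct gpu_ctx *next)
{
	iommu_detach_device(old->domain, dev);
	return iommu_attach_device(next->domain, dev);
}

While the driver owns this switch (as in Fig.1/Fig.2), the dma_addr_t
values the DMA API handed out stay valid, because they describe only the
<iobus> side; the breakage appears only once the per-context translation
is pushed down into the DMA API, as in Fig 3.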