From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
        id S1757153AbbA2SwP (ORCPT );
        Thu, 29 Jan 2015 13:52:15 -0500
Received: from mail-ie0-f181.google.com ([209.85.223.181]:33793 "EHLO
        mail-ie0-f181.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
        with ESMTP id S1756458AbbA2SwL (ORCPT );
        Thu, 29 Jan 2015 13:52:11 -0500
MIME-Version: 1.0
In-Reply-To: <20150129154718.GB26493@n2100.arm.linux.org.uk>
References: <1422347154-15258-1-git-send-email-sumit.semwal@linaro.org>
        <1422347154-15258-2-git-send-email-sumit.semwal@linaro.org>
        <20150129143908.GA26493@n2100.arm.linux.org.uk>
        <20150129154718.GB26493@n2100.arm.linux.org.uk>
Date: Thu, 29 Jan 2015 13:52:09 -0500
Message-ID:
Subject: Re: [RFCv3 2/2] dma-buf: add helpers for sharing attacher
        constraints with dma-parms
From: Rob Clark
To: Russell King - ARM Linux
Cc: Sumit Semwal, LKML, "linux-media@vger.kernel.org", DRI mailing list,
        Linaro MM SIG Mailman List, "linux-arm-kernel@lists.infradead.org",
        "linux-mm@kvack.org", Linaro Kernel Mailman List,
        Tomasz Stanislawski, Daniel Vetter, Robin Murphy, Marek Szyprowski
Content-Type: text/plain; charset=UTF-8
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Jan 29, 2015 at 10:47 AM, Russell King - ARM Linux wrote:
> On Thu, Jan 29, 2015 at 09:00:11PM +0530, Sumit Semwal wrote:
>> So, short answer is, it is left to the exporter to decide. The dma-buf
>> framework should not even attempt to decide or enforce any of the
>> above.
>>
>> At each dma_buf_attach(), there's a callback to the exporter, where
>> the exporter can decide, if it intends to handle these kind of cases,
>> on the best way forward.
>>
>> The exporter might, for example, decide to migrate backing storage,
>
> That's a decision which the exporter can not take.  Think about it...
>
> If subsystem Y has mapped the buffer, it could be accessing the buffer's
> backing storage at the same time that subsystem Z tries to attach to the
> buffer.

The *theory* is that Y is map/unmap'ing the buffer around each use, so
there will be some point where things could be migrated and remapped..
in practice, I am not sure that anyone is doing this yet.

Probably it would be reasonable, if a more restrictive subsystem tried
to attach after the buffer was already allocated and mapped in a way
that doesn't meet the new constraints, to return -EBUSY.

But from a quick look it seems like there needs to be a slight fixup
so that we don't return 0 if calc_constraints() fails..
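Something along these lines is what I have in mind -- just a sketch,
not the actual patch: calc_constraints() is from the RFC, but its
signature here, plus dmabuf_is_mapped(), constraints_satisfied() and
the ->constraints field, are made up for illustration:

#include <linux/dma-buf.h>
#include <linux/device.h>
#include <linux/errno.h>

/*
 * Sketch only: reject a more restrictive attacher once the buffer is
 * already allocated/mapped, and don't swallow calc_constraints() errors.
 * Helpers and fields below are illustrative, not the real RFC code.
 */
static int example_attach(struct dma_buf *dmabuf, struct device *dev,
			  struct dma_buf_attachment *attach)
{
	struct device_dma_parameters calc_cons;
	int ret;

	ret = calc_constraints(dev, &calc_cons);	/* assumed signature */
	if (ret)
		return ret;	/* fixup: propagate instead of returning 0 */

	/*
	 * Backing storage is already pinned/mapped in a way that cannot
	 * satisfy the new attacher's constraints; migrating under an
	 * active mapping isn't safe, so refuse the attach.
	 */
	if (dmabuf_is_mapped(dmabuf) &&			/* hypothetical */
	    !constraints_satisfied(dmabuf, &calc_cons))	/* hypothetical */
		return -EBUSY;

	dmabuf->constraints = calc_cons;		/* illustrative field */
	return 0;
}

The point is only that the error path propagates instead of returning 0,
and that a late, more restrictive attacher gets -EBUSY rather than a
silent success.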
> Once the buffer has been exported to another user, the exporter has
> effectively lost control over mediating accesses to that buffer.
>
> All that it can do with the way the dma-buf API is today is to allocate
> a _different_ scatter list pointing at the same backing storage which
> satisfies the segment size and number of segments, etc.
>
> There's also another issue which you haven't addressed.  What if several
> attachments result in lowering max_segment_size and max_segment_count
> such that:
>
>         max_segment_size * max_segment_count < dmabuf->size
>
> but individually, the attachments allow dmabuf->size to be represented
> as a scatterlist?

Quite possibly for some of these edge cases, some of the dma-buf
exporters are going to need to get more clever (ie. hand off different
scatterlists to different clients).  Although I think by far the two
common cases will be "I can support anything via an iommu/mmu" and
"I need phys contig".
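To put numbers on that scenario (purely made-up values, standalone
userspace toy, nothing taken from real hardware):

#include <stdio.h>

int main(void)
{
	unsigned long dmabuf_size = 4UL << 20;		/* 4 MiB buffer */

	/* Attacher A: small segments, but as many as you like. */
	unsigned long a_seg_size  = 64UL << 10;		/* 64 KiB */
	unsigned long a_seg_count = ~0UL;		/* unlimited */

	/* Attacher B: big segments, but at most 16 of them. */
	unsigned long b_seg_size  = 4UL << 20;		/* 4 MiB */
	unsigned long b_seg_count = 16;

	/*
	 * Each attacher alone can represent the buffer as a scatterlist:
	 * A needs 64 segments of 64 KiB, B needs 1 segment of 4 MiB.
	 * The naive intersection of the two constraints cannot:
	 */
	unsigned long min_size  = a_seg_size  < b_seg_size  ? a_seg_size  : b_seg_size;
	unsigned long min_count = a_seg_count < b_seg_count ? a_seg_count : b_seg_count;

	printf("%lu * %lu = %lu < %lu\n", min_size, min_count,
	       min_size * min_count, dmabuf_size);	/* 64K * 16 = 1M < 4M */
	return 0;
}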
But that isn't an issue w/ dma-buf itself, so much as it is an issue w/
drivers.  I guess there would be more interest in fixing up drivers when
actual hw comes along that needs it..

BR,
-R

> If an exporter were to take notice of the max_segment_size and
> max_segment_count, the resulting buffer is basically unrepresentable
> as a scatterlist.
>
>> > Please consider the possible sequences of use (such as the scenario
>> > above) when creating or augmenting an API.
>> >
>>
>> I tried to think of the scenarios I could think of, but if you still
>> feel this approach doesn't help with your concerns, I'll graciously
>> accept advice to improve it.
>
> See the new one above :)
>
> --
> FTTC broadband for 0.8mile line: currently at 10.5Mbps down 400kbps up
> according to speedtest.net.