From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH] arm64/mm: add fallback option to allocate virtually contiguous memory
From: Anshuman Khandual
To: Steven Price, Sudarshan Rajagopalan, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Cc: Catalin Marinas, Will Deacon, Mark Rutland, Logan Gunthorpe, David Hildenbrand, Andrew Morton
Date: Thu, 10 Sep 2020 16:20:42 +0530
Message-ID: <145c57a3-1753-3ff8-4353-3bf7bac0b7de@arm.com>
References: <01010174769e2b68-a6f3768e-aef8-43c7-b357-a8cb1e17d3eb-000000@us-west-2.amazonses.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 09/10/2020 01:57 PM, Steven Price wrote:
> On 10/09/2020 07:05, Sudarshan Rajagopalan wrote:
>> When section mappings are enabled, we allocate vmemmap pages from physically
>> contiguous memory of size PMD_SIZE using vmemmap_alloc_block_buf(). Section
>> mappings are good for reducing TLB pressure. But when the system is highly
>> fragmented and memory blocks are being hot-added at runtime, it's possible
>> that such physically contiguous memory allocations can fail. Rather than
>> failing the memory hot-add procedure, add a fallback option to allocate
>> vmemmap pages from discontiguous pages using vmemmap_populate_basepages().
>>
>> Signed-off-by: Sudarshan Rajagopalan
>> Cc: Catalin Marinas
>> Cc: Will Deacon
>> Cc: Anshuman Khandual
>> Cc: Mark Rutland
>> Cc: Logan Gunthorpe
>> Cc: David Hildenbrand
>> Cc: Andrew Morton
>> Cc: Steven Price
>> ---
>>  arch/arm64/mm/mmu.c | 15 ++++++++++++---
>>  1 file changed, 12 insertions(+), 3 deletions(-)
>>
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index 75df62f..a46c7d4 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -1100,6 +1100,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
>>      p4d_t *p4dp;
>>      pud_t *pudp;
>>      pmd_t *pmdp;
>> +    int ret = 0;
>>
>>      do {
>>          next = pmd_addr_end(addr, end);
>> @@ -1121,15 +1122,23 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
>>              void *p = NULL;
>>
>>              p = vmemmap_alloc_block_buf(PMD_SIZE, node, altmap);
>> -            if (!p)
>> -                return -ENOMEM;
>> +            if (!p) {
>> +#ifdef CONFIG_MEMORY_HOTPLUG
>> +                vmemmap_free(start, end, altmap);
>> +#endif
>> +                ret = -ENOMEM;
>> +                break;
>> +            }
>>
>>              pmd_set_huge(pmdp, __pa(p), __pgprot(PROT_SECT_NORMAL));
>>          } else
>>              vmemmap_verify((pte_t *)pmdp, node, addr, next);
>>      } while (addr = next, addr != end);
>>
>> -    return 0;
>> +    if (ret)
>> +        return vmemmap_populate_basepages(start, end, node, altmap);
>> +    else
>> +        return ret;
>
> Style comment: I find this usage of 'ret' confusing. When we assign -ENOMEM
> above that is never actually the return value of the function (in that case
> vmemmap_populate_basepages() provides the actual return value).

Right.

> Also the "return ret" is misleading since we know by that point that ret==0
> (and the 'else' is redundant).

Right.

> Can you not just move the call to vmemmap_populate_basepages() up to just
> after the (possible) vmemmap_free() call and remove the 'ret' variable?
>
> AFAICT the call to vmemmap_free() also doesn't need the #ifdef as the
> function is a no-op if CONFIG_MEMORY_HOTPLUG isn't set.

Right, CONFIG_MEMORY_HOTPLUG is not required.

> I also feel you need at least a comment to explain Anshuman's point that it
> looks like you're freeing an unmapped area. Although if I'm reading the code
> correctly it seems like the unmapped area will just be skipped.

The proposed vmemmap_free() call attempts to free the entire requested vmemmap
range [start, end] when an intermediate PMD entry cannot be allocated. Hence,
even if vmemmap_free() could skip an unmapped area (will double check on that),
it unnecessarily walks large sections of unmapped range which could never have
been mapped in the first place.

So basically there could be two different methods for doing this fallback.

1. Call vmemmap_populate_basepages() for each section whose PMD_SIZE allocation fails

	- vmemmap_free() need not be called

2. Abort at the first instance of PMD_SIZE allocation failure

	- Call vmemmap_free() to unmap all sections mapped till that point
	- Call vmemmap_populate_basepages() to map the entire requested range

The proposed patch mixes both approaches. Regardless, the first approach seems
better and is also what the vmemmap_populate_hugepages() implementation on x86
does.
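For illustration, a rough and untested sketch of the first approach on the
arm64 side could look like the below. It keeps the existing loop and only
falls back to base pages for the single PMD range that failed; the
[addr, next) fallback range is an assumption here, modelled on the x86
vmemmap_populate_hugepages() path (IIRC x86 also skips the fallback entirely
when an altmap is in use, which is glossed over in this sketch):

	/*
	 * In the pmd_none() branch of arm64's vmemmap_populate() loop,
	 * instead of returning -ENOMEM straight away:
	 */
	p = vmemmap_alloc_block_buf(PMD_SIZE, node, altmap);
	if (p) {
		pmd_set_huge(pmdp, __pa(p), __pgprot(PROT_SECT_NORMAL));
	} else if (vmemmap_populate_basepages(addr, next, node, altmap)) {
		/* Base pages could not be allocated either, give up */
		return -ENOMEM;
	}

With this, neither vmemmap_free() nor the 'ret' variable is needed, and
sections that were already mapped with PMD_SIZE blocks stay intact when a
later allocation fails.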
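For comparison, the second approach with Steven's suggested cleanup (no 'ret'
variable, no #ifdef) would collapse the failure path of the posted patch to
roughly the following, again just a sketch:

	p = vmemmap_alloc_block_buf(PMD_SIZE, node, altmap);
	if (!p) {
		/*
		 * Tear down whatever has been mapped so far and retry the
		 * whole request with base pages. vmemmap_free() is a no-op
		 * when CONFIG_MEMORY_HOTPLUG is not enabled, hence no #ifdef.
		 */
		vmemmap_free(start, end, altmap);
		return vmemmap_populate_basepages(start, end, node, altmap);
	}

	pmd_set_huge(pmdp, __pa(p), __pgprot(PROT_SECT_NORMAL));

This still makes vmemmap_free() walk the not yet mapped tail of [start, end],
which is why the first approach looks preferable.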