Date: Wed, 19 Jan 2022 21:32:13 +0800
Subject: Re: [PATCH v2 3/3] x86: Support huge vmalloc mappings
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Nicholas Piggin, Andrew Morton, Jonathan Corbet, Dave Hansen
CC: Benjamin Herrenschmidt, Borislav Petkov, Catalin Marinas,
    Christophe Leroy, Dave Hansen, H. Peter Anvin, Ingo Molnar,
    Michael Ellerman, Paul Mackerras, Thomas Gleixner, Will Deacon,
    Matthew Wilcox
In-Reply-To: <1642565468.c0jax91tvn.astroid@bobo.none>

On 2022/1/19 12:17, Nicholas Piggin wrote:
> Excerpts from Dave Hansen's message of January 19, 2022 3:28 am:
>> On 1/17/22 6:46 PM, Nicholas Piggin wrote:
>>>> This all sounds very fragile to me. Every time a new architecture is
>>>> added for huge vmalloc() support, the developer needs to know to go
>>>> find that architecture's module_alloc() and add this flag.
>>> This is documented in the Kconfig.
>>>
>>>   #
>>>   #  Archs that select this would be capable of PMD-sized vmaps (i.e.,
>>>   #  arch_vmap_pmd_supported() returns true), and they must make no
>>>   #  assumptions that vmalloc memory is mapped with PAGE_SIZE ptes.
>>>   #  The VM_NO_HUGE_VMAP flag can be used to prohibit arch-specific
>>>   #  allocations from using hugepages to help with this (e.g., modules
>>>   #  may require it).
>>>   #
>>>   config HAVE_ARCH_HUGE_VMALLOC
>>>           depends on HAVE_ARCH_HUGE_VMAP
>>>           bool
>>>
>>> Is it really fair to say it's *very* fragile? Surely it's reasonable to
>>> read the (not very long) documentation and understand the consequences
>>> for the arch code before enabling it.
>> Very fragile or not, I think folks are likely to get it wrong. It would
>> be nice to have it default *everyone* to safe and slow and make *sure*
> It's not safe to enable, though. That's the problem. If it was just
> modules then you'd have a point, but it could be anything.
>
>> they go look at the architecture's module code itself before enabling
>> this for modules.
> This is required not just for modules but for the whole arch code; it
> has to be looked at and a decision made that this will work.
>
>> Just from that Kconfig text, I don't think I'd know off the top of my
>> head what to do for x86, or what code I needed to go touch.
> You have to make sure arch/x86 makes no assumptions that vmalloc memory
> is backed by PAGE_SIZE ptes. If you can't do that then you shouldn't
> enable the option. The option cannot explain it any more, because any
> arch could do anything with its mappings. The module code is an example,
> not the recipe.

Hi Nick, Dave and Christophe, thanks for your review. I am a little
confused; I think:

1) For ppc/arm64, module_alloc() must set VM_NO_HUGE_VMAP, because those
architectures' set_memory_* functions can only handle PAGE_SIZE mappings,
due to the limitation of apply_to_page_range().

2) For x86's module_alloc(), however, adding VM_NO_HUGE_VMAP only avoids
fragmentation: x86's __change_page_attr() functions will split the huge
mapping, so the flag is not a must.

The behaviour above only matters when STRICT_MODULE_RWX is enabled, so
either

1) adding a unified function to set the vm flags (suggested by Dave), or

2) adding the vm flags with some comments to each arch's module_alloc()

would be acceptable. For the unified-function approach, we could make this
the default recipe with STRICT_MODULE_RWX, and fold two more vm flags into
it, e.g.,

+unsigned long module_alloc_vm_flags(bool need_flush_reset_perms)
+{
+	unsigned long vm_flags = VM_DEFER_KMEMLEAK;
+
+	if (need_flush_reset_perms)
+		vm_flags |= VM_FLUSH_RESET_PERMS;
+	/*
+	 * Modules use a single, large vmalloc(). Different permissions
+	 * are applied later and will fragment huge mappings or even
+	 * fail in set_memory_* on some architectures. Avoid using
+	 * huge pages for modules.
+	 */
+	if (IS_ENABLED(CONFIG_STRICT_MODULE_RWX))
+		vm_flags |= VM_NO_HUGE_VMAP;
+
+	return vm_flags;
+}

which would then be called from each arch's module_alloc(); a sketch of
such a call site is below.

Any suggestion? Many thanks.

>
> Thanks,
> Nick
> .
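
As an illustration of the unified-function option, here is a minimal
sketch of what an arch call site could look like, using x86 as the
example. It assumes the module_alloc_vm_flags() helper proposed above;
__vmalloc_node_range(), MODULE_ALIGN, MODULES_VADDR/MODULES_END/
MODULES_LEN and get_module_load_offset() are the existing x86 symbols,
passing true for need_flush_reset_perms is purely illustrative, and the
KASAN shadow handling of the real function is omitted. This is a sketch
of the proposal, not merged code.

/*
 * Illustrative only: how arch/x86/kernel/module.c could consume the
 * proposed module_alloc_vm_flags() helper instead of open-coding the
 * vm_flags. KASAN shadow allocation and error handling are omitted.
 */
void *module_alloc(unsigned long size)
{
	void *p;

	if (PAGE_ALIGN(size) > MODULES_LEN)
		return NULL;

	/*
	 * VM_DEFER_KMEMLEAK, VM_FLUSH_RESET_PERMS and (under
	 * STRICT_MODULE_RWX) VM_NO_HUGE_VMAP all come from the shared
	 * helper; "true" here is an illustrative choice for
	 * need_flush_reset_perms, not a statement about what x86 needs.
	 */
	p = __vmalloc_node_range(size, MODULE_ALIGN,
				 MODULES_VADDR + get_module_load_offset(),
				 MODULES_END, GFP_KERNEL, PAGE_KERNEL,
				 module_alloc_vm_flags(true),
				 NUMA_NO_NODE, __builtin_return_address(0));

	return p;
}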