From: Mark Rutland
To: Tong Tiangen
Cc: James Morse, Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Robin Murphy, Dave Hansen, Catalin Marinas, Will Deacon, Alexander Viro,
	Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras, x86@kernel.org,
	"H. Peter Anvin", linuxppc-dev@lists.ozlabs.org,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, Kefeng Wang, Xie XiuQi, Guohanjun
Subject: Re: [PATCH -next v4 3/7] arm64: add support for machine check error safe
Date: Thu, 26 May 2022 10:50:23 +0100
References: <20220420030418.3189040-1-tongtiangen@huawei.com>
	<20220420030418.3189040-4-tongtiangen@huawei.com>
	<46e5954c-a9a8-f4a8-07cc-de42e2753051@huawei.com>
	<87bdb1c6-5803-d9c0-9208-432027ae1d8b@huawei.com>
In-Reply-To: <87bdb1c6-5803-d9c0-9208-432027ae1d8b@huawei.com>

On Thu, May 26, 2022 at 11:36:41AM +0800, Tong Tiangen wrote:
> 
> 
> On 2022/5/25 16:30, Mark Rutland wrote:
> > On Thu, May 19, 2022 at 02:29:54PM +0800, Tong Tiangen wrote:
> > > 
> > > 
> > > On 2022/5/13 23:26, Mark Rutland wrote:
> > > > On Wed, Apr 20, 2022 at 03:04:14AM +0000, Tong Tiangen wrote:
> > > > > During the processing of arm64 kernel hardware memory errors (do_sea()),
> > > > > if the error is consumed in the kernel, the current handling is to
> > > > > panic. However, that is not optimal.
> > > > > 
> > > > > Take uaccess for example: if the uaccess operation fails due to a
> > > > > memory error, only the user process is affected, so killing the user
> > > > > process and isolating the user page with hardware memory errors is a
> > > > > better choice.
> > > > 
> > > > Conceptually, I'm fine with the idea of constraining what we do for a
> > > > true uaccess, but I don't like the implementation of this at all, and I
> > > > think we first need to clean up the arm64 extable usage to clearly
> > > > distinguish a uaccess from another access.
> > > 
> > > OK, using EX_TYPE_UACCESS and making this extable type recoverable is
> > > more reasonable.
> > 
> > Great.
> > 
> > > For EX_TYPE_UACCESS_ERR_ZERO, today we use it for kernel accesses in a
> > > couple of cases, such as
> > > get_user/futex/__user_cache_maint()/__user_swpX_asm(),
> > 
> > Those are all user accesses.
> > 
> > However, __get_kernel_nofault() and __put_kernel_nofault() use
> > EX_TYPE_UACCESS_ERR_ZERO by way of __{get,put}_mem_asm(), so we'd need to
> > refactor that code to split the user/kernel cases higher up the callchain.
> > 
> > > your suggestion is:
> > > get_user continues to use EX_TYPE_UACCESS_ERR_ZERO and the other cases
> > > use a new type EX_TYPE_FIXUP_ERR_ZERO?
> > 
> > Yes, that's the rough shape. We could make the latter
> > EX_TYPE_KACCESS_ERR_ZERO to be clearly analogous to
> > EX_TYPE_UACCESS_ERR_ZERO, and with that I suspect we could remove
> > EX_TYPE_FIXUP.
> > 
> > Thanks,
> > Mark.
> 
> According to your suggestion, I think the definitions look like this:
> 
> #define EX_TYPE_NONE                    0
> #define EX_TYPE_FIXUP                   1    --> delete
> #define EX_TYPE_BPF                     2
> #define EX_TYPE_UACCESS_ERR_ZERO        3
> #define EX_TYPE_LOAD_UNALIGNED_ZEROPAD  4
> #define EX_TYPE_UACCESS                 xx   --> add
> #define EX_TYPE_KACCESS_ERR_ZERO        xx   --> add
> [The values defined by the macros here are temporary]

Almost; you don't need to add EX_TYPE_UACCESS here, as you can use
EX_TYPE_UACCESS_ERR_ZERO for that.

We already have:

| #define _ASM_EXTABLE_UACCESS_ERR(insn, fixup, err)		\
|	_ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, err, wzr)

... and we can add:

| #define _ASM_EXTABLE_UACCESS(insn, fixup)			\
|	_ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, wzr, wzr)

... and maybe we should use 'xzr' rather than 'wzr' for clarity.

> There are two points to modify:
> 
> 1. __get_kernel_nofault() and __put_kernel_nofault() use
> EX_TYPE_KACCESS_ERR_ZERO; other places using EX_TYPE_UACCESS_ERR_ZERO
> remain unchanged.

That sounds right to me. This will require refactoring __raw_{get,put}_mem()
and __{get,put}_mem_asm().

> 2. Delete EX_TYPE_FIXUP.
> 
> There is no doubt about the others. As for EX_TYPE_FIXUP, I think it needs
> to be retained; _cond_extable(EX_TYPE_FIXUP) is still in use in assembler.h.

We use _cond_extable for cache maintenance uaccesses, so those should be moved
over to EX_TYPE_UACCESS_ERR_ZERO. We can rename _cond_extable to
_cond_uaccess_extable for clarity.

That will require restructuring asm-extable.h a bit. If that turns out to be
painful I'm happy to take a look.

Thanks,
Mark.
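
Pulling these suggestions together, asm-extable.h would end up with roughly
the following set of extable types and uaccess helpers. This is a minimal
sketch of the shape being discussed, not the final upstream code: the numeric
values are placeholders, and _ASM_EXTABLE_KACCESS_ERR_ZERO is an assumed
analogue of the existing UACCESS macro rather than something quoted above.

| /* arch/arm64/include/asm/asm-extable.h (sketch; values are placeholders) */
| #define EX_TYPE_NONE			0
| #define EX_TYPE_BPF			1
| #define EX_TYPE_UACCESS_ERR_ZERO	2
| #define EX_TYPE_KACCESS_ERR_ZERO	3
| #define EX_TYPE_LOAD_UNALIGNED_ZEROPAD	4
| 
| /*
|  * All uaccess fixups are expressed via the ERR_ZERO form; passing xzr
|  * simply discards the error/zero value when the caller doesn't want it.
|  */
| #define _ASM_EXTABLE_UACCESS(insn, fixup)			\
| 	_ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, xzr, xzr)
| 
| #define _ASM_EXTABLE_UACCESS_ERR(insn, fixup, err)		\
| 	_ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, err, xzr)

With EX_TYPE_FIXUP gone and EX_TYPE_KACCESS_ERR_ZERO added, the fixup code can
tell a true uaccess apart from a kernel access, which is what the machine
check recovery path needs.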
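
For the __{get,put}_mem_asm() refactoring, one way to split the user and
kernel cases higher up the callchain is to thread the extable type down as a
token-pasted parameter. The following is a sketch under the assumption that an
_ASM_EXTABLE_KACCESS_ERR_ZERO counterpart exists, not necessarily the exact
upstream implementation:

| /* arch/arm64/include/asm/uaccess.h (sketch) */
| #define __get_mem_asm(load, reg, x, addr, err, type)			\
| 	asm volatile(							\
| 	"1:	" load "	" reg "1, [%2]\n"			\
| 	"2:\n"								\
| 	_ASM_EXTABLE_##type##ACCESS_ERR_ZERO(1b, 2b, %w0, %w1)		\
| 	: "+r" (err), "=&r" (x)						\
| 	: "r" (addr))

__raw_get_user() would then pass U (keeping the unprivileged LDTR form and a
uaccess extable entry), while __get_kernel_nofault() would pass K (plain LDR
and a kernel-access extable entry), so the extable entry records whether the
faulting instruction was a true uaccess.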
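
For the cache maintenance path, the assembler-side change is mostly a rename
plus switching the emitted entry type. A sketch, assuming an assembly helper
named _asm_extable_uaccess that emits an EX_TYPE_UACCESS_ERR_ZERO entry via
the _ASM_EXTABLE_UACCESS() macro shown earlier:

| /* arch/arm64/include/asm/asm-extable.h, __ASSEMBLY__ side (sketch) */
| 	.macro	_asm_extable_uaccess, insn, fixup
| 	_ASM_EXTABLE_UACCESS(\insn, \fixup)
| 	.endm
| 
| 	/* Create an extable entry only if a fixup label was supplied. */
| 	.macro	_cond_uaccess_extable, insn, fixup
| 	.ifnc	\fixup,
| 	_asm_extable_uaccess	\insn, \fixup
| 	.endif
| 	.endm

Callers in assembler.h that currently use _cond_extable for user cache
maintenance would switch to _cond_uaccess_extable, so those accesses are also
classified as uaccesses and EX_TYPE_FIXUP can be removed.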