From mboxrd@z Thu Jan 1 00:00:00 1970
From: Tong Tiangen <tongtiangen@huawei.com>
To: Mark Rutland, James Morse, Andrew Morton, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Robin Murphy, Dave Hansen, Catalin Marinas,
	Will Deacon, Alexander Viro, Michael Ellerman,
	Benjamin Herrenschmidt, Paul Mackerras, "H. Peter Anvin"
Cc: Kefeng Wang, Xie XiuQi, Guohanjun, Tong Tiangen
Subject: [PATCH -next v4 3/7] arm64: add support for machine check error safe
Date: Wed, 20 Apr 2022 03:04:14 +0000
Message-ID: <20220420030418.3189040-4-tongtiangen@huawei.com>
In-Reply-To: <20220420030418.3189040-1-tongtiangen@huawei.com>
References: <20220420030418.3189040-1-tongtiangen@huawei.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
X-Mailing-List: linux-kernel@vger.kernel.org

During the handling of arm64 kernel hardware memory errors (do_sea()), the
kernel currently panics whenever such an error is consumed in kernel mode.
That is not always optimal: take uaccess as an example. If a uaccess
operation fails because of a memory error, only the user process is
affected, so killing that process and isolating the faulty user page is a
better choice than bringing down the whole machine.

This patch only enables the machine check safe framework. It adds an
exception fixup step before the kernel panic in do_sea(), and limits it to
hardware memory errors consumed in kernel mode on behalf of user-mode
processes. If the fixup succeeds, the panic can be avoided. Consistent
with PPC/x86, this is implemented under CONFIG_ARCH_HAS_COPY_MC.

Also add copy_mc_to_user() to include/linux/uaccess.h; this helper is
used when CONFIG_ARCH_HAS_COPY_MC is enabled.
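As a rough illustration (not part of this patch), the snippet below sketches how a kernel caller that may read from memory with uncorrected hardware errors could use copy_mc_to_user() in place of copy_to_user(). The function and buffer names are made up for the example; only the copy_mc_to_user() call itself comes from this series.

#include <linux/types.h>
#include <linux/errno.h>
#include <linux/uaccess.h>

/*
 * Hypothetical helper: copy a kernel buffer that might live in poisoned
 * memory out to user space. Like copy_to_user(), copy_mc_to_user()
 * returns the number of bytes that could NOT be copied; zero means the
 * whole buffer was copied.
 */
static ssize_t example_copy_out(void __user *ubuf, const void *kbuf, size_t len)
{
	unsigned long left;

	left = copy_mc_to_user((__force void *)ubuf, kbuf, len);
	if (left)
		return -EFAULT;

	return len;
}

With CONFIG_ARCH_HAS_COPY_MC disabled, or on architectures that keep the generic fallback, the helper simply behaves like raw_copy_to_user() and offers no extra protection against memory errors.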
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
---
 arch/arm64/Kconfig               |  1 +
 arch/arm64/include/asm/extable.h |  1 +
 arch/arm64/mm/extable.c          | 17 +++++++++++++++++
 arch/arm64/mm/fault.c            | 27 ++++++++++++++++++++++++++-
 include/linux/uaccess.h          |  9 +++++++++
 5 files changed, 54 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index d9325dd95eba..012e38309955 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -19,6 +19,7 @@ config ARM64
 	select ARCH_ENABLE_SPLIT_PMD_PTLOCK if PGTABLE_LEVELS > 2
 	select ARCH_ENABLE_THP_MIGRATION if TRANSPARENT_HUGEPAGE
 	select ARCH_HAS_CACHE_LINE_SIZE
+	select ARCH_HAS_COPY_MC if ACPI_APEI_GHES
 	select ARCH_HAS_CURRENT_STACK_POINTER
 	select ARCH_HAS_DEBUG_VIRTUAL
 	select ARCH_HAS_DEBUG_VM_PGTABLE
diff --git a/arch/arm64/include/asm/extable.h b/arch/arm64/include/asm/extable.h
index 72b0e71cc3de..f80ebd0addfd 100644
--- a/arch/arm64/include/asm/extable.h
+++ b/arch/arm64/include/asm/extable.h
@@ -46,4 +46,5 @@ bool ex_handler_bpf(const struct exception_table_entry *ex,
 #endif /* !CONFIG_BPF_JIT */
 
 bool fixup_exception(struct pt_regs *regs);
+bool fixup_exception_mc(struct pt_regs *regs);
 #endif
diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
index 489455309695..4f0083a550d4 100644
--- a/arch/arm64/mm/extable.c
+++ b/arch/arm64/mm/extable.c
@@ -9,6 +9,7 @@
 #include
 #include
+#include
 
 static inline unsigned long
 get_ex_fixup(const struct exception_table_entry *ex)
@@ -84,3 +85,19 @@ bool fixup_exception(struct pt_regs *regs)
 
 	BUG();
 }
+
+bool fixup_exception_mc(struct pt_regs *regs)
+{
+	const struct exception_table_entry *ex;
+
+	ex = search_exception_tables(instruction_pointer(regs));
+	if (!ex)
+		return false;
+
+	/*
+	 * This is not complete, More Machine check safe extable type can
+	 * be processed here.
+	 */
+
+	return false;
+}
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 77341b160aca..a9e6fb1999d1 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -695,6 +695,29 @@ static int do_bad(unsigned long far, unsigned int esr, struct pt_regs *regs)
 	return 1; /* "fault" */
 }
 
+static bool arm64_do_kernel_sea(unsigned long addr, unsigned int esr,
+				struct pt_regs *regs, int sig, int code)
+{
+	if (!IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC))
+		return false;
+
+	if (user_mode(regs) || !current->mm)
+		return false;
+
+	if (apei_claim_sea(regs) < 0)
+		return false;
+
+	if (!fixup_exception_mc(regs))
+		return false;
+
+	set_thread_esr(0, esr);
+
+	arm64_force_sig_fault(sig, code, addr,
+		"Uncorrected hardware memory error in kernel-access\n");
+
+	return true;
+}
+
 static int do_sea(unsigned long far, unsigned int esr, struct pt_regs *regs)
 {
 	const struct fault_info *inf;
@@ -720,7 +743,9 @@ static int do_sea(unsigned long far, unsigned int esr, struct pt_regs *regs)
 		 */
 		siaddr = untagged_addr(far);
 	}
-	arm64_notify_die(inf->name, regs, inf->sig, inf->code, siaddr, esr);
+
+	if (!arm64_do_kernel_sea(siaddr, esr, regs, inf->sig, inf->code))
+		arm64_notify_die(inf->name, regs, inf->sig, inf->code, siaddr, esr);
 
 	return 0;
 }
diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index 546179418ffa..884661b29c17 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -174,6 +174,15 @@ copy_mc_to_kernel(void *dst, const void *src, size_t cnt)
 }
 #endif
 
+#ifndef copy_mc_to_user
+static inline unsigned long __must_check
+copy_mc_to_user(void *dst, const void *src, size_t cnt)
+{
+	check_object_size(src, cnt, true);
+	return raw_copy_to_user(dst, src, cnt);
+}
+#endif
+
 static __always_inline void pagefault_disabled_inc(void)
 {
 	current->pagefault_disabled++;
-- 
2.25.1
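A note on the #ifndef copy_mc_to_user guard added to include/linux/uaccess.h above: the raw_copy_to_user()-based body is only a default and provides no machine-check handling by itself. An architecture that actually recovers from memory errors during the copy is expected to supply its own implementation and hide the generic one, using the same define-the-name pattern already used for copy_mc_to_kernel(). The sketch below shows only that override pattern; the identifiers are generic and the arm64 body is intentionally omitted, since it is introduced by later patches in this series.

/*
 * arch/<arch>/include/asm/uaccess.h -- sketch of the override pattern,
 * not code from this series.
 */
#ifdef CONFIG_ARCH_HAS_COPY_MC
unsigned long __must_check
copy_mc_to_user(void *to, const void *from, unsigned long n);
/* Defining the macro hides the generic fallback in linux/uaccess.h. */
#define copy_mc_to_user copy_mc_to_user
#endif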