Subject: Re: [PATCH] arm64: entry: Improve the performance of system calls
To: Mark Rutland
CC: Catalin Marinas, Will Deacon, linux-arm-kernel
References: <20210903121950.2284-1-thunder.leizhen@huawei.com> <20210914095436.GA26544@C02TD0UTHF1T.local>
From: "Leizhen (ThunderTown)"
Message-ID: <1156204d-b48f-8416-a805-78274463bc81@huawei.com>
Date: Tue, 14 Sep 2021 19:23:35 +0800
In-Reply-To: <20210914095436.GA26544@C02TD0UTHF1T.local>

On 2021/9/14 17:55, Mark Rutland wrote:
> Hi,
>
> On Fri, Sep 03, 2021 at 08:19:50PM +0800, Zhen Lei wrote:
>> Commit 582f95835a8f ("arm64: entry: convert el0_sync to C") converted lots
>> of functions from assembly to C, this greatly improves readability. But
>> el0_svc()/el0_svc_compat() is in response to system call requests from
>> user mode and may be in the hot path.
>>
>> Although the SVC is in the first case of the switch statement in C, the
>> compiler optimizes the switch statement as a whole, and does not give SVC
>> a small boost.
>>
>> Use "likely()" to help SVC directly invoke its handler after a simple
>> judgment to avoid entering the switch table lookup process.
>>
>> After:
>> 0000000000000ff0 :
>>  ff0:	d503245f	bti	c
>>  ff4:	d503233f	paciasp
>>  ff8:	a9bf7bfd	stp	x29, x30, [sp, #-16]!
>>  ffc:	910003fd	mov	x29, sp
>> 1000:	d5385201	mrs	x1, esr_el1
>> 1004:	531a7c22	lsr	w2, w1, #26
>> 1008:	f100545f	cmp	x2, #0x15
>> 100c:	540000a1	b.ne	1020
>> 1010:	97fffe14	bl	860
>> 1014:	a8c17bfd	ldp	x29, x30, [sp], #16
>> 1018:	d50323bf	autiasp
>> 101c:	d65f03c0	ret
>> 1020:	f100705f	cmp	x2, #0x1c
>
> It would be helpful if you could state which toolchain and config was
> used to generate the above.

gcc version 7.3.0 (GCC), make defconfig

> For comparison, what was the code generation like before? I assume
> el0_svc wasn't the target of the first test and branch? Assuming so, how
> many tests and branches were there before the call to el0_svc()?

0000000000000a10 :
 a10:	a9bf7bfd	stp	x29, x30, [sp, #-16]!
 a14:	910003fd	mov	x29, sp
 a18:	d5385201	mrs	x1, esr_el1
 a1c:	531a7c22	lsr	w2, w1, #26
 a20:	f100f05f	cmp	x2, #0x3c
 a24:	54000068	b.hi	a30  // b.pmore
 a28:	7100f05f	cmp	w2, #0x3c
 a2c:	540000a9	b.ls	a40  // b.plast
 a30:	97ffffc8	bl	950
 a34:	a8c17bfd	ldp	x29, x30, [sp], #16
 a38:	d65f03c0	ret
 a3c:	d503201f	nop
 a40:	90000003	adrp	x3, 0
 a44:	91000063	add	x3, x3, #0x0
 a48:	38624862	ldrb	w2, [x3, w2, uxtw]
 a4c:	10000063	adr	x3, a58
 a50:	8b228862	add	x2, x3, w2, sxtb #2
 a54:	d61f0040	br	x2
 a58:	97ffff9e	bl	8d0
 a5c:	17fffff6	b	a34
 a60:	97ffff2c	bl	710
 a64:	17fffff4	b	a34
 a68:	97ffff46	bl	780
 a6c:	17fffff2	b	a34
 a70:	97fffece	bl	5a8
 a74:	17fffff0	b	a34
 a78:	97ffff50	bl	7b8
 a7c:	17ffffee	b	a34
 a80:	97fffedc	bl	5f0
 a84:	17ffffec	b	a34
 a88:	97ffffa4	bl	918
 a8c:	17ffffea	b	a34
 a90:	97ffff12	bl	6d8
 a94:	17ffffe8	b	a34
 a98:	97fffeba	bl	580
 a9c:	17ffffe6	b	a34
 aa0:	97ffff80	bl	8a0
 aa4:	17ffffe4	b	a34
 aa8:	97fffefe	bl	6a0
 aac:	17ffffe2	b	a34
 ab0:	97ffff26	bl	748
 ab4:	17ffffe0	b	a34
 ab8:	97ffff6e	bl	870
 abc:	17ffffde	b	a34

> At a high-level, I'm not too keen on special-casing things unless
> necessary.
>
> I wonder if we could get similar results without special-casing by using
> a static const array of handlers indexed by the EC, since (with GCC
> 11.1.0 from the kernel.org crosstool page) that can result in code like:
>
> 0000000000001010 :
> 1010:	d503245f	bti	c
> 1014:	d503233f	paciasp
> 1018:	a9bf7bfd	stp	x29, x30, [sp, #-16]!
> 101c:	910003fd	mov	x29, sp
> 1020:	d5385201	mrs	x1, esr_el1
> 1024:	90000002	adrp	x2, 0
> 1028:	531a7c23	lsr	w3, w1, #26
> 102c:	91000042	add	x2, x2, #:lo12:
> 1030:	f8637842	ldr	x2, [x2, x3, lsl #3]
> 1034:	d63f0040	blr	x2
> 1038:	a8c17bfd	ldp	x29, x30, [sp], #16
> 103c:	d50323bf	autiasp
> 1040:	d65f03c0	ret
>
> ... which might do better by virtue of reducing a chain of potential
> mispredicts down to a single potential mispredict, and dynamic branch
> prediction hopefully does a good job of predicting the common case at
> runtime. That said, the resulting tables will be pretty big...

 a48:	38624862	ldrb	w2, [x3, w2, uxtw]
 a4c:	10000063	adr	x3, a58
 a50:	8b228862	add	x2, x3, w2, sxtb #2
 a54:	d61f0040	br	x2

The original implementation also generated a query table, but yours is
more concise. I will try to test it. Looks like a better solution.

>
>>
>> Execute "./lat_syscall null" on my board (BogoMIPS : 200.00), it can save
>> about 10ns.
>>
>> Before:
>> Simple syscall: 0.2365 microseconds
>> Simple syscall: 0.2354 microseconds
>> Simple syscall: 0.2339 microseconds
>>
>> After:
>> Simple syscall: 0.2255 microseconds
>> Simple syscall: 0.2254 microseconds
>> Simple syscall: 0.2256 microseconds
>
> I appreciate this can be seen by a microbenchmark, but does this have an
> impact on a real workload? I'd imagine that real syscall usage will
> dominate this in practice, and this would fall into the noise.

The product side has a test plan, but the progress will be slow.

>
> Thanks,
> Mark.
>
>> Signed-off-by: Zhen Lei
>> ---
>>  arch/arm64/kernel/entry-common.c | 18 ++++++++++++------
>>  1 file changed, 12 insertions(+), 6 deletions(-)
>>
>> diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
>> index 32f9796c4ffe77b..062eb5a895ec6f3 100644
>> --- a/arch/arm64/kernel/entry-common.c
>> +++ b/arch/arm64/kernel/entry-common.c
>> @@ -607,11 +607,14 @@ static void noinstr el0_fpac(struct pt_regs *regs, unsigned long esr)
>>  asmlinkage void noinstr el0t_64_sync_handler(struct pt_regs *regs)
>>  {
>>  	unsigned long esr = read_sysreg(esr_el1);
>> +	unsigned long ec = ESR_ELx_EC(esr);
>>
>> -	switch (ESR_ELx_EC(esr)) {
>> -	case ESR_ELx_EC_SVC64:
>> +	if (likely(ec == ESR_ELx_EC_SVC64)) {
>>  		el0_svc(regs);
>> -		break;
>> +		return;
>> +	}
>> +
>> +	switch (ec) {
>>  	case ESR_ELx_EC_DABT_LOW:
>>  		el0_da(regs, esr);
>>  		break;
>> @@ -730,11 +733,14 @@ static void noinstr el0_svc_compat(struct pt_regs *regs)
>>  asmlinkage void noinstr el0t_32_sync_handler(struct pt_regs *regs)
>>  {
>>  	unsigned long esr = read_sysreg(esr_el1);
>> +	unsigned long ec = ESR_ELx_EC(esr);
>>
>> -	switch (ESR_ELx_EC(esr)) {
>> -	case ESR_ELx_EC_SVC32:
>> +	if (likely(ec == ESR_ELx_EC_SVC32)) {
>>  		el0_svc_compat(regs);
>> -		break;
>> +		return;
>> +	}
>> +
>> +	switch (ec) {
>>  	case ESR_ELx_EC_DABT_LOW:
>>  		el0_da(regs, esr);
>>  		break;
>> --
>> 2.25.1
>>
> .