From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org
Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 94A6BC433EF for ; Wed, 25 May 2022 14:10:46 +0000 (UTC)
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230307AbiEYOKo (ORCPT ); Wed, 25 May 2022 10:10:44 -0400
Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37294 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S244556AbiEYOKl (ORCPT ); Wed, 25 May 2022 10:10:41 -0400
Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 07D55A30A3; Wed, 25 May 2022 07:10:40 -0700 (PDT)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 9FEC6106F; Wed, 25 May 2022 07:10:39 -0700 (PDT)
Received: from FVFF77S0Q05N (unknown [10.57.0.228]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 56BF23F66F; Wed, 25 May 2022 07:10:33 -0700 (PDT)
Date: Wed, 25 May 2022 15:10:28 +0100
From: Mark Rutland 
To: Xu Kuohai 
Cc: bpf@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, netdev@vger.kernel.org, linux-kselftest@vger.kernel.org, Catalin Marinas , Will Deacon , Steven Rostedt , Ingo Molnar , Daniel Borkmann , Alexei Starovoitov , Zi Shen Lim , Andrii Nakryiko , Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , "David S . Miller" , Hideaki YOSHIFUJI , David Ahern , Thomas Gleixner , Borislav Petkov , Dave Hansen , x86@kernel.org, hpa@zytor.com, Shuah Khan , Jakub Kicinski , Jesper Dangaard Brouer , Pasha Tatashin , Ard Biesheuvel , Daniel Kiss , Steven Price , Sudeep Holla , Marc Zyngier , Peter Collingbourne , Mark Brown , Delyan Kratunov , Kumar Kartikeya Dwivedi 
Subject: Re: [PATCH bpf-next v5 4/6] bpf, arm64: Impelment bpf_arch_text_poke() for arm64
Message-ID: 
References: <20220518131638.3401509-1-xukuohai@huawei.com> <20220518131638.3401509-5-xukuohai@huawei.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220518131638.3401509-5-xukuohai@huawei.com>
Precedence: bulk
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, May 18, 2022 at 09:16:36AM -0400, Xu Kuohai wrote:
> Impelment bpf_arch_text_poke() for arm64, so bpf trampoline code can use
> it to replace nop with jump, or replace jump with nop.
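
To check my understanding of the intended usage: the trampoline
attach/detach paths would drive this hook along the lines of the below
(sketch only; new_tramp/old_tramp stand in for trampoline image addresses,
and a NULL address means "nop", per gen_branch_or_nop() further down):

	/* attach: patch the prologue nop into a branch-and-link */
	bpf_arch_text_poke(ip, BPF_MOD_CALL, NULL, new_tramp);

	/* detach: patch the branch-and-link back into a nop */
	bpf_arch_text_poke(ip, BPF_MOD_CALL, old_tramp, NULL);
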
> 
> Signed-off-by: Xu Kuohai 
> Acked-by: Song Liu 
> Reviewed-by: Jakub Sitnicki 
> ---
>  arch/arm64/net/bpf_jit.h      |   1 +
>  arch/arm64/net/bpf_jit_comp.c | 107 +++++++++++++++++++++++++++++++---
>  2 files changed, 99 insertions(+), 9 deletions(-)
> 
> diff --git a/arch/arm64/net/bpf_jit.h b/arch/arm64/net/bpf_jit.h
> index 194c95ccc1cf..1c4b0075a3e2 100644
> --- a/arch/arm64/net/bpf_jit.h
> +++ b/arch/arm64/net/bpf_jit.h
> @@ -270,6 +270,7 @@
>  #define A64_BTI_C A64_HINT(AARCH64_INSN_HINT_BTIC)
>  #define A64_BTI_J A64_HINT(AARCH64_INSN_HINT_BTIJ)
>  #define A64_BTI_JC A64_HINT(AARCH64_INSN_HINT_BTIJC)
> +#define A64_NOP A64_HINT(AARCH64_INSN_HINT_NOP)
> 
>  /* DMB */
>  #define A64_DMB_ISH aarch64_insn_gen_dmb(AARCH64_INSN_MB_ISH)
> diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
> index 8ab4035dea27..5ce6ed5f42a1 100644
> --- a/arch/arm64/net/bpf_jit_comp.c
> +++ b/arch/arm64/net/bpf_jit_comp.c
> @@ -9,6 +9,7 @@
> 
>  #include 
>  #include 
> +#include 
>  #include 
>  #include 
>  #include 
> @@ -18,6 +19,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  #include 
> 
>  #include "bpf_jit.h"
> @@ -235,13 +237,13 @@ static bool is_lsi_offset(int offset, int scale)
>  	return true;
>  }
> 
> +#define BTI_INSNS (IS_ENABLED(CONFIG_ARM64_BTI_KERNEL) ? 1 : 0)
> +#define PAC_INSNS (IS_ENABLED(CONFIG_ARM64_PTR_AUTH_KERNEL) ? 1 : 0)
> +
>  /* Tail call offset to jump into */
> -#if IS_ENABLED(CONFIG_ARM64_BTI_KERNEL) || \
> -	IS_ENABLED(CONFIG_ARM64_PTR_AUTH_KERNEL)
> -#define PROLOGUE_OFFSET 9
> -#else
> -#define PROLOGUE_OFFSET 8
> -#endif
> +#define PROLOGUE_OFFSET (BTI_INSNS + 2 + PAC_INSNS + 8)
> +/* Offset of nop instruction in bpf prog entry to be poked */
> +#define POKE_OFFSET (BTI_INSNS + 1)
> 
>  static int build_prologue(struct jit_ctx *ctx, bool ebpf_from_cbpf)
>  {
> @@ -279,12 +281,15 @@ static int build_prologue(struct jit_ctx *ctx, bool ebpf_from_cbpf)
>  	 *
>  	 */
> 
> +	if (IS_ENABLED(CONFIG_ARM64_BTI_KERNEL))
> +		emit(A64_BTI_C, ctx);
> +
> +	emit(A64_MOV(1, A64_R(9), A64_LR), ctx);
> +	emit(A64_NOP, ctx);

I take it the idea is to make this the same as the regular ftrace
patch-site sequence, so that this can call the same trampoline(s)?

If so, we need some commentary to that effect, and we need some comments
in the ftrace code explaining that this needs to be kept in-sync.
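
For reference, the regular ftrace patch-site sequence I have in mind
(with CONFIG_DYNAMIC_FTRACE_WITH_REGS, i.e. -fpatchable-function-entry=2)
ends up looking roughly like this once ftrace_init_nop() has run --
sketch only, for comparison:

	func:
		mov	x9, x30		// written by ftrace_init_nop()
		nop			// toggled to/from 'bl <ftrace trampoline>'
					// by ftrace_make_call()/ftrace_make_nop()
		<function body>

... so the BPF prologue only lines up with that if the mov/nop pair above
stays instruction-for-instruction identical to it.
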
> +
>  	/* Sign lr */
>  	if (IS_ENABLED(CONFIG_ARM64_PTR_AUTH_KERNEL))
>  		emit(A64_PACIASP, ctx);
> -	/* BTI landing pad */
> -	else if (IS_ENABLED(CONFIG_ARM64_BTI_KERNEL))
> -		emit(A64_BTI_C, ctx);
> 
>  	/* Save FP and LR registers to stay align with ARM64 AAPCS */
>  	emit(A64_PUSH(A64_FP, A64_LR, A64_SP), ctx);
> @@ -1529,3 +1534,87 @@ void bpf_jit_free_exec(void *addr)
>  {
>  	return vfree(addr);
>  }
> +
> +static int gen_branch_or_nop(enum aarch64_insn_branch_type type, void *ip,
> +			     void *addr, u32 *insn)
> +{
> +	if (!addr)
> +		*insn = aarch64_insn_gen_nop();
> +	else
> +		*insn = aarch64_insn_gen_branch_imm((unsigned long)ip,
> +						    (unsigned long)addr,
> +						    type);
> +
> +	return *insn != AARCH64_BREAK_FAULT ? 0 : -EFAULT;
> +}
> +
> +int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type poke_type,
> +		       void *old_addr, void *new_addr)
> +{
> +	int ret;
> +	u32 old_insn;
> +	u32 new_insn;
> +	u32 replaced;
> +	unsigned long offset = ~0UL;
> +	enum aarch64_insn_branch_type branch_type;
> +	char namebuf[KSYM_NAME_LEN];
> +
> +	if (!__bpf_address_lookup((unsigned long)ip, NULL, &offset, namebuf))
> +		/* Only poking bpf text is supported. Since kernel function
> +		 * entry is set up by ftrace, we reply on ftrace to poke kernel
> +		 * functions.
> +		 */
> +		return -EINVAL;
> +
> +	/* bpf entry */
> +	if (offset == 0UL)
> +		/* skip to the nop instruction in bpf prog entry:
> +		 * bti c // if BTI enabled
> +		 * mov x9, x30
> +		 * nop
> +		 */
> +		ip = ip + POKE_OFFSET * AARCH64_INSN_SIZE;

When is offset non-zero? Is this ever called to patch other instructions,
and could this ever be used to try to patch the BTI specifically?

I strongly suspect we need a higher-level API to say "poke the patchable
callsite in the prologue", rather than assuming that offset 0 always means
that, or it'll be *very* easy for this to go wrong.
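
To illustrate the shape of API I mean -- entirely hypothetical, name and
placement up for discussion -- something like:

	/* Return the patchable callsite in a BPF prog's prologue. */
	static void *bpf_prog_poke_site(void *prog_entry)
	{
		/* skip [bti c;] mov x9, x30 to reach the patchable nop */
		return prog_entry + POKE_OFFSET * AARCH64_INSN_SIZE;
	}

... so that callers ask for the prologue patch-site explicitly, and
bpf_arch_text_poke() can reject anything else, rather than treating
"offset == 0" as an implicit request to patch the nop.
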
> +
> +	if (poke_type == BPF_MOD_CALL)
> +		branch_type = AARCH64_INSN_BRANCH_LINK;
> +	else
> +		branch_type = AARCH64_INSN_BRANCH_NOLINK;

When is poke_type *not* BPF_MOD_CALL? I assume that means BPF also uses
this for non-ftrace reasons?

> +	if (gen_branch_or_nop(branch_type, ip, old_addr, &old_insn) < 0)
> +		return -EFAULT;
> +
> +	if (gen_branch_or_nop(branch_type, ip, new_addr, &new_insn) < 0)
> +		return -EFAULT;
> +
> +	mutex_lock(&text_mutex);
> +	if (aarch64_insn_read(ip, &replaced)) {
> +		ret = -EFAULT;
> +		goto out;
> +	}
> +
> +	if (replaced != old_insn) {
> +		ret = -EFAULT;
> +		goto out;
> +	}
> +
> +	/* We call aarch64_insn_patch_text_nosync() to replace instruction
> +	 * atomically, so no other CPUs will fetch a half-new and half-old
> +	 * instruction. But there is chance that another CPU fetches the old
> +	 * instruction after bpf_arch_text_poke() finishes, that is, different
> +	 * CPUs may execute different versions of instructions at the same
> +	 * time before the icache is synchronized by hardware.
> +	 *
> +	 * 1. when a new trampoline is attached, it is not an issue for
> +	 *    different CPUs to jump to different trampolines temporarily.
> +	 *
> +	 * 2. when an old trampoline is freed, we should wait for all other
> +	 *    CPUs to exit the trampoline and make sure the trampoline is no
> +	 *    longer reachable, since bpf_tramp_image_put() function already
> +	 *    uses percpu_ref and rcu task to do the sync, no need to call the
> +	 *    sync interface here.
> +	 */

How is RCU used for that?

It's not clear to me how that works for PREEMPT_RCU (which is the usual
configuration for arm64), since we can easily be in a preemptible context,
outside of an RCU read side critical section, yet call into a trampoline.

I know that for livepatching we need to use stacktracing to ensure we've
finished using code we'd like to free, and I can't immediately see how you
can avoid that here. I'm suspicious that there's still a race where
threads can enter the trampoline and it can be subsequently freed.

For ftrace today we get away with entering the existing trampolines when
not intended because those are statically allocated, and the race is
caught when acquiring the ops inside the ftrace core code. This case is
different because the CPU can fetch the instruction and execute that at
any time, without any RCU involvement.

Can you give more details on how the scheme described above works? How
*exactly* do you ensure that threads which have entered the trampoline
(and may have been immediately preempted by an interrupt) have returned?
Which RCU mechanism are you using?

If you can point me at where this is implemented I'm happy to take a look.

Thanks,
Mark.

> +	ret = aarch64_insn_patch_text_nosync(ip, new_insn);
> +out:
> +	mutex_unlock(&text_mutex);
> +	return ret;
> +}
> -- 
> 2.30.2
> 
Miller" , Hideaki YOSHIFUJI , David Ahern , Thomas Gleixner , Borislav Petkov , Dave Hansen , x86@kernel.org, hpa@zytor.com, Shuah Khan , Jakub Kicinski , Jesper Dangaard Brouer , Pasha Tatashin , Ard Biesheuvel , Daniel Kiss , Steven Price , Sudeep Holla , Marc Zyngier , Peter Collingbourne , Mark Brown , Delyan Kratunov , Kumar Kartikeya Dwivedi Subject: Re: [PATCH bpf-next v5 4/6] bpf, arm64: Impelment bpf_arch_text_poke() for arm64 Message-ID: References: <20220518131638.3401509-1-xukuohai@huawei.com> <20220518131638.3401509-5-xukuohai@huawei.com> MIME-Version: 1.0 Content-Disposition: inline In-Reply-To: <20220518131638.3401509-5-xukuohai@huawei.com> X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220525_071041_632005_1FC0B693 X-CRM114-Status: GOOD ( 45.01 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org On Wed, May 18, 2022 at 09:16:36AM -0400, Xu Kuohai wrote: > Impelment bpf_arch_text_poke() for arm64, so bpf trampoline code can use > it to replace nop with jump, or replace jump with nop. > > Signed-off-by: Xu Kuohai > Acked-by: Song Liu > Reviewed-by: Jakub Sitnicki > --- > arch/arm64/net/bpf_jit.h | 1 + > arch/arm64/net/bpf_jit_comp.c | 107 +++++++++++++++++++++++++++++++--- > 2 files changed, 99 insertions(+), 9 deletions(-) > > diff --git a/arch/arm64/net/bpf_jit.h b/arch/arm64/net/bpf_jit.h > index 194c95ccc1cf..1c4b0075a3e2 100644 > --- a/arch/arm64/net/bpf_jit.h > +++ b/arch/arm64/net/bpf_jit.h > @@ -270,6 +270,7 @@ > #define A64_BTI_C A64_HINT(AARCH64_INSN_HINT_BTIC) > #define A64_BTI_J A64_HINT(AARCH64_INSN_HINT_BTIJ) > #define A64_BTI_JC A64_HINT(AARCH64_INSN_HINT_BTIJC) > +#define A64_NOP A64_HINT(AARCH64_INSN_HINT_NOP) > > /* DMB */ > #define A64_DMB_ISH aarch64_insn_gen_dmb(AARCH64_INSN_MB_ISH) > diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c > index 8ab4035dea27..5ce6ed5f42a1 100644 > --- a/arch/arm64/net/bpf_jit_comp.c > +++ b/arch/arm64/net/bpf_jit_comp.c > @@ -9,6 +9,7 @@ > > #include > #include > +#include > #include > #include > #include > @@ -18,6 +19,7 @@ > #include > #include > #include > +#include > #include > > #include "bpf_jit.h" > @@ -235,13 +237,13 @@ static bool is_lsi_offset(int offset, int scale) > return true; > } > > +#define BTI_INSNS (IS_ENABLED(CONFIG_ARM64_BTI_KERNEL) ? 1 : 0) > +#define PAC_INSNS (IS_ENABLED(CONFIG_ARM64_PTR_AUTH_KERNEL) ? 
1 : 0) > + > /* Tail call offset to jump into */ > -#if IS_ENABLED(CONFIG_ARM64_BTI_KERNEL) || \ > - IS_ENABLED(CONFIG_ARM64_PTR_AUTH_KERNEL) > -#define PROLOGUE_OFFSET 9 > -#else > -#define PROLOGUE_OFFSET 8 > -#endif > +#define PROLOGUE_OFFSET (BTI_INSNS + 2 + PAC_INSNS + 8) > +/* Offset of nop instruction in bpf prog entry to be poked */ > +#define POKE_OFFSET (BTI_INSNS + 1) > > static int build_prologue(struct jit_ctx *ctx, bool ebpf_from_cbpf) > { > @@ -279,12 +281,15 @@ static int build_prologue(struct jit_ctx *ctx, bool ebpf_from_cbpf) > * > */ > > + if (IS_ENABLED(CONFIG_ARM64_BTI_KERNEL)) > + emit(A64_BTI_C, ctx); > + > + emit(A64_MOV(1, A64_R(9), A64_LR), ctx); > + emit(A64_NOP, ctx); I take it the idea is to make this the same as the regular ftrace patch-site sequence, so that this can call the same trampoline(s) ? If so, we need some commentary to that effect, and we need some comments in the ftrace code explaining that this needs to be kept in-sync. > + > /* Sign lr */ > if (IS_ENABLED(CONFIG_ARM64_PTR_AUTH_KERNEL)) > emit(A64_PACIASP, ctx); > - /* BTI landing pad */ > - else if (IS_ENABLED(CONFIG_ARM64_BTI_KERNEL)) > - emit(A64_BTI_C, ctx); > > /* Save FP and LR registers to stay align with ARM64 AAPCS */ > emit(A64_PUSH(A64_FP, A64_LR, A64_SP), ctx); > @@ -1529,3 +1534,87 @@ void bpf_jit_free_exec(void *addr) > { > return vfree(addr); > } > + > +static int gen_branch_or_nop(enum aarch64_insn_branch_type type, void *ip, > + void *addr, u32 *insn) > +{ > + if (!addr) > + *insn = aarch64_insn_gen_nop(); > + else > + *insn = aarch64_insn_gen_branch_imm((unsigned long)ip, > + (unsigned long)addr, > + type); > + > + return *insn != AARCH64_BREAK_FAULT ? 0 : -EFAULT; > +} > + > +int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type poke_type, > + void *old_addr, void *new_addr) > +{ > + int ret; > + u32 old_insn; > + u32 new_insn; > + u32 replaced; > + unsigned long offset = ~0UL; > + enum aarch64_insn_branch_type branch_type; > + char namebuf[KSYM_NAME_LEN]; > + > + if (!__bpf_address_lookup((unsigned long)ip, NULL, &offset, namebuf)) > + /* Only poking bpf text is supported. Since kernel function > + * entry is set up by ftrace, we reply on ftrace to poke kernel > + * functions. > + */ > + return -EINVAL; > + > + /* bpf entry */ > + if (offset == 0UL) > + /* skip to the nop instruction in bpf prog entry: > + * bti c // if BTI enabled > + * mov x9, x30 > + * nop > + */ > + ip = ip + POKE_OFFSET * AARCH64_INSN_SIZE; When is offset non-zero? is this ever called to patch other instructions, and could this ever be used to try to patch the BTI specifically? I strongly suspect we need a higher-level API to say "poke the patchable callsite in the prologue", rather than assuming that offset 0 always means that, or it'll be *very* easy for this to go wrong. > + > + if (poke_type == BPF_MOD_CALL) > + branch_type = AARCH64_INSN_BRANCH_LINK; > + else > + branch_type = AARCH64_INSN_BRANCH_NOLINK; When is poke_type *not* BPF_MOD_CALL? I assume that means BPF also uses this for non-ftrace reasons? 
> + if (gen_branch_or_nop(branch_type, ip, old_addr, &old_insn) < 0) > + return -EFAULT; > + > + if (gen_branch_or_nop(branch_type, ip, new_addr, &new_insn) < 0) > + return -EFAULT; > + > + mutex_lock(&text_mutex); > + if (aarch64_insn_read(ip, &replaced)) { > + ret = -EFAULT; > + goto out; > + } > + > + if (replaced != old_insn) { > + ret = -EFAULT; > + goto out; > + } > + > + /* We call aarch64_insn_patch_text_nosync() to replace instruction > + * atomically, so no other CPUs will fetch a half-new and half-old > + * instruction. But there is chance that another CPU fetches the old > + * instruction after bpf_arch_text_poke() finishes, that is, different > + * CPUs may execute different versions of instructions at the same > + * time before the icache is synchronized by hardware. > + * > + * 1. when a new trampoline is attached, it is not an issue for > + * different CPUs to jump to different trampolines temporarily. > + * > + * 2. when an old trampoline is freed, we should wait for all other > + * CPUs to exit the trampoline and make sure the trampoline is no > + * longer reachable, since bpf_tramp_image_put() function already > + * uses percpu_ref and rcu task to do the sync, no need to call the > + * sync interface here. > + */ How is RCU used for that? It's not clear to me how that works for PREEMPT_RCU (which is the usual configuration for arm64), since we can easily be in a preemptible context, outside of an RCU read side critical section, yet call into a trampoline. I know that for livepatching we need to use stacktracing to ensure we've finished using code we'd like to free, and I can't immediately see how you can avoid that here. I'm suspicious that there's still a race where threads can enter the trampoline and it can be subsequently freed. For ftrace today we get away with entering the existing trampolines when not intended because those are statically allocated, and the race is caught when acquiring the ops inside the ftrace core code. This case is different because the CPU can fetch the instruction and execute that at any time, without any RCU involvement. Can you give more details on how the scheme described above works? How *exactly*` do you ensure that threads which have entered the trampoline (and may have been immediately preempted by an interrupt) have returned? Which RCU mechanism are you using? If you can point me at where this is implemented I'm happy to take a look. Thanks, Mark. > + ret = aarch64_insn_patch_text_nosync(ip, new_insn); > +out: > + mutex_unlock(&text_mutex); > + return ret; > +} > -- > 2.30.2 > _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel