From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: andre.przywara@arm.com, ardb@kernel.org, catalin.marinas@arm.com,
	james.morse@arm.com, joey.gouly@arm.com, mark.rutland@arm.com,
	suzuki.poulose@arm.com, will@kernel.org
Subject: [PATCH 3/4] arm64: patching: unify stop_machine() patch synchronization
Date: Fri, 3 Dec 2021 10:47:22 +0000
Message-Id: <20211203104723.3412383-4-mark.rutland@arm.com>
In-Reply-To: <20211203104723.3412383-1-mark.rutland@arm.com>
References: <20211203104723.3412383-1-mark.rutland@arm.com>

Some instruction sequences cannot be safely modified while they may be
concurrently executed, and so it's necessary to temporarily stop all
CPUs while performing the modification. We have separate
implementations of this for alternatives and kprobes.
This patch unifies these with a common patch_machine() helper function
which handles the necessary synchronization to ensure that CPUs are
stopped during patching. This separates the patching logic from the
stop/wake synchronization logic, making each easier to understand, and
means that we only have to maintain one synchronization algorithm.

The synchronization logic in do_patch_machine() only uses unpatchable
functions, and the function itself is marked noinstr to prevent
instrumentation. The patch_machine() helper is left instrumentable, as
stop_machine() itself is instrumentable, and so there is no benefit to
forbidding instrumentation of patch_machine().

As with the prior alternative patching sequence, the CPU which will
apply the patch is chosen early, so that the choice is deterministic.

Since __apply_alternatives_stopped() is only ever called once under
apply_alternatives_all(), the `all_alternatives_applied` variable and
its associated BUG_ON() warning are redundant, and are removed.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
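[ Editorial note: the sketch below is not part of the patch. It models
  the polling protocol that do_patch_machine() (added below) implements,
  as a standalone userspace program using C11 atomics and pthreads; all
  names in it (fake_cpu, patcher_cpu, NR_CPUS) are invented for the
  illustration. ]

/*
 * Standalone model of the do_patch_machine() protocol: one chosen
 * "CPU" waits for all others to report quiescence, performs the
 * "patch", then releases them. Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NR_CPUS		4

static atomic_int active;	/* CPUs yet to become quiescent */
static atomic_int done;		/* set once patching is complete */
static int patcher_cpu;		/* chosen early, deterministically */

static void *fake_cpu(void *arg)
{
	int cpu = (int)(long)arg;

	if (cpu == patcher_cpu) {
		/* Wait for every other CPU to become quiescent. */
		while (atomic_load(&active))
			;
		printf("cpu%d: patching while %d CPUs spin\n",
		       cpu, NR_CPUS - 1);
		atomic_store(&done, 1);
	} else {
		/*
		 * Report quiescence, then spin until patching is done.
		 * The kernel version issues an ISB here so that stale
		 * copies of the old instructions are discarded.
		 */
		atomic_fetch_sub(&active, 1);
		while (!atomic_load(&done))
			;
	}

	return NULL;
}

int main(void)
{
	pthread_t threads[NR_CPUS];
	long i;

	patcher_cpu = 0;
	atomic_init(&active, NR_CPUS - 1);

	for (i = 0; i < NR_CPUS; i++)
		pthread_create(&threads[i], NULL, fake_cpu, (void *)i);
	for (i = 0; i < NR_CPUS; i++)
		pthread_join(threads[i], NULL);

	return 0;
}

[ As in do_patch_machine(), the chosen thread waits for `active` to
  drain to zero, does the work, then publishes `done`; every other
  thread decrements `active` and spins on `done`. ]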
 arch/arm64/include/asm/patching.h |  4 ++
 arch/arm64/kernel/alternative.c   | 40 +++-----------
 arch/arm64/kernel/patching.c      | 91 +++++++++++++++++++++++++------
 3 files changed, 84 insertions(+), 51 deletions(-)

diff --git a/arch/arm64/include/asm/patching.h b/arch/arm64/include/asm/patching.h
index 6bf5adc56295..25c199bc55d2 100644
--- a/arch/arm64/include/asm/patching.h
+++ b/arch/arm64/include/asm/patching.h
@@ -10,4 +10,8 @@ int aarch64_insn_write(void *addr, u32 insn);
 int aarch64_insn_patch_text_nosync(void *addr, u32 insn);
 int aarch64_insn_patch_text(void *addrs[], u32 insns[], int cnt);
 
+typedef int (*patch_machine_func_t)(void *);
+int patch_machine_cpuslocked(patch_machine_func_t func, void *arg);
+int patch_machine(patch_machine_func_t func, void *arg);
+
 #endif	/* __ASM_PATCHING_H */
diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
index 4f32d4425aac..d2b4b9e6a0e4 100644
--- a/arch/arm64/kernel/alternative.c
+++ b/arch/arm64/kernel/alternative.c
@@ -14,8 +14,8 @@
 #include <asm/cpufeature.h>
 #include <asm/insn.h>
 #include <asm/module.h>
+#include <asm/patching.h>
 #include <asm/sections.h>
-#include <linux/stop_machine.h>
 
 #define __ALT_PTR(a, f)		((void *)&(a)->f + (a)->f)
 #define ALT_ORIG_PTR(a)		__ALT_PTR(a, orig_offset)
@@ -189,43 +189,17 @@ static void __nocfi __apply_alternatives(struct alt_region *region, bool is_modu
 	}
 }
 
-/*
- * Apply alternatives, ensuring that no CPUs are concurrently executing code
- * being patched.
- *
- * We might be patching the stop_machine state machine or READ_ONCE(), so
- * we implement a simple polling protocol.
- */
-static int __apply_alternatives_multi_stop(void *unused)
+static int __apply_alternatives_stopped(void *unused)
 {
-	/* Volatile, as we may be patching the guts of READ_ONCE() */
-	static volatile int all_alternatives_applied;
-	static atomic_t stopped_cpus = ATOMIC_INIT(0);
 	struct alt_region region = {
 		.begin	= (struct alt_instr *)__alt_instructions,
 		.end	= (struct alt_instr *)__alt_instructions_end,
 	};
+	DECLARE_BITMAP(remaining_capabilities, ARM64_NPATCHABLE);
 
-	/* We always have a CPU 0 at this point (__init) */
-	if (smp_processor_id()) {
-		arch_atomic_inc(&stopped_cpus);
-		while (!all_alternatives_applied)
-			cpu_relax();
-		isb();
-	} else {
-		DECLARE_BITMAP(remaining_capabilities, ARM64_NPATCHABLE);
-
-		while (arch_atomic_read(&stopped_cpus) != num_online_cpus() - 1)
-			cpu_relax();
-
-		bitmap_complement(remaining_capabilities, boot_capabilities,
-				  ARM64_NPATCHABLE);
-
-		BUG_ON(all_alternatives_applied);
-		__apply_alternatives(&region, false, remaining_capabilities);
-		/* Barriers provided by the cache flushing */
-		all_alternatives_applied = 1;
-	}
+	bitmap_complement(remaining_capabilities, boot_capabilities,
+			  ARM64_NPATCHABLE);
+	__apply_alternatives(&region, false, remaining_capabilities);
 
 	return 0;
 }
@@ -233,7 +207,7 @@ static int __apply_alternatives_multi_stop(void *unused)
 void __init apply_alternatives_all(void)
 {
 	/* better not try code patching on a live SMP system */
-	stop_machine(__apply_alternatives_multi_stop, NULL, cpu_online_mask);
+	patch_machine(__apply_alternatives_stopped, NULL);
 }
 
 /*
diff --git a/arch/arm64/kernel/patching.c b/arch/arm64/kernel/patching.c
index c0d51340c913..04497dbf14e2 100644
--- a/arch/arm64/kernel/patching.c
+++ b/arch/arm64/kernel/patching.c
@@ -105,31 +105,88 @@ int __kprobes aarch64_insn_patch_text_nosync(void *addr, u32 insn)
 	return ret;
 }
 
+struct patch_machine_info {
+	patch_machine_func_t func;
+	void *arg;
+	int cpu;
+	atomic_t active;
+	volatile int done;
+};
+
+/*
+ * Run a code patching function on a single CPU, ensuring that no CPUs are
+ * concurrently executing code being patched.
+ *
+ * We wait for other CPUs to become quiescent before starting patching, and
+ * wait until patching is completed before other CPUs are woken.
+ *
+ * The patching function is responsible for any barriers necessary to make new
+ * instructions visible to other CPUs. The other CPUs will issue an ISB upon
+ * being woken to ensure they use the new instructions.
+ */
+static int noinstr do_patch_machine(void *arg)
+{
+	struct patch_machine_info *pmi = arg;
+	int cpu = smp_processor_id();
+	int ret = 0;
+
+	if (pmi->cpu == cpu) {
+		while (arch_atomic_read(&pmi->active))
+			cpu_relax();
+		ret = pmi->func(pmi->arg);
+		pmi->done = 1;
+	} else {
+		arch_atomic_dec(&pmi->active);
+		while (!pmi->done)
+			cpu_relax();
+		isb();
+	}
+
+	return ret;
+}
+
+/*
+ * Run a code patching function on a single CPU, ensuring that no CPUs are
+ * concurrently executing code being patched.
+ */
+int patch_machine_cpuslocked(patch_machine_func_t func, void *arg)
+{
+	struct patch_machine_info pmi = {
+		.func = func,
+		.arg = arg,
+		.cpu = raw_smp_processor_id(),
+		.active = ATOMIC_INIT(num_online_cpus() - 1),
+		.done = 0,
+	};
+
+	return stop_machine_cpuslocked(do_patch_machine, &pmi, cpu_online_mask);
+}
+
+int patch_machine(patch_machine_func_t func, void *arg)
+{
+	int ret;
+
+	cpus_read_lock();
+	ret = patch_machine_cpuslocked(func, arg);
+	cpus_read_unlock();
+
+	return ret;
+}
+
 struct aarch64_insn_patch {
 	void **text_addrs;
 	u32 *new_insns;
 	int insn_cnt;
-	atomic_t cpu_count;
 };
 
 static int __kprobes aarch64_insn_patch_text_cb(void *arg)
 {
 	int i, ret = 0;
 	struct aarch64_insn_patch *pp = arg;
-	int num_cpus = num_online_cpus();
-
-	/* The last CPU becomes master */
-	if (arch_atomic_inc_return(&pp->cpu_count) == num_cpus) {
-		for (i = 0; ret == 0 && i < pp->insn_cnt; i++)
-			ret = aarch64_insn_patch_text_nosync(pp->text_addrs[i],
-							     pp->new_insns[i]);
-		/* Notify other processors with an additional increment. */
-		atomic_inc(&pp->cpu_count);
-	} else {
-		while (arch_atomic_read(&pp->cpu_count) <= num_cpus)
-			cpu_relax();
-		isb();
-	}
+
+	for (i = 0; ret == 0 && i < pp->insn_cnt; i++)
+		ret = aarch64_insn_patch_text_nosync(pp->text_addrs[i],
+						     pp->new_insns[i]);
 
 	return ret;
 }
@@ -140,12 +197,10 @@ int __kprobes aarch64_insn_patch_text(void *addrs[], u32 insns[], int cnt)
 		.text_addrs = addrs,
 		.new_insns = insns,
 		.insn_cnt = cnt,
-		.cpu_count = ATOMIC_INIT(0),
 	};
 
 	if (cnt <= 0)
 		return -EINVAL;
 
-	return stop_machine_cpuslocked(aarch64_insn_patch_text_cb, &patch,
-				       cpu_online_mask);
+	return patch_machine_cpuslocked(aarch64_insn_patch_text_cb, &patch);
 }
-- 
2.30.2
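[ Editorial note: a hypothetical caller of the new interface might look
  like the sketch below; nop_out_insn() and frobnicate() are invented
  for illustration and are not part of this series. The callback runs
  on one CPU while every other online CPU spins quiescently in
  do_patch_machine(), so it can safely use
  aarch64_insn_patch_text_nosync() on live kernel text. ]

#include <asm/insn.h>
#include <asm/patching.h>

/*
 * Replace the instruction at addr with a NOP. Runs on a single CPU
 * via patch_machine(), with all other CPUs quiescent; the nosync
 * helper handles the write and cache maintenance, and the spinning
 * CPUs issue an ISB when released.
 */
static int nop_out_insn(void *addr)
{
	return aarch64_insn_patch_text_nosync(addr, aarch64_insn_gen_nop());
}

static int frobnicate(void *addr)
{
	return patch_machine(nop_out_insn, addr);
}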