Date: Mon, 13 Dec 2021 13:31:52 +0000
From: Will Deacon
To: Mark Rutland
Cc: linux-arm-kernel@lists.infradead.org, andre.przywara@arm.com,
	ardb@kernel.org, catalin.marinas@arm.com, james.morse@arm.com,
	joey.gouly@arm.com, suzuki.poulose@arm.com
Subject: Re: [PATCH 1/4] arm64: alternative: wait for other CPUs before patching
Message-ID: <20211213133152.GB11570@willie-the-truck>
References: <20211203104723.3412383-1-mark.rutland@arm.com>
	<20211203104723.3412383-2-mark.rutland@arm.com>
In-Reply-To: <20211203104723.3412383-2-mark.rutland@arm.com>
On Fri, Dec 03, 2021 at 10:47:20AM +0000, Mark Rutland wrote:
> In __apply_alternatives_multi_stop() we have a "really simple polling
> protocol" to avoid patching code that is concurrently executed on other
> CPUs. Secondary CPUs wait for the boot CPU to signal that patching is
> complete, but the boot CPU doesn't wait for secondaries to enter the
> polling loop, and it's possible that patching starts while secondaries
> are still within the stop_machine logic.
>
> Let's fix this by adding a vaguely simple polling protocol where the
> boot CPU waits for secondaries to signal that they have entered the
> unpatchable stop function. We can use the arch_atomic_*() functions for
> this, as they are not patched with alternatives.
>
> At the same time, let's make `all_alternatives_applied` local to
> __apply_alternatives_multi_stop(), since it is only used there, and this
> makes the code a little clearer.
>
> Signed-off-by: Mark Rutland
> Cc: Andre Przywara
> Cc: Ard Biesheuvel
> Cc: Catalin Marinas
> Cc: James Morse
> Cc: Joey Gouly
> Cc: Suzuki K Poulose
> Cc: Will Deacon
> ---
>  arch/arm64/kernel/alternative.c | 17 ++++++++++++-----
>  1 file changed, 12 insertions(+), 5 deletions(-)
>
> diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
> index 3fb79b76e9d9..4f32d4425aac 100644
> --- a/arch/arm64/kernel/alternative.c
> +++ b/arch/arm64/kernel/alternative.c
> @@ -21,9 +21,6 @@
>  #define ALT_ORIG_PTR(a)		__ALT_PTR(a, orig_offset)
>  #define ALT_REPL_PTR(a)		__ALT_PTR(a, alt_offset)
>
> -/* Volatile, as we may be patching the guts of READ_ONCE() */
> -static volatile int all_alternatives_applied;
> -
>  static DECLARE_BITMAP(applied_alternatives, ARM64_NCAPS);
>
>  struct alt_region {
> @@ -193,11 +190,17 @@ static void __nocfi __apply_alternatives(struct alt_region *region, bool is_modu
>  }
>
>  /*
> - * We might be patching the stop_machine state machine, so implement a
> - * really simple polling protocol here.
> + * Apply alternatives, ensuring that no CPUs are concurrently executing code
> + * being patched.
> + *
> + * We might be patching the stop_machine state machine or READ_ONCE(), so
> + * we implement a simple polling protocol.
>   */
>  static int __apply_alternatives_multi_stop(void *unused)
>  {
> +	/* Volatile, as we may be patching the guts of READ_ONCE() */
> +	static volatile int all_alternatives_applied;
> +	static atomic_t stopped_cpus = ATOMIC_INIT(0);
>  	struct alt_region region = {
>  		.begin	= (struct alt_instr *)__alt_instructions,
>  		.end	= (struct alt_instr *)__alt_instructions_end,
> @@ -205,12 +208,16 @@ static int __apply_alternatives_multi_stop(void *unused)
>
>  	/* We always have a CPU 0 at this point (__init) */
>  	if (smp_processor_id()) {
> +		arch_atomic_inc(&stopped_cpus);

Why can't we use normal atomic_inc() here?

>  		while (!all_alternatives_applied)
>  			cpu_relax();
>  		isb();
>  	} else {
>  		DECLARE_BITMAP(remaining_capabilities, ARM64_NPATCHABLE);
>
> +		while (arch_atomic_read(&stopped_cpus) != num_online_cpus() - 1)

and normal atomic_read() here?

Will
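For readers skimming the thread, the two-phase rendezvous the patch adds can
be modelled in ordinary user-space C. Below is a minimal sketch, assuming C11
atomics and pthreads in place of the kernel's arch_atomic_*() helpers,
stop_machine() and cpu_relax(); patch_all_code() and NSECONDARIES are
hypothetical stand-ins for __apply_alternatives() and num_online_cpus() - 1.
The completion flag is atomic here purely for user-space correctness; the
kernel deliberately uses a plain volatile int instead, because READ_ONCE()
itself may be in the middle of being patched.

/*
 * User-space model of the rendezvous protocol in the patch above.
 * Build with: cc -std=c11 -pthread rendezvous.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NSECONDARIES 3			/* models num_online_cpus() - 1 */

static atomic_int stopped_cpus;		/* models the patch's stopped_cpus */
static atomic_int all_done;		/* kernel: "static volatile int" */

/* Hypothetical stand-in for __apply_alternatives() on the boot CPU. */
static void patch_all_code(void)
{
	puts("patching: all secondaries are spinning in known code");
}

static void *secondary(void *unused)
{
	(void)unused;

	/* Phase 1: tell the boot CPU we have reached the unpatchable loop. */
	atomic_fetch_add(&stopped_cpus, 1);

	/* Phase 2: spin until the boot CPU signals that patching is done. */
	while (!atomic_load(&all_done))
		;			/* the kernel calls cpu_relax() here */

	return NULL;
}

int main(void)
{
	pthread_t t[NSECONDARIES];

	for (int i = 0; i < NSECONDARIES; i++)
		pthread_create(&t[i], NULL, secondary, NULL);

	/* Boot CPU: wait until every secondary has checked in... */
	while (atomic_load(&stopped_cpus) != NSECONDARIES)
		;

	/* ...then patch, and only afterwards release the secondaries. */
	patch_all_code();
	atomic_store(&all_done, 1);

	for (int i = 0; i < NSECONDARIES; i++)
		pthread_join(t[i], NULL);

	return 0;
}

The property the sketch models is the new first rendezvous: without the
stopped_cpus check-in, the boot CPU could start patch_all_code() while a
secondary is still on its way into the spin loop, which is exactly the race
the patch closes.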