Date: Tue, 14 Dec 2021 16:01:31 +0000
From: Will Deacon
To: Mark Rutland
Cc: linux-arm-kernel@lists.infradead.org, andre.przywara@arm.com,
	ardb@kernel.org, catalin.marinas@arm.com, james.morse@arm.com,
	joey.gouly@arm.com, suzuki.poulose@arm.com
Subject: Re: [PATCH 1/4] arm64: alternative: wait for other CPUs before patching
Message-ID: <20211214160131.GA15635@willie-the-truck>
References: <20211203104723.3412383-1-mark.rutland@arm.com>
 <20211203104723.3412383-2-mark.rutland@arm.com>
 <20211213133152.GB11570@willie-the-truck>
 <20211213134145.GC11570@willie-the-truck>

On Mon, Dec 13, 2021 at 01:54:39PM +0000, Mark Rutland wrote:
> On Mon, Dec 13, 2021 at 01:41:46PM +0000, Will Deacon wrote:
> > On Mon, Dec 13, 2021 at 01:31:52PM +0000,
> > Will Deacon wrote:
> > > On Fri, Dec 03, 2021 at 10:47:20AM +0000, Mark Rutland wrote:
> > > > In __apply_alternatives_multi_stop() we have a "really simple polling
> > > > protocol" to avoid patching code that is concurrently executed on other
> > > > CPUs. Secondary CPUs wait for the boot CPU to signal that patching is
> > > > complete, but the boot CPU doesn't wait for secondaries to enter the
> > > > polling loop, and it's possible that patching starts while secondaries
> > > > are still within the stop_machine logic.
> > > >
> > > > Let's fix this by adding a vaguely simple polling protocol where the
> > > > boot CPU waits for secondaries to signal that they have entered the
> > > > unpatchable stop function. We can use the arch_atomic_*() functions for
> > > > this, as they are not patched with alternatives.
> > > >
> > > > At the same time, let's make `all_alternatives_applied` local to
> > > > __apply_alternatives_multi_stop(), since it is only used there, and this
> > > > makes the code a little clearer.
> > > >
> > > > Signed-off-by: Mark Rutland
> > > > Cc: Andre Przywara
> > > > Cc: Ard Biesheuvel
> > > > Cc: Catalin Marinas
> > > > Cc: James Morse
> > > > Cc: Joey Gouly
> > > > Cc: Suzuki K Poulose
> > > > Cc: Will Deacon
> > > > ---
> > > >  arch/arm64/kernel/alternative.c | 17 ++++++++++++-----
> > > >  1 file changed, 12 insertions(+), 5 deletions(-)
> > > >
> > > > diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
> > > > index 3fb79b76e9d9..4f32d4425aac 100644
> > > > --- a/arch/arm64/kernel/alternative.c
> > > > +++ b/arch/arm64/kernel/alternative.c
> > > > @@ -21,9 +21,6 @@
> > > >  #define ALT_ORIG_PTR(a)	__ALT_PTR(a, orig_offset)
> > > >  #define ALT_REPL_PTR(a)	__ALT_PTR(a, alt_offset)
> > > >
> > > > -/* Volatile, as we may be patching the guts of READ_ONCE() */
> > > > -static volatile int all_alternatives_applied;
> > > > -
> > > >  static DECLARE_BITMAP(applied_alternatives, ARM64_NCAPS);
> > > >
> > > >  struct alt_region {
> > > > @@ -193,11 +190,17 @@ static void __nocfi __apply_alternatives(struct alt_region *region, bool is_modu
> > > >  }
> > > >
> > > >  /*
> > > > - * We might be patching the stop_machine state machine, so implement a
> > > > - * really simple polling protocol here.
> > > > + * Apply alternatives, ensuring that no CPUs are concurrently executing code
> > > > + * being patched.
> > > > + *
> > > > + * We might be patching the stop_machine state machine or READ_ONCE(), so
> > > > + * we implement a simple polling protocol.
> > > >  */
> > > > static int __apply_alternatives_multi_stop(void *unused)
> > > > {
> > > > +	/* Volatile, as we may be patching the guts of READ_ONCE() */
> > > > +	static volatile int all_alternatives_applied;
> > > > +	static atomic_t stopped_cpus = ATOMIC_INIT(0);
> > > >  	struct alt_region region = {
> > > >  		.begin	= (struct alt_instr *)__alt_instructions,
> > > >  		.end	= (struct alt_instr *)__alt_instructions_end,
> > > > @@ -205,12 +208,16 @@ static int __apply_alternatives_multi_stop(void *unused)
> > > >
> > > >  	/* We always have a CPU 0 at this point (__init) */
> > > >  	if (smp_processor_id()) {
> > > > +		arch_atomic_inc(&stopped_cpus);
> > >
> > > Why can't we use normal atomic_inc() here?
> >
> > Ah, ok, this is to deal with instrumentation and you add 'noinstr' when you
> > factor this out later on. It does, however, mean that we need to be really
> > careful with this because we're relying on (a) our atomics patching using
> > static keys and (b) static key patching not requiring stop_machine().
> >
> > In particular, we cannot backport this to kernels where the atomics were
> > patched directly.
>
> Another option here would be to use the __ll_sc_*() atomics directly, which at
> least will break the build if backported too far?

Hopefully it's sufficient just to add the right Fixes: tag and stick the
kernel version on the CC stable line.

> > > >  		while (!all_alternatives_applied)
> > > >  			cpu_relax();
> > > >  		isb();
> > > >  	} else {
> > > >  		DECLARE_BITMAP(remaining_capabilities, ARM64_NPATCHABLE);
> > > >
> > > > +		while (arch_atomic_read(&stopped_cpus) != num_online_cpus() - 1)
> > >
> > > and normal atomic_read() here?
> >
> > This one I'm still thinking doesn't need the arch_ prefix.
>
> We could use a regular atomic_read() here, yes.
>
> I'd used the arch_atomic_*() form for consistency with the inc().

I'd rather only use the arch_* forms where they are strictly needed, and
have a comment justifying each use.
Will

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel