Date: Mon, 13 Dec 2021 13:49:07 +0000
From: Mark Rutland
To: Will Deacon
Cc: linux-arm-kernel@lists.infradead.org, andre.przywara@arm.com,
	ardb@kernel.org, catalin.marinas@arm.com, james.morse@arm.com,
	joey.gouly@arm.com, suzuki.poulose@arm.com
Subject: Re: [PATCH 1/4] arm64: alternative: wait for other CPUs before patching
References: <20211203104723.3412383-1-mark.rutland@arm.com>
	<20211203104723.3412383-2-mark.rutland@arm.com>
	<20211213133152.GB11570@willie-the-truck>
In-Reply-To: <20211213133152.GB11570@willie-the-truck>

On Mon, Dec 13, 2021 at 01:31:52PM +0000, Will Deacon wrote:
> On Fri, Dec 03, 2021 at 10:47:20AM +0000, Mark Rutland wrote:
> > In __apply_alternatives_multi_stop() we have a "really simple polling
> > protocol" to avoid patching code that is concurrently executed on other
> > CPUs.
> > Secondary CPUs wait for the boot CPU to signal that patching is
> > complete, but the boot CPU doesn't wait for secondaries to enter the
> > polling loop, and it's possible that patching starts while secondaries
> > are still within the stop_machine logic.
> > 
> > Let's fix this by adding a vaguely simple polling protocol where the
> > boot CPU waits for secondaries to signal that they have entered the
> > unpatchable stop function. We can use the arch_atomic_*() functions for
> > this, as they are not patched with alternatives.
> > 
> > At the same time, let's make `all_alternatives_applied` local to
> > __apply_alternatives_multi_stop(), since it is only used there, and this
> > makes the code a little clearer.
> > 
> > Signed-off-by: Mark Rutland
> > Cc: Andre Przywara
> > Cc: Ard Biesheuvel
> > Cc: Catalin Marinas
> > Cc: James Morse
> > Cc: Joey Gouly
> > Cc: Suzuki K Poulose
> > Cc: Will Deacon
> > ---
> >  arch/arm64/kernel/alternative.c | 17 ++++++++++++-----
> >  1 file changed, 12 insertions(+), 5 deletions(-)
> > 
> > diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
> > index 3fb79b76e9d9..4f32d4425aac 100644
> > --- a/arch/arm64/kernel/alternative.c
> > +++ b/arch/arm64/kernel/alternative.c
> > @@ -21,9 +21,6 @@
> >  #define ALT_ORIG_PTR(a)		__ALT_PTR(a, orig_offset)
> >  #define ALT_REPL_PTR(a)		__ALT_PTR(a, alt_offset)
> >  
> > -/* Volatile, as we may be patching the guts of READ_ONCE() */
> > -static volatile int all_alternatives_applied;
> > -
> >  static DECLARE_BITMAP(applied_alternatives, ARM64_NCAPS);
> >  
> >  struct alt_region {
> > @@ -193,11 +190,17 @@ static void __nocfi __apply_alternatives(struct alt_region *region, bool is_modu
> >  }
> >  
> >  /*
> > - * We might be patching the stop_machine state machine, so implement a
> > - * really simple polling protocol here.
> > + * Apply alternatives, ensuring that no CPUs are concurrently executing code
> > + * being patched.
> > + *
> > + * We might be patching the stop_machine state machine or READ_ONCE(), so
> > + * we implement a simple polling protocol.
> >   */
> >  static int __apply_alternatives_multi_stop(void *unused)
> >  {
> > +	/* Volatile, as we may be patching the guts of READ_ONCE() */
> > +	static volatile int all_alternatives_applied;
> > +	static atomic_t stopped_cpus = ATOMIC_INIT(0);
> >  	struct alt_region region = {
> >  		.begin	= (struct alt_instr *)__alt_instructions,
> >  		.end	= (struct alt_instr *)__alt_instructions_end,
> > @@ -205,12 +208,16 @@ static int __apply_alternatives_multi_stop(void *unused)
> >  
> >  	/* We always have a CPU 0 at this point (__init) */
> >  	if (smp_processor_id()) {
> > +		arch_atomic_inc(&stopped_cpus);
> 
> Why can't we use normal atomic_inc() here?

In case there's any explicit instrumentation enabled in the atomic_inc()
wrapper, since the instrumentation code may call into patchable code.

Today we'd get away with using atomic_inc(), since currently all the
instrumentation happens to be prior to the actual AMO, but generally to
avoid instrumentation we're supposed to use the arch_atomic_*() ops.

There are some other latent issues with calling into instrumentable code
here, which I plan to address in future patches, so if you want I can
make this a regular atomic_inc() for now and tackle that as a separate
problem. Otherwise, I can elaborate on the mention in the commit message
to make that clearer.
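To make that concrete: with KASAN/KCSAN enabled, the generated
atomic_inc() wrapper is shaped roughly like this (a from-memory sketch
of the atomic-instrumented.h code, not verbatim):

	/*
	 * Sketch of an instrumented wrapper: the instrumentation hook
	 * runs before the real AMO, and may itself call into patchable
	 * code (e.g. the KASAN/KCSAN runtime).
	 */
	static __always_inline void
	atomic_inc(atomic_t *v)
	{
		instrument_atomic_read_write(v, sizeof(*v));
		arch_atomic_inc(v);
	}

So a secondary CPU going through atomic_inc() here could wander into
half-patched instrumentation code, which is exactly what this stop
function is trying to rule out; arch_atomic_inc() skips the hook
entirely.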
> >  		while (!all_alternatives_applied)
> >  			cpu_relax();
> >  		isb();
> >  	} else {
> >  		DECLARE_BITMAP(remaining_capabilities, ARM64_NPATCHABLE);
> >  
> > +		while (arch_atomic_read(&stopped_cpus) != num_online_cpus() - 1)
> 
> and normal atomic_read() here?

Same story as above.

Thanks,
Mark.
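P.S. Putting the quoted hunks together, the handshake ends up shaped
roughly as below (a simplified sketch; the bitmap handling and the boot
CPU's actual call to __apply_alternatives() are elided):

	static int __apply_alternatives_multi_stop(void *unused)
	{
		/* Volatile, as we may be patching the guts of READ_ONCE() */
		static volatile int all_alternatives_applied;
		static atomic_t stopped_cpus = ATOMIC_INIT(0);

		if (smp_processor_id()) {
			/* Secondary: signal arrival, spin until patching is done */
			arch_atomic_inc(&stopped_cpus);
			while (!all_alternatives_applied)
				cpu_relax();
			isb();
		} else {
			/* Boot CPU: wait for every secondary to arrive ... */
			while (arch_atomic_read(&stopped_cpus) != num_online_cpus() - 1)
				cpu_relax();

			/* ... patch the kernel text (elided) ... */

			/* ... then release the secondaries. */
			all_alternatives_applied = 1;
		}

		return 0;
	}

Neither path uses an instrumentable atomic, so no CPU can stray into
patchable code while another CPU is rewriting it.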