Date: Thu, 10 Aug 2023 20:37:55 +0200 (CEST)
From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, Borislav Petkov, Ashok Raj, Arjan van de Ven
Subject: [patch 20/30] x86/microcode: Sanitize __wait_for_cpus()
Message-ID: <20230810160806.103791682@linutronix.de>
References: <20230810153317.850017756@linutronix.de>

From: Thomas Gleixner

The code is too complicated for no reason:

 - The return value is pointless as this is a strict boolean.

 - It's way simpler to count down from num_online_cpus() and check for
   zero.

 - The timeout argument is pointless as this is always one second.

 - Touching the NMI watchdog every 100ns does not make any sense, neither
   does checking every 100ns. This is really not a hotpath operation.

Preload the atomic counter with the number of online CPUs and simplify the
whole timeout logic. Delay for one microsecond and touch the NMI watchdog
once per millisecond.

Signed-off-by: Thomas Gleixner
---
 arch/x86/kernel/cpu/microcode/core.c | 41 ++++++++++++++---------------------
 1 file changed, 17 insertions(+), 24 deletions(-)
---
--- a/arch/x86/kernel/cpu/microcode/core.c
+++ b/arch/x86/kernel/cpu/microcode/core.c
@@ -324,31 +324,24 @@ static struct platform_device *microcode
  * requirement can be relaxed in the future. Right now, this is conservative
  * and good.
  */
-#define SPINUNIT 100 /* 100 nsec */
+static atomic_t late_cpus_in, late_cpus_out;
 
-
-static atomic_t late_cpus_in;
-static atomic_t late_cpus_out;
-
-static int __wait_for_cpus(atomic_t *t, long long timeout)
+static bool wait_for_cpus(atomic_t *cnt)
 {
-        int all_cpus = num_online_cpus();
-
-        atomic_inc(t);
-
-        while (atomic_read(t) < all_cpus) {
-                if (timeout < SPINUNIT) {
-                        pr_err("Timeout while waiting for CPUs rendezvous, remaining: %d\n",
-                                all_cpus - atomic_read(t));
-                        return 1;
-                }
+        unsigned int timeout;
 
-                ndelay(SPINUNIT);
-                timeout -= SPINUNIT;
+        WARN_ON_ONCE(atomic_dec_return(cnt) < 0);
 
-                touch_nmi_watchdog();
+        for (timeout = 0; timeout < USEC_PER_SEC; timeout++) {
+                if (!atomic_read(cnt))
+                        return true;
+                udelay(1);
+                if (!(timeout % 1000))
+                        touch_nmi_watchdog();
         }
-        return 0;
+        /* Prevent late comers from making progress and let them time out */
+        atomic_inc(cnt);
+        return false;
 }
 
 /*
@@ -366,7 +359,7 @@ static int __reload_late(void *info)
          * Wait for all CPUs to arrive. A load will not be attempted unless all
          * CPUs show up.
          * */
-        if (__wait_for_cpus(&late_cpus_in, NSEC_PER_SEC))
+        if (!wait_for_cpus(&late_cpus_in))
                 return -1;
 
         /*
@@ -389,7 +382,7 @@ static int __reload_late(void *info)
         }
 
 wait_for_siblings:
-        if (__wait_for_cpus(&late_cpus_out, NSEC_PER_SEC))
+        if (!wait_for_cpus(&late_cpus_out))
                 panic("Timeout during microcode update!\n");
 
         /*
@@ -416,8 +409,8 @@ static int microcode_reload_late(void)
         pr_err("Attempting late microcode loading - it is dangerous and taints the kernel.\n");
         pr_err("You should switch to early loading, if possible.\n");
 
-        atomic_set(&late_cpus_in, 0);
-        atomic_set(&late_cpus_out, 0);
+        atomic_set(&late_cpus_in, num_online_cpus());
+        atomic_set(&late_cpus_out, num_online_cpus());
 
         /*
          * Take a snapshot before the microcode update in order to compare and
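
For reference, the countdown rendezvous above is easy to model in user
space. The sketch below is illustrative only, not kernel code: POSIX
threads, C11 atomics and usleep() stand in for the kernel's atomic_t,
udelay() and touch_nmi_watchdog(), and the names wait_for_all(), worker()
and NTHREADS are made up for this sketch. It mirrors the structure of
wait_for_cpus(): preload the counter, count down on arrival, poll with a
bounded budget, and re-increment on timeout.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

#define NTHREADS        4
#define USEC_PER_SEC    1000000U

static atomic_int rendezvous;

static bool wait_for_all(atomic_int *cnt)
{
        unsigned int timeout;

        /* Announce arrival by counting down the preloaded counter. */
        atomic_fetch_sub(cnt, 1);

        /* Poll with ~1us sleeps; give up after USEC_PER_SEC attempts. */
        for (timeout = 0; timeout < USEC_PER_SEC; timeout++) {
                if (!atomic_load(cnt))
                        return true;
                usleep(1);
        }

        /* Keep the counter nonzero so late arrivals time out as well. */
        atomic_fetch_add(cnt, 1);
        return false;
}

static void *worker(void *arg)
{
        long id = (long)arg;

        printf("thread %ld: %s\n", id,
               wait_for_all(&rendezvous) ? "all arrived" : "timed out");
        return NULL;
}

int main(void)
{
        pthread_t tid[NTHREADS];
        long i;

        /* Preload with the number of participants, as the patch does. */
        atomic_store(&rendezvous, NTHREADS);

        for (i = 0; i < NTHREADS; i++)
                pthread_create(&tid[i], NULL, worker, (void *)i);
        for (i = 0; i < NTHREADS; i++)
                pthread_join(tid[i], NULL);
        return 0;
}

Counting down from the number of participants is what makes the timeout
path so simple: re-incrementing the counter guarantees that a late comer
can never observe zero, so it times out as well instead of sneaking
through the rendezvous.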