From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, David Woodhouse, Andrew Cooper, Brian Gerst, Arjan van de Veen, Paolo Bonzini, Paul McKenney, Tom Lendacky, Sean Christopherson, Oleksandr Natalenko, Paul Menzel, "Guilherme G. Piccoli", Piotr Gorski, Usama Arif, Juergen Gross, Boris Ostrovsky, xen-devel@lists.xenproject.org, Russell King, Arnd Bergmann, linux-arm-kernel@lists.infradead.org, Catalin Marinas, Will Deacon, Guo Ren, linux-csky@vger.kernel.org, Thomas Bogendoerfer, linux-mips@vger.kernel.org, "James E.J. Bottomley", Helge Deller, linux-parisc@vger.kernel.org, Paul Walmsley, Palmer Dabbelt, linux-riscv@lists.infradead.org, Mark Rutland, Sabin Rapan, "Michael Kelley (LINUX)"
Subject: [patch v3 13/36] x86/smpboot: Remove cpu_callin_mask
Date: Mon, 8 May 2023 21:43:47 +0200 (CEST)
Message-ID: <20230508185217.956149661@linutronix.de>
References: <20230508181633.089804905@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

From: Thomas Gleixner

Now that TSC synchronization is SMP function call based, there is no
reason to wait for the AP to be set in cpu_callin_mask. The control CPU
waits for the AP to set itself in the online mask anyway.
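[Editorial sketch, not part of the patch or of the kernel sources: the point
above can be modelled in user space. Once the control side waits only for the
AP's online bit, which the AP sets at the very end of its bringup, a separate
intermediate "callin" bit that the control side also has to poll carries no
extra information. The toy model below uses pthreads and C11 atomics; every
name in it (ap_thread, wait_cpu_online_model, cpu_online) is invented for the
illustration and is not a kernel API.]

/* Toy model of the remaining control-CPU/AP handshake.
 * Build: cc -pthread toy_online_wait.c
 */
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>

/* Models the per-CPU online bit the AP sets in set_cpu_online() */
static atomic_bool cpu_online = false;

/* Models start_secondary(): local setup, then mark ourselves online */
static void *ap_thread(void *arg)
{
	(void)arg;
	/* ... per-CPU init, topology setup, calibration ... */
	atomic_store(&cpu_online, true);
	return NULL;
}

/* Models wait_cpu_online(): the only wait the control side still needs */
static void wait_cpu_online_model(void)
{
	while (!atomic_load(&cpu_online))
		sched_yield();	/* stands in for schedule() */
}

int main(void)
{
	pthread_t ap;

	if (pthread_create(&ap, NULL, ap_thread, NULL))
		return 1;
	wait_cpu_online_model();
	printf("AP reported itself online\n");
	pthread_join(&ap, NULL);
	return 0;
}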
Signed-off-by: Thomas Gleixner
Tested-by: Michael Kelley
---
 arch/x86/kernel/smpboot.c |   61 +++++++---------------------------------------
 1 file changed, 10 insertions(+), 51 deletions(-)
---
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -104,7 +104,6 @@ EXPORT_PER_CPU_SYMBOL(cpu_info);
 /* All of these masks are initialized in setup_cpu_local_masks() */
 static cpumask_var_t cpu_initialized_mask;
 static cpumask_var_t cpu_callout_mask;
-static cpumask_var_t cpu_callin_mask;
 
 /* Representing CPUs for which sibling maps can be computed */
 static cpumask_var_t cpu_sibling_setup_mask;
@@ -167,21 +166,16 @@ static inline void smpboot_restore_warm_
  */
 static void smp_callin(void)
 {
-	int cpuid;
+	int cpuid = smp_processor_id();
 
 	/*
 	 * If waken up by an INIT in an 82489DX configuration
-	 * cpu_callout_mask guarantees we don't get here before
-	 * an INIT_deassert IPI reaches our local APIC, so it is
-	 * now safe to touch our local APIC.
-	 */
-	cpuid = smp_processor_id();
-
-	/*
-	 * the boot CPU has finished the init stage and is spinning
-	 * on callin_map until we finish. We are free to set up this
-	 * CPU, first the APIC. (this is probably redundant on most
-	 * boards)
+	 * cpu_callout_mask guarantees we don't get here before an
+	 * INIT_deassert IPI reaches our local APIC, so it is now safe to
+	 * touch our local APIC.
+	 *
+	 * Set up this CPU, first the APIC, which is probably redundant on
+	 * most boards.
 	 */
 	apic_ap_setup();
 
@@ -192,7 +186,7 @@ static void smp_callin(void)
 	 * The topology information must be up to date before
 	 * calibrate_delay() and notify_cpu_starting().
 	 */
-	set_cpu_sibling_map(raw_smp_processor_id());
+	set_cpu_sibling_map(cpuid);
 
 	ap_init_aperfmperf();
 
@@ -205,11 +199,6 @@ static void smp_callin(void)
 	 * state (CPUHP_ONLINE in the case of serial bringup).
 	 */
 	notify_cpu_starting(cpuid);
-
-	/*
-	 * Allow the master to continue.
-	 */
-	cpumask_set_cpu(cpuid, cpu_callin_mask);
 }
 
 static void ap_calibrate_delay(void)
@@ -268,11 +257,6 @@ static void notrace start_secondary(void
 	rcu_cpu_starting(raw_smp_processor_id());
 	x86_cpuinit.early_percpu_clock_init();
 
-	/*
-	 * Sync point with wait_cpu_callin(). The AP doesn't wait here
-	 * but just sets the bit to let the controlling CPU (BSP) know that
-	 * it's got this far.
-	 */
 	smp_callin();
 
 	/* Otherwise gcc will move up smp_processor_id() before cpu_init() */
@@ -1112,7 +1096,7 @@ static int wait_cpu_cpumask(unsigned int
  * and thus wait_for_master_cpu(), then set cpu_callout_mask to allow it
  * to proceed. The AP will then proceed past setting its 'callin' bit
  * and end up waiting in check_tsc_sync_target() until we reach
- * do_wait_cpu_online() to tend to it.
+ * wait_cpu_online() to tend to it.
  */
 static int wait_cpu_initialized(unsigned int cpu)
 {
@@ -1127,20 +1111,7 @@ static int wait_cpu_initialized(unsigned
 }
 
 /*
- * Bringup step three: Wait for the target AP to reach smp_callin().
- * The AP is not waiting for us here so we don't need to parallelise
- * this step. Not entirely clear why we care about this, since we just
- * proceed directly to TSC synchronization which is the next sync
- * point with the AP anyway.
- */
-static void wait_cpu_callin(unsigned int cpu)
-{
-	while (!cpumask_test_cpu(cpu, cpu_callin_mask))
-		schedule();
-}
-
-/*
- * Bringup step four: Wait for the target AP to reach set_cpu_online() in
+ * Bringup step three: Wait for the target AP to reach set_cpu_online() in
  * start_secondary().
  */
 static void wait_cpu_online(unsigned int cpu)
@@ -1170,14 +1141,6 @@ static int native_kick_ap(unsigned int c
 	}
 
 	/*
-	 * Already booted CPU?
-	 */
-	if (cpumask_test_cpu(cpu, cpu_callin_mask)) {
-		pr_debug("do_boot_cpu %d Already started\n", cpu);
-		return -ENOSYS;
-	}
-
-	/*
 	 * Save current MTRR state in case it was changed since early boot
 	 * (e.g. by the ACPI SMI) to initialize new CPUs with MTRRs in sync:
 	 */
@@ -1214,7 +1177,6 @@ int native_cpu_up(unsigned int cpu, stru
 	if (ret)
 		goto out;
 
-	wait_cpu_callin(cpu);
 	wait_cpu_online(cpu);
 
 out:
@@ -1330,7 +1292,6 @@ void __init smp_prepare_cpus_common(void
 	 * Setup boot CPU information
 	 */
 	smp_store_boot_cpu_info(); /* Final full version of the data */
-	cpumask_copy(cpu_callin_mask, cpumask_of(0));
 	mb();
 
 	for_each_possible_cpu(i) {
@@ -1545,7 +1506,6 @@ early_param("possible_cpus", _setup_poss
 void __init setup_cpu_local_masks(void)
 {
 	alloc_bootmem_cpumask_var(&cpu_initialized_mask);
-	alloc_bootmem_cpumask_var(&cpu_callin_mask);
 	alloc_bootmem_cpumask_var(&cpu_callout_mask);
 	alloc_bootmem_cpumask_var(&cpu_sibling_setup_mask);
 }
@@ -1609,7 +1569,6 @@ static void remove_cpu_from_maps(int cpu
 {
 	set_cpu_online(cpu, false);
 	cpumask_clear_cpu(cpu, cpu_callout_mask);
-	cpumask_clear_cpu(cpu, cpu_callin_mask);
 	/* was set by cpu_init() */
 	cpumask_clear_cpu(cpu, cpu_initialized_mask);
 	numa_remove_cpu(cpu);