From mboxrd@z Thu Jan 1 00:00:00 1970
From: David Woodhouse
To: Thomas Gleixner
Cc: Andy Lutomirski, "shenkai (D)", mimoja@amazon.com, LKML, Ingo Molnar,
	Borislav Petkov, X86 ML, "H. Peter Anvin", hewenliang4@huawei.com,
	hushiyuan@huawei.com, luolongjun@huawei.com, hejingxian@huawei.com
Subject: [PATCH 4/6] x86/smpboot: Split up native_cpu_up into separate phases
Date: Mon, 1 Feb 2021 10:38:33 +0000
Message-Id: <20210201103835.1043254-4-dwmw@amazon.co.uk>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20210201103835.1043254-1-dwmw@amazon.co.uk>
References: <478c56af540eaa47b8f452e51cb0c085f26db738.camel@infradead.org>
	<20210201103835.1043254-1-dwmw@amazon.co.uk>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Mailing-List: linux-kernel@vger.kernel.org

There are four logical parts to what native_cpu_up() does.

First, it actually wakes the AP.

Second, it waits for the AP to make it as far as wait_for_master_cpu(),
which sets that CPU's bit in cpu_initialized_mask, and then sets the bit
in cpu_callout_mask to let the AP proceed through cpu_init().

Then, it waits for the AP to finish cpu_init() and get as far as the
smp_callin() call, which sets that CPU's bit in cpu_callin_mask.

Finally, it does the TSC synchronization and waits for the AP to actually
mark itself online in cpu_online_mask.

This commit should have no behavioural change, but merely splits those
phases out into separate functions so that future commits can make them
happen in parallel for all APs.

Signed-off-by: David Woodhouse
---
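For reference, the net effect of the refactoring below is that
native_cpu_up() becomes a thin wrapper which simply runs the four phases
back to back for a single CPU. This is the new native_cpu_up() from the
final hunk, condensed (the comment block around the warm-reset cleanup is
elided) and annotated with the phase numbering used in the description
above:

	int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
	{
		int ret;

		/* Phase 1: set up the idle task and actually wake the AP */
		ret = do_cpu_up(cpu, tidle);
		if (ret)
			return ret;

		/* Phase 2: wait for cpu_initialized_mask, then set cpu_callout_mask */
		ret = do_wait_cpu_initialized(cpu);
		if (ret)
			return ret;

		/* Phase 3: wait for the AP to reach smp_callin() (cpu_callin_mask) */
		ret = do_wait_cpu_callin(cpu);
		if (ret)
			return ret;

		/* Phase 4: TSC sync, then wait for the AP to appear in cpu_online_mask */
		ret = do_wait_cpu_online(cpu);

		if (x86_platform.legacy.warm_reset)
			smpboot_restore_warm_reset_vector();

		return ret;
	}

Keeping the wait phases as separate functions is what would allow a later
patch to issue the wakeups for all APs first and only then perform the
waits, as the commit message above suggests.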
 arch/x86/kernel/smpboot.c | 136 +++++++++++++++++++++++---------------
 1 file changed, 82 insertions(+), 54 deletions(-)

diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index bec0059d3d3d..649b8236309b 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1039,9 +1039,7 @@ static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle,
 {
 	/* start_ip had better be page-aligned! */
 	unsigned long start_ip = real_mode_header->trampoline_start;
 
-	unsigned long boot_error = 0;
-	unsigned long timeout;
 
 	idle->thread.sp = (unsigned long)task_pt_regs(idle);
 	early_gdt_descr.address = (unsigned long)get_cpu_gdt_rw(cpu);
@@ -1094,55 +1092,70 @@ static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle,
 		boot_error = wakeup_cpu_via_init_nmi(cpu, start_ip, apicid,
 						     cpu0_nmi_registered);
 
-	if (!boot_error) {
-		/*
-		 * Wait 10s total for first sign of life from AP
-		 */
-		boot_error = -1;
-		timeout = jiffies + 10*HZ;
-		while (time_before(jiffies, timeout)) {
-			if (cpumask_test_cpu(cpu, cpu_initialized_mask)) {
-				/*
-				 * Tell AP to proceed with initialization
-				 */
-				cpumask_set_cpu(cpu, cpu_callout_mask);
-				boot_error = 0;
-				break;
-			}
-			schedule();
-		}
-	}
+	return boot_error;
+}
 
-	if (!boot_error) {
-		/*
-		 * Wait till AP completes initial initialization
-		 */
-		while (!cpumask_test_cpu(cpu, cpu_callin_mask)) {
-			/*
-			 * Allow other tasks to run while we wait for the
-			 * AP to come online. This also gives a chance
-			 * for the MTRR work(triggered by the AP coming online)
-			 * to be completed in the stop machine context.
-			 */
-			schedule();
-		}
+static int do_wait_cpu_cpumask(unsigned int cpu, const struct cpumask *mask)
+{
+	unsigned long timeout;
+
+	/*
+	 * Wait up to 10s for the CPU to report in.
+	 */
+	timeout = jiffies + 10*HZ;
+	while (time_before(jiffies, timeout)) {
+		if (cpumask_test_cpu(cpu, mask))
+			return 0;
+
+		schedule();
 	}
+	return -1;
+}
 
-	if (x86_platform.legacy.warm_reset) {
-		/*
-		 * Cleanup possible dangling ends...
-		 */
-		smpboot_restore_warm_reset_vector();
+static int do_wait_cpu_initialized(unsigned int cpu)
+{
+	/*
+	 * Wait for first sign of life from AP.
+	 */
+	if (do_wait_cpu_cpumask(cpu, cpu_initialized_mask))
+		return -1;
+
+	cpumask_set_cpu(cpu, cpu_callout_mask);
+	return 0;
+}
+
+static int do_wait_cpu_callin(unsigned int cpu)
+{
+	/*
+	 * Wait till AP completes initial initialization.
+	 */
+	return do_wait_cpu_cpumask(cpu, cpu_callin_mask);
+}
+
+static int do_wait_cpu_online(unsigned int cpu)
+{
+	unsigned long flags;
+
+	/*
+	 * Check TSC synchronization with the AP (keep irqs disabled
+	 * while doing so):
+	 */
+	local_irq_save(flags);
+	check_tsc_sync_source(cpu);
+	local_irq_restore(flags);
+
+	while (!cpu_online(cpu)) {
+		cpu_relax();
+		touch_nmi_watchdog();
 	}
 
-	return boot_error;
+	return 0;
 }
 
-int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
+int do_cpu_up(unsigned int cpu, struct task_struct *tidle)
 {
 	int apicid = apic->cpu_present_to_apicid(cpu);
 	int cpu0_nmi_registered = 0;
-	unsigned long flags;
 	int err, ret = 0;
 
 	lockdep_assert_irqs_enabled();
@@ -1189,19 +1202,6 @@ int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
 		goto unreg_nmi;
 	}
 
-	/*
-	 * Check TSC synchronization with the AP (keep irqs disabled
-	 * while doing so):
-	 */
-	local_irq_save(flags);
-	check_tsc_sync_source(cpu);
-	local_irq_restore(flags);
-
-	while (!cpu_online(cpu)) {
-		cpu_relax();
-		touch_nmi_watchdog();
-	}
-
 unreg_nmi:
 	/*
 	 * Clean up the nmi handler. Do this after the callin and callout sync
@@ -1213,6 +1213,34 @@ int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
 	return ret;
 }
 
+int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
+{
+	int ret;
+
+	ret = do_cpu_up(cpu, tidle);
+	if (ret)
+		return ret;
+
+	ret = do_wait_cpu_initialized(cpu);
+	if (ret)
+		return ret;
+
+	ret = do_wait_cpu_callin(cpu);
+	if (ret)
+		return ret;
+
+	ret = do_wait_cpu_online(cpu);
+
+	if (x86_platform.legacy.warm_reset) {
+		/*
+		 * Cleanup possible dangling ends...
+		 */
+		smpboot_restore_warm_reset_vector();
+	}
+
+	return ret;
+}
+
 /**
  * arch_disable_smp_support() - disables SMP support for x86 at runtime
  */
-- 
2.29.2