From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org, David Woodhouse <dwmw2@infradead.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Brian Gerst <brgerst@gmail.com>,
	Arjan van de Veen <arjan@linux.intel.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Paul McKenney <paulmck@kernel.org>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Sean Christopherson <seanjc@google.com>,
	Oleksandr Natalenko <oleksandr@natalenko.name>,
	Paul Menzel <pmenzel@molgen.mpg.de>,
	"Guilherme G. Piccoli" <gpiccoli@igalia.com>,
	Piotr Gorski <lucjan.lucjanov@gmail.com>,
	Usama Arif <usama.arif@bytedance.com>,
	Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org,
	Russell King <linux@armlinux.org.uk>,
	Arnd Bergmann <arnd@arndb.de>,
	linux-arm-kernel@lists.infradead.org,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>, Guo Ren <guoren@kernel.org>,
	linux-csky@vger.kernel.org,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	linux-mips@vger.kernel.org,
	"James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
	Helge Deller <deller@gmx.de>,
	linux-parisc@vger.kernel.org,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	linux-riscv@lists.infradead.org,
	Mark Rutland <mark.rutland@arm.com>,
	Sabin Rapan <sabrapan@amazon.com>,
	"Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch v3 36/36] x86/smpboot/64: Implement arch_cpuhp_init_parallel_bringup() and enable it
Date: Mon,  8 May 2023 21:44:25 +0200 (CEST)
Message-ID: <20230508185219.230287961@linutronix.de>
In-Reply-To: 20230508181633.089804905@linutronix.de

From: Thomas Gleixner <tglx@linutronix.de>

Implement the validation function which tells the core code whether
parallel bringup is possible.

The only condition for now is that the kernel does not run in an encrypted
guest, as such guests trap the RDMSR via #VC, which cannot be handled at
that point in early startup.
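
As a rough illustration (not part of this patch), the access in question
is the read of the x2APIC ID register which the parallel startup path has
to perform. A minimal sketch, with an illustrative helper name:

    /*
     * Illustration only: in x2APIC mode the CPU reads its own APIC ID
     * from MSR 0x802 (IA32_X2APIC_APICID). In an SEV-ES/SNP or TDX guest
     * this RDMSR is intercepted and raises #VC/#VE, which the early
     * startup code cannot handle at that point.
     */
    static inline unsigned int x2apic_read_id(void)
    {
            unsigned int lo, hi;

            asm volatile("rdmsr" : "=a" (lo), "=d" (hi) : "c" (0x802));
            return lo;
    }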

There was an earlier variant for AMD-SEV which used the GHCB protocol for
retrieving the APIC ID via CPUID, but there is no guarantee that the
initial APIC ID in CPUID is the same as the real APIC ID. There is no
enforcement from the secure firmware, and the hypervisor can assign APIC
IDs as it sees fit as long as the ACPI/MADT table is consistent with that
assignment.
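
For reference, a minimal sketch (again not part of this patch, helper name
illustrative) of reading that "initial" APIC ID from CPUID leaf 0xb;
nothing forces this value to match the APIC ID in the ACPI/MADT tables:

    /*
     * Illustration only: CPUID leaf 0xb, subleaf 0, reports the x2APIC ID
     * of the current logical CPU in EDX. Under SEV-ES this leaf can be
     * retrieved via the GHCB protocol, but the secure firmware does not
     * enforce that it matches the real APIC ID.
     */
    static inline unsigned int cpuid_initial_apic_id(void)
    {
            unsigned int eax, ebx, ecx, edx;

            asm volatile("cpuid"
                         : "=a" (eax), "=b" (ebx), "=c" (ecx), "=d" (edx)
                         : "0" (0xb), "2" (0));
            return edx;
    }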

Unfortunately there is no RDMSR GHCB protocol at the moment, so enabling
AMD-SEV guests for parallel startup needs some more thought.

Intel-TDX provides a secure RDMSR hypercall, but supporting that is outside
the scope of this change.

Fix up announce_cpu(), as e.g. on Hyper-V CPU1 is the secondary sibling of
CPU0, which makes the @cpu == 1 logic in announce_cpu() fall apart.

[ mikelley: Reported the announce_cpu() fallout ]

Originally-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>

---
V2: Fixup announce_cpu() - Michael Kelley
V3: Fixup announce_cpu() for real - Michael Kelley
---
 arch/x86/Kconfig             |    3 -
 arch/x86/kernel/cpu/common.c |    6 --
 arch/x86/kernel/smpboot.c    |   87 +++++++++++++++++++++++++++++++++++--------
 3 files changed, 75 insertions(+), 21 deletions(-)
---

--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -274,8 +274,9 @@ config X86
 	select HAVE_UNSTABLE_SCHED_CLOCK
 	select HAVE_USER_RETURN_NOTIFIER
 	select HAVE_GENERIC_VDSO
+	select HOTPLUG_PARALLEL			if SMP && X86_64
 	select HOTPLUG_SMT			if SMP
-	select HOTPLUG_SPLIT_STARTUP		if SMP
+	select HOTPLUG_SPLIT_STARTUP		if SMP && X86_32
 	select IRQ_FORCED_THREADING
 	select NEED_PER_CPU_EMBED_FIRST_CHUNK
 	select NEED_PER_CPU_PAGE_FIRST_CHUNK
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -2128,11 +2128,7 @@ static inline void setup_getcpu(int cpu)
 }
 
 #ifdef CONFIG_X86_64
-static inline void ucode_cpu_init(int cpu)
-{
-	if (cpu)
-		load_ucode_ap();
-}
+static inline void ucode_cpu_init(int cpu) { }
 
 static inline void tss_setup_ist(struct tss_struct *tss)
 {
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -58,6 +58,7 @@
 #include <linux/overflow.h>
 #include <linux/stackprotector.h>
 #include <linux/cpuhotplug.h>
+#include <linux/mc146818rtc.h>
 
 #include <asm/acpi.h>
 #include <asm/cacheinfo.h>
@@ -75,7 +76,7 @@
 #include <asm/fpu/api.h>
 #include <asm/setup.h>
 #include <asm/uv/uv.h>
-#include <linux/mc146818rtc.h>
+#include <asm/microcode.h>
 #include <asm/i8259.h>
 #include <asm/misc.h>
 #include <asm/qspinlock.h>
@@ -128,7 +129,6 @@ int arch_update_cpu_topology(void)
 	return retval;
 }
 
-
 static unsigned int smpboot_warm_reset_vector_count;
 
 static inline void smpboot_setup_warm_reset_vector(unsigned long start_eip)
@@ -229,16 +229,43 @@ static void notrace start_secondary(void
 	 */
 	cr4_init();
 
-#ifdef CONFIG_X86_32
-	/* switch away from the initial page table */
-	load_cr3(swapper_pg_dir);
-	__flush_tlb_all();
-#endif
+	/*
+	 * 32-bit specific. 64-bit reaches this code with the correct page
+	 * table established. Yet another historical divergence.
+	 */
+	if (IS_ENABLED(CONFIG_X86_32)) {
+		/* switch away from the initial page table */
+		load_cr3(swapper_pg_dir);
+		__flush_tlb_all();
+	}
+
 	cpu_init_exception_handling();
 
 	/*
-	 * Synchronization point with the hotplug core. Sets the
-	 * synchronization state to ALIVE and waits for the control CPU to
+	 * 32-bit systems load the microcode from the ASM startup code for
+	 * historical reasons.
+	 *
+	 * On 64-bit systems load it before reaching the AP alive
+	 * synchronization point below so it is not part of the full per
+	 * CPU serialized bringup part when "parallel" bringup is enabled.
+	 *
+	 * That's even safe when hyperthreading is enabled in the CPU as
+	 * the core code starts the primary threads first and leaves the
+	 * secondary threads waiting for SIPI. Loading microcode on
+	 * physical cores concurrently is a safe operation.
+	 *
+	 * This covers both the Intel specific issue that concurrent
+	 * microcode loading on SMT siblings must be prohibited and the
+	 * vendor independent issue that microcode loading which changes
+	 * CPUID, MSRs etc. must be strictly serialized to maintain
+	 * software state correctness.
+	 */
+	if (IS_ENABLED(CONFIG_X86_64))
+		load_ucode_ap();
+
+	/*
+	 * Synchronization point with the hotplug core. Sets this CPU's
+	 * synchronization state to ALIVE and spin-waits for the control CPU to
 	 * release this CPU for further bringup.
 	 */
 	cpuhp_ap_sync_alive();
@@ -924,9 +951,9 @@ static int wakeup_secondary_cpu_via_init
 /* reduce the number of lines printed when booting a large cpu count system */
 static void announce_cpu(int cpu, int apicid)
 {
+	static int width, node_width, first = 1;
 	static int current_node = NUMA_NO_NODE;
 	int node = early_cpu_to_node(cpu);
-	static int width, node_width;
 
 	if (!width)
 		width = num_digits(num_possible_cpus()) + 1; /* + '#' sign */
@@ -934,10 +961,10 @@ static void announce_cpu(int cpu, int ap
 	if (!node_width)
 		node_width = num_digits(num_possible_nodes()) + 1; /* + '#' */
 
-	if (cpu == 1)
-		printk(KERN_INFO "x86: Booting SMP configuration:\n");
-
 	if (system_state < SYSTEM_RUNNING) {
+		if (first)
+			pr_info("x86: Booting SMP configuration:\n");
+
 		if (node != current_node) {
 			if (current_node > (-1))
 				pr_cont("\n");
@@ -948,11 +975,11 @@ static void announce_cpu(int cpu, int ap
 		}
 
 		/* Add padding for the BSP */
-		if (cpu == 1)
+		if (first)
 			pr_cont("%*s", width + 1, " ");
+		first = 0;
 
 		pr_cont("%*s#%d", width - num_digits(cpu), " ", cpu);
-
 	} else
 		pr_info("Booting Node %d Processor %d APIC 0x%x\n",
 			node, cpu, apicid);
@@ -1242,6 +1269,36 @@ void __init smp_prepare_cpus_common(void
 	set_cpu_sibling_map(0);
 }
 
+#ifdef CONFIG_X86_64
+/* Establish whether parallel bringup can be supported. */
+bool __init arch_cpuhp_init_parallel_bringup(void)
+{
+	/*
+	 * Encrypted guests require special handling. They enforce X2APIC
+	 * mode but the RDMSR to read the APIC ID is intercepted and raises
+	 * #VC or #VE which cannot be handled in the early startup code.
+	 *
+	 * AMD-SEV does not provide an RDMSR GHCB protocol, so the early
+	 * startup code cannot directly communicate with the secure
+	 * firmware. The alternative solution to retrieve the APIC ID via
+	 * CPUID(0xb), which is covered by the GHCB protocol, is not viable
+	 * either because there is no enforcement of the CPUID(0xb)
+	 * provided "initial" APIC ID to be the same as the real APIC ID.
+	 *
+	 * Intel-TDX has a secure RDMSR hypercall, but that needs to be
+	 * implemented separately in the low-level startup ASM code.
+	 */
+	if (cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT)) {
+		pr_info("Parallel CPU startup disabled due to guest state encryption\n");
+		return false;
+	}
+
+	smpboot_control = STARTUP_READ_APICID;
+	pr_debug("Parallel CPU startup enabled: 0x%08x\n", smpboot_control);
+	return true;
+}
+#endif
+
 /*
  * Prepare for SMP bootup.
  * @max_cpus: configured maximum number of CPUs, It is a legacy parameter


Thread overview: 89+ messages
2023-05-08 19:43 [patch v3 00/36] cpu/hotplug, x86: Reworked parallel CPU bringup Thomas Gleixner
2023-05-08 19:43 ` [patch v3 01/36] [patch V2 01/38] x86/smpboot: Cleanup topology_phys_to_logical_pkg()/die() Thomas Gleixner
2023-05-08 19:43 ` [patch v3 02/36] cpu/hotplug: Mark arch_disable_smp_support() and bringup_nonboot_cpus() __init Thomas Gleixner
2023-05-08 19:43 ` [patch v3 03/36] x86/smpboot: Avoid pointless delay calibration if TSC is synchronized Thomas Gleixner
2023-05-08 19:43 ` [patch v3 04/36] x86/smpboot: Rename start_cpu0() to soft_restart_cpu() Thomas Gleixner
2023-05-08 19:43 ` [patch v3 05/36] x86/topology: Remove CPU0 hotplug option Thomas Gleixner
2023-05-08 19:43 ` [patch v3 06/36] x86/smpboot: Remove the CPU0 hotplug kludge Thomas Gleixner
2023-05-08 19:43 ` [patch v3 07/36] x86/smpboot: Restrict soft_restart_cpu() to SEV Thomas Gleixner
2023-05-08 19:43 ` [patch v3 08/36] x86/smpboot: Split up native_cpu_up() into separate phases and document them Thomas Gleixner
2023-05-09 10:04   ` Peter Zijlstra
2023-05-09 12:07     ` Thomas Gleixner
2023-05-09 17:59       ` Thomas Gleixner
2023-05-09 20:11     ` Thomas Gleixner
2023-05-10  8:39       ` Peter Zijlstra
2023-05-09 10:19   ` Peter Zijlstra
2023-05-09 12:08     ` Thomas Gleixner
2023-05-09 18:03     ` Thomas Gleixner
2023-05-09 10:31   ` Peter Zijlstra
2023-05-09 12:09     ` Thomas Gleixner
2023-05-08 19:43 ` [patch v3 09/36] x86/smpboot: Get rid of cpu_init_secondary() Thomas Gleixner
2023-05-08 19:43 ` [patch v3 10/36] [patch V2 10/38] x86/cpu/cacheinfo: Remove cpu_callout_mask dependency Thomas Gleixner
2023-05-08 19:43 ` [patch v3 11/36] [patch V2 11/38] x86/smpboot: Move synchronization masks to SMP boot code Thomas Gleixner
2023-05-08 19:43 ` [patch v3 12/36] [patch V2 12/38] x86/smpboot: Make TSC synchronization function call based Thomas Gleixner
2023-05-08 19:43 ` [patch v3 13/36] x86/smpboot: Remove cpu_callin_mask Thomas Gleixner
2023-05-09 10:49   ` Peter Zijlstra
2023-05-09 12:09     ` Thomas Gleixner
2023-05-08 19:43 ` [patch v3 14/36] [patch V2 14/38] cpu/hotplug: Rework sparse_irq locking in bringup_cpu() Thomas Gleixner
2023-05-09 11:02   ` Peter Zijlstra
2023-05-09 12:10     ` Thomas Gleixner
2023-05-08 19:43 ` [patch v3 15/36] x86/smpboot: Remove wait for cpu_online() Thomas Gleixner
2023-05-08 19:43 ` [patch v3 16/36] x86/xen/smp_pv: Remove wait for CPU online Thomas Gleixner
2023-05-08 19:43 ` [patch v3 17/36] x86/xen/hvm: Get rid of DEAD_FROZEN handling Thomas Gleixner
2023-05-08 19:43 ` [patch v3 18/36] [patch V2 18/38] cpu/hotplug: Add CPU state tracking and synchronization Thomas Gleixner
2023-05-09 11:07   ` Peter Zijlstra
2023-05-09 11:35     ` Peter Zijlstra
2023-05-09 12:12     ` Thomas Gleixner
2023-05-08 19:43 ` [patch v3 19/36] x86/smpboot: Switch to hotplug core state synchronization Thomas Gleixner
2023-05-08 19:43 ` [patch v3 20/36] cpu/hotplug: Remove cpu_report_state() and related unused cruft Thomas Gleixner
2023-05-08 19:44 ` [patch v3 21/36] [patch V2 21/38] ARM: smp: Switch to hotplug core state synchronization Thomas Gleixner
2023-05-08 19:44 ` [patch v3 22/36] arm64: " Thomas Gleixner
2023-05-08 19:44 ` [patch v3 23/36] [patch V2 23/38] csky/smp: " Thomas Gleixner
2023-05-08 19:44 ` [patch v3 24/36] [patch V2 24/38] MIPS: SMP_CPS: " Thomas Gleixner
2023-05-08 19:44 ` [patch v3 25/36] parisc: " Thomas Gleixner
2023-05-08 19:44 ` [patch v3 26/36] riscv: " Thomas Gleixner
2023-05-08 19:44 ` [patch v3 27/36] cpu/hotplug: Remove unused state functions Thomas Gleixner
2023-05-08 19:44 ` [patch v3 28/36] cpu/hotplug: Reset task stack state in _cpu_up() Thomas Gleixner
2023-05-08 19:44 ` [patch v3 29/36] [patch V2 29/38] cpu/hotplug: Provide a split up CPUHP_BRINGUP mechanism Thomas Gleixner
2023-05-08 19:44 ` [patch v3 30/36] x86/smpboot: Enable split CPU startup Thomas Gleixner
2023-05-08 19:44 ` [patch v3 31/36] x86/apic: Provide cpu_primary_thread mask Thomas Gleixner
2023-05-24 20:48   ` Kirill A. Shutemov
2023-05-26 10:14     ` Thomas Gleixner
2023-05-27 13:40       ` Thomas Gleixner
2023-05-29  2:39         ` Kirill A. Shutemov
2023-05-29 19:27           ` Thomas Gleixner
2023-05-29 20:31             ` Kirill A. Shutemov
2023-05-30  0:54               ` Kirill A. Shutemov
2023-05-30  9:26                 ` Thomas Gleixner
2023-05-30 10:34                   ` Thomas Gleixner
2023-05-30 11:37                     ` Kirill A. Shutemov
2023-05-30 12:09                     ` [patch] x86/smpboot: Disable parallel bootup if cc_vendor != NONE Thomas Gleixner
2023-05-30 12:29                       ` Kirill A. Shutemov
2023-05-30 16:00                         ` Thomas Gleixner
2023-05-30 16:56                           ` Sean Christopherson
2023-05-30 19:51                             ` Thomas Gleixner
2023-05-30 20:03                               ` Tom Lendacky
2023-05-30 20:39                                 ` Thomas Gleixner
2023-05-30 21:13                                   ` Tom Lendacky
2023-05-31  7:44                                     ` [patch] x86/smpboot: Fix the parallel bringup decision Thomas Gleixner
2023-05-31 11:07                                       ` Kirill A. Shutemov
2023-05-31 13:58                                       ` Tom Lendacky
2023-05-30 17:02                           ` [patch] x86/smpboot: Disable parallel bootup if cc_vendor != NONE Kirill A. Shutemov
2023-05-30 17:31                             ` Sean Christopherson
2023-05-30  9:26               ` [patch v3 31/36] x86/apic: Provide cpu_primary_thread mask Thomas Gleixner
2023-05-30 10:46               ` [patch] x86/realmode: Make stack lock work in trampoline_compat() Thomas Gleixner
2023-05-30 11:12                 ` Kirill A. Shutemov
2023-06-08 23:34                 ` Yunhong Jiang
2023-06-08 23:57                   ` Andrew Cooper
2023-06-09  0:22                     ` Yunhong Jiang
2023-06-10 19:50                     ` David Laight
2023-06-10 22:51                       ` 'Andrew Cooper'
2023-05-08 19:44 ` [patch v3 32/36] cpu/hotplug: Allow "parallel" bringup up to CPUHP_BP_KICK_AP_STATE Thomas Gleixner
2023-05-08 19:44 ` [patch v3 33/36] x86/apic: Save the APIC virtual base address Thomas Gleixner
2023-05-09  9:20   ` Sergey Shtylyov
2023-05-08 19:44 ` [patch v3 34/36] x86/smpboot: Implement a bit spinlock to protect the realmode stack Thomas Gleixner
2023-05-09 13:13   ` Peter Zijlstra
2023-05-09 13:47     ` Thomas Gleixner
2023-05-08 19:44 ` [patch v3 35/36] x86/smpboot: Support parallel startup of secondary CPUs Thomas Gleixner
2023-05-09 13:57   ` Peter Zijlstra
2023-05-08 19:44 ` Thomas Gleixner [this message]
