linux-kernel.vger.kernel.org archive mirror
* [patch 00/18] x86/apic: Support for IPI shorthands
@ 2019-07-03 10:54 Thomas Gleixner
  2019-07-03 10:54 ` [patch 01/18] x86/apic: Invoke perf_events_lapic_init() after enabling APIC Thomas Gleixner
                   ` (17 more replies)
  0 siblings, 18 replies; 23+ messages in thread
From: Thomas Gleixner @ 2019-07-03 10:54 UTC (permalink / raw)
  To: LKML; +Cc: x86, Nadav Amit, Ricardo Neri, Stephane Eranian, Feng Tang

The recent discussion about using the HPET as an NMI watchdog made me look
into IPI shorthand support. Nadav also wanted to look into shorthands to
speed up certain TLB operations.

The support for IPI shorthands is rather limited right now and was
basically rendered useless by making it depend on CONFIG_HOTPLUG_CPU=n.

The reason is that shorthand IPIs are broadcast to all APICs, so if not all
present CPUs have been brought up this can end up with a similar effect as
the dreaded MCE broadcast.

But this can be handled smarter than just preventing shorthands when CPU
hotplug is enabled. The kernel already deals with the MCE broadcast issue
for the 'nosmt' case: it brings up all present CPUs and then shuts down the
SMT siblings right away after they have done the basic initialization and
set CR4.MCE.

The core CPU hotplug code keeps track of that information already, so it
can be used to decide whether IPI shorthands can be used safely or not.

If all present CPUs have been brought up at least once, it's safe to switch
to IPI shorthand mode. The switch-over is done with a static key and can be
prevented completely with the existing (so far 32-bit only) command line
option.

As an offlined CPU still receives IPIs, the offline code is changed to soft
disable the local APIC so that the offline CPU is not bothered by shorthand
based IPIs. In the soft disabled state the APIC still handles NMI, INIT and
SIPI, so onlining works as before.

To support NMI based shorthand IPIs, the NMI handler gets a new check right
at the beginning of the handler code which lets the handler ignore the NMI
on an offline CPU instead of calling through the whole spaghetti maze of
NMI handling.

Soft disabling the local APIC on the offlined CPU unearthed a KVM APIC
emulation issue which is only relevant for CPU0 hotplug testing. The fix is
already in the KVM tree, but there is no need to have this dependency here
(the 0-day folks are aware of it).

The APIC setup function also has a few minor issues, which are addressed in
this series.

Part of the series is also a consolidation of the APIC code, which was
necessary to avoid spreading all the shorthand implementation details to
header files etc.

It survived testing on a range of different machines, including NMI
shorthand IPIs. Aside from the KVM APIC issue, which is only relevant in
combination with CPU0 hotplug testing, there are no known side effects.

The series is also available from git:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git WIP.x86/ipi

Thanks,

	tglx

8<-------------------
 arch/x86/include/asm/apic_flat_64.h     |    8 -
 arch/x86/include/asm/ipi.h              |  109 ----------------------
 arch/x86/kernel/apic/x2apic.h           |    9 -
 b/arch/x86/include/asm/apic.h           |    3 
 b/arch/x86/include/asm/bugs.h           |    2 
 b/arch/x86/include/asm/processor.h      |    2 
 b/arch/x86/kernel/apic/apic.c           |  155 ++++++++++++++++++++------------
 b/arch/x86/kernel/apic/apic_flat_64.c   |   41 +++-----
 b/arch/x86/kernel/apic/apic_noop.c      |   18 ---
 b/arch/x86/kernel/apic/apic_numachip.c  |    8 -
 b/arch/x86/kernel/apic/bigsmp_32.c      |    9 -
 b/arch/x86/kernel/apic/ipi.c            |  103 +++++++++++++++------
 b/arch/x86/kernel/apic/local.h          |   68 ++++++++++++++
 b/arch/x86/kernel/apic/probe_32.c       |   41 --------
 b/arch/x86/kernel/apic/probe_64.c       |   16 ---
 b/arch/x86/kernel/apic/x2apic_cluster.c |   26 +++--
 b/arch/x86/kernel/apic/x2apic_phys.c    |   29 +++--
 b/arch/x86/kernel/apic/x2apic_uv_x.c    |   28 -----
 b/arch/x86/kernel/cpu/bugs.c            |    2 
 b/arch/x86/kernel/cpu/common.c          |   11 ++
 b/arch/x86/kernel/nmi.c                 |    3 
 b/arch/x86/kernel/smpboot.c             |   13 ++
 b/include/linux/cpumask.h               |    2 
 b/kernel/cpu.c                          |   11 +-
 24 files changed, 354 insertions(+), 363 deletions(-)




^ permalink raw reply	[flat|nested] 23+ messages in thread

* [patch 01/18] x86/apic: Invoke perf_events_lapic_init() after enabling APIC
  2019-07-03 10:54 [patch 00/18] x86/apic: Support for IPI shorthands Thomas Gleixner
@ 2019-07-03 10:54 ` Thomas Gleixner
  2019-07-03 10:54 ` [patch 02/18] x86/apic: Soft disable APIC before initializing it Thomas Gleixner
                   ` (16 subsequent siblings)
  17 siblings, 0 replies; 23+ messages in thread
From: Thomas Gleixner @ 2019-07-03 10:54 UTC (permalink / raw)
  To: LKML; +Cc: x86, Nadav Amit, Ricardo Neri, Stephane Eranian, Feng Tang

If the APIC is soft disabled then unmasking an LVT entry does not work and
the write is ignored. perf_events_lapic_init() tries to do so.

Move the invocation after the point where the APIC has been enabled.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/apic/apic.c |    5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

--- a/arch/x86/kernel/apic/apic.c
+++ b/arch/x86/kernel/apic/apic.c
@@ -1517,7 +1517,6 @@ static void setup_local_APIC(void)
 	int logical_apicid, ldr_apicid;
 #endif
 
-
 	if (disable_apic) {
 		disable_ioapic_support();
 		return;
@@ -1532,8 +1531,6 @@ static void setup_local_APIC(void)
 		apic_write(APIC_ESR, 0);
 	}
 #endif
-	perf_events_lapic_init();
-
 	/*
 	 * Double-check whether this APIC is really registered.
 	 * This is meaningless in clustered apic mode, so we skip it.
@@ -1614,6 +1611,8 @@ static void setup_local_APIC(void)
 	value |= SPURIOUS_APIC_VECTOR;
 	apic_write(APIC_SPIV, value);
 
+	perf_events_lapic_init();
+
 	/*
 	 * Set up LVT0, LVT1:
 	 *




* [patch 02/18] x86/apic: Soft disable APIC before initializing it
  2019-07-03 10:54 [patch 00/18] x86/apic: Support for IPI shorthands Thomas Gleixner
  2019-07-03 10:54 ` [patch 01/18] x86/apic: Invoke perf_events_lapic_init() after enabling APIC Thomas Gleixner
@ 2019-07-03 10:54 ` Thomas Gleixner
  2019-07-03 10:54 ` [patch 03/18] x86/apic: Make apic_pending_intr_clear() more robust Thomas Gleixner
                   ` (15 subsequent siblings)
  17 siblings, 0 replies; 23+ messages in thread
From: Thomas Gleixner @ 2019-07-03 10:54 UTC (permalink / raw)
  To: LKML; +Cc: x86, Nadav Amit, Ricardo Neri, Stephane Eranian, Feng Tang

If the APIC was already enabled on entry to setup_local_APIC(), then soft
disabling it via the SPIV register makes a lot of sense.

That masks all LVT entries and brings the APIC into a well defined state.

Otherwise, previously enabled LVTs which are not touched in the setup
function stay unmasked and might surprise the just booting kernel.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/apic/apic.c |    8 ++++++++
 1 file changed, 8 insertions(+)

--- a/arch/x86/kernel/apic/apic.c
+++ b/arch/x86/kernel/apic/apic.c
@@ -1493,6 +1493,14 @@ static void setup_local_APIC(void)
 		return;
 	}
 
+	/*
+	 * If this comes from kexec/kcrash the APIC might be enabled in
+	 * SPIV. Soft disable it before doing further initialization.
+	 */
+	value = apic_read(APIC_SPIV);
+	value &= ~APIC_SPIV_APIC_ENABLED;
+	apic_write(APIC_SPIV, value);
+
 #ifdef CONFIG_X86_32
 	/* Pound the ESR really hard over the head with a big hammer - mbligh */
 	if (lapic_is_integrated() && apic->disable_esr) {




* [patch 03/18] x86/apic: Make apic_pending_intr_clear() more robust
  2019-07-03 10:54 [patch 00/18] x86/apic: Support for IPI shorthands Thomas Gleixner
  2019-07-03 10:54 ` [patch 01/18] x86/apic: Invoke perf_events_lapic_init() after enabling APIC Thomas Gleixner
  2019-07-03 10:54 ` [patch 02/18] x86/apic: Soft disable APIC before initializing it Thomas Gleixner
@ 2019-07-03 10:54 ` Thomas Gleixner
  2019-07-03 10:54 ` [patch 04/18] x86/apic: Move IPI inlines into ipi.c Thomas Gleixner
                   ` (14 subsequent siblings)
  17 siblings, 0 replies; 23+ messages in thread
From: Thomas Gleixner @ 2019-07-03 10:54 UTC (permalink / raw)
  To: LKML; +Cc: x86, Nadav Amit, Ricardo Neri, Stephane Eranian, Feng Tang

In the course of developing shorthand based IPI support, issues were
observed with the function which tries to clear potentially pending ISR
bits in the local APIC.

  1) 0-day testing triggered the WARN_ON() in apic_pending_intr_clear().

     This warning is emitted when the function fails to clear pending ISR
     bits or observes pending IRR bits which are not delivered to the CPU
     after the stale ISR bit(s) are ACK'ed.

     Unfortunately the function only emits a WARN_ON() and fails to dump
     the IRR/ISR content. That's useless for debugging.

     Feng added spot-on debug printks which revealed that the stale IRR
     bit belonged to the APIC timer interrupt vector, but adding ad hoc
     debug code does not help with sporadic failures in the field.

     Rework the loop so the full IRR/ISR contents are saved and on failure
     dumped.

  2) The loop termination logic is interesting at best.

     If the machine has no TSC or cpu_khz is not known yet, it tries 1
     million times to ack stale IRR/ISR bits. What?

     With TSC it uses the TSC to calculate the loop termination. It takes a
     timestamp at entry and terminates the loop when:

     	  (rdtsc() - start_timestamp) >= (cpu_khz << 10)

     That's roughly one second.

     Both methods are problematic. The APIC has 256 vectors, which means
     that in theory max. 256 IRR/ISR bits can be set. In practice this is
     impossible as the first 32 vectors are reserved and not affected and
     the chance that more than a few bits are set is close to zero.

     With the pure loop based approach the 1 million retries are complete
     overkill.

     With TSC this can terminate too early in a guest which is running on a
     heavily loaded host even with only a couple of IRR/ISR bits set. The
     reason is that after acknowledging the highest priority ISR bit,
     pending IRRs must get serviced first before the next round of
     acknowledge can take place as the APIC (real and virtualized) does not
     honour EOI without a preceding interrupt on the CPU. And every APIC
     read/write takes a VMEXIT if the APIC is virtualized. While trying to
     reproduce the issue 0-day reported it was observed that the guest was
     scheduled out long enough under heavy load that it terminated after 8
     iterations.

     Make the loop terminate after 512 iterations. That's plenty in any
     case and does not take endless time to complete.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/apic/apic.c |  111 +++++++++++++++++++++++++-------------------
 1 file changed, 65 insertions(+), 46 deletions(-)

--- a/arch/x86/kernel/apic/apic.c
+++ b/arch/x86/kernel/apic/apic.c
@@ -1453,54 +1453,72 @@ static void lapic_setup_esr(void)
 			oldvalue, value);
 }
 
+#define APIC_IR_REGS		APIC_ISR_NR
+#define APIC_IR_BITS		(APIC_IR_REGS * 32)
+#define APIC_IR_MAPSIZE		(APIC_IR_BITS / BITS_PER_LONG)
+
+union apic_ir {
+	unsigned long	map[APIC_IR_MAPSIZE];
+	u32		regs[APIC_IR_REGS];
+};
+
+static bool apic_check_and_ack(union apic_ir *irr, union apic_ir *isr)
+{
+	int i, bit;
+
+	/* Read the IRRs */
+	for (i = 0; i < APIC_IR_REGS; i++)
+		irr->regs[i] = apic_read(APIC_IRR + i * 0x10);
+
+	/* Read the ISRs */
+	for (i = 0; i < APIC_IR_REGS; i++)
+		isr->regs[i] = apic_read(APIC_ISR + i * 0x10);
+
+	/*
+	 * If the ISR map is not empty, ACK the APIC and run another round
+	 * to verify whether a pending IRR has been unblocked and turned
+	 * into an ISR.
+	 */
+	if (!bitmap_empty(isr->map, APIC_IR_BITS)) {
+		/*
+		 * There can be multiple ISR bits set when a high priority
+		 * interrupt preempted a lower priority one. Issue an ACK
+		 * per set bit.
+		 */
+		for_each_set_bit(bit, isr->map, APIC_IR_BITS)
+			ack_APIC_irq();
+		return true;
+	}
+
+	return !bitmap_empty(irr->map, APIC_IR_BITS);
+}
+
+/*
+ * After a crash, we no longer service the interrupts and a pending
+ * interrupt from previous kernel might still have ISR bit set.
+ *
+ * Most probably by now the CPU has serviced that pending interrupt and it
+ * might not have done the ack_APIC_irq() because it thought the interrupt
+ * came from i8259 as ExtInt. LAPIC did not get EOI so it does not clear
+ * the ISR bit and the CPU thinks it has already serviced the interrupt. Hence
+ * a vector might get locked. It was noticed for timer irq (vector
+ * 0x31). Issue an extra EOI to clear ISR.
+ *
+ * If there are pending IRR bits they turn into ISR bits after a higher
+ * priority ISR bit has been acked.
+ */
 static void apic_pending_intr_clear(void)
 {
-	long long max_loops = cpu_khz ? cpu_khz : 1000000;
-	unsigned long long tsc = 0, ntsc;
-	unsigned int queued;
-	unsigned long value;
-	int i, j, acked = 0;
-
-	if (boot_cpu_has(X86_FEATURE_TSC))
-		tsc = rdtsc();
-	/*
-	 * After a crash, we no longer service the interrupts and a pending
-	 * interrupt from previous kernel might still have ISR bit set.
-	 *
-	 * Most probably by now CPU has serviced that pending interrupt and
-	 * it might not have done the ack_APIC_irq() because it thought,
-	 * interrupt came from i8259 as ExtInt. LAPIC did not get EOI so it
-	 * does not clear the ISR bit and cpu thinks it has already serivced
-	 * the interrupt. Hence a vector might get locked. It was noticed
-	 * for timer irq (vector 0x31). Issue an extra EOI to clear ISR.
-	 */
-	do {
-		queued = 0;
-		for (i = APIC_ISR_NR - 1; i >= 0; i--)
-			queued |= apic_read(APIC_IRR + i*0x10);
-
-		for (i = APIC_ISR_NR - 1; i >= 0; i--) {
-			value = apic_read(APIC_ISR + i*0x10);
-			for_each_set_bit(j, &value, 32) {
-				ack_APIC_irq();
-				acked++;
-			}
-		}
-		if (acked > 256) {
-			pr_err("LAPIC pending interrupts after %d EOI\n", acked);
-			break;
-		}
-		if (queued) {
-			if (boot_cpu_has(X86_FEATURE_TSC) && cpu_khz) {
-				ntsc = rdtsc();
-				max_loops = (long long)cpu_khz << 10;
-				max_loops -= ntsc - tsc;
-			} else {
-				max_loops--;
-			}
-		}
-	} while (queued && max_loops > 0);
-	WARN_ON(max_loops <= 0);
+	union apic_ir irr, isr;
+	unsigned int i;
+
+	/* 512 loops are way oversized and give the APIC a chance to obey. */
+	for (i = 0; i < 512; i++) {
+		if (!apic_check_and_ack(&irr, &isr))
+			return;
+	}
+	/* Dump the IRR/ISR content if that failed */
+	pr_warn("APIC: Stale IRR: %256pb ISR: %256pb\n", irr.map, isr.map);
 }
 
 /**
@@ -1573,6 +1591,7 @@ static void setup_local_APIC(void)
 	value &= ~APIC_TPRI_MASK;
 	apic_write(APIC_TASKPRI, value);
 
+	/* Clear potentially stale ISR/IRR bits */
 	apic_pending_intr_clear();
 
 	/*




* [patch 04/18] x86/apic: Move IPI inlines into ipi.c
  2019-07-03 10:54 [patch 00/18] x86/apic: Support for IPI shorthands Thomas Gleixner
                   ` (2 preceding siblings ...)
  2019-07-03 10:54 ` [patch 03/18] x86/apic: Make apic_pending_intr_clear() more robust Thomas Gleixner
@ 2019-07-03 10:54 ` Thomas Gleixner
  2019-07-03 10:54 ` [patch 05/18] x86/apic: Cleanup the include maze Thomas Gleixner
                   ` (13 subsequent siblings)
  17 siblings, 0 replies; 23+ messages in thread
From: Thomas Gleixner @ 2019-07-03 10:54 UTC (permalink / raw)
  To: LKML; +Cc: x86, Nadav Amit, Ricardo Neri, Stephane Eranian, Feng Tang

No point in having them in a header file.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/ipi.h |   19 -------------------
 arch/x86/kernel/apic/ipi.c |   16 +++++++++++++---
 2 files changed, 13 insertions(+), 22 deletions(-)

--- a/arch/x86/include/asm/ipi.h
+++ b/arch/x86/include/asm/ipi.h
@@ -71,27 +71,8 @@ extern void default_send_IPI_mask_sequen
 extern void default_send_IPI_mask_allbutself_phys(const struct cpumask *mask,
 							 int vector);
 
-/* Avoid include hell */
-#define NMI_VECTOR 0x02
-
 extern int no_broadcast;
 
-static inline void __default_local_send_IPI_allbutself(int vector)
-{
-	if (no_broadcast || vector == NMI_VECTOR)
-		apic->send_IPI_mask_allbutself(cpu_online_mask, vector);
-	else
-		__default_send_IPI_shortcut(APIC_DEST_ALLBUT, vector, apic->dest_logical);
-}
-
-static inline void __default_local_send_IPI_all(int vector)
-{
-	if (no_broadcast || vector == NMI_VECTOR)
-		apic->send_IPI_mask(cpu_online_mask, vector);
-	else
-		__default_send_IPI_shortcut(APIC_DEST_ALLINC, vector, apic->dest_logical);
-}
-
 #ifdef CONFIG_X86_32
 extern void default_send_IPI_mask_sequence_logical(const struct cpumask *mask,
 							 int vector);
--- a/arch/x86/kernel/apic/ipi.c
+++ b/arch/x86/kernel/apic/ipi.c
@@ -198,15 +198,25 @@ void default_send_IPI_allbutself(int vec
 	 * if there are no other CPUs in the system then we get an APIC send
 	 * error if we try to broadcast, thus avoid sending IPIs in this case.
 	 */
-	if (!(num_online_cpus() > 1))
+	if (num_online_cpus() < 2)
 		return;
 
-	__default_local_send_IPI_allbutself(vector);
+	if (no_broadcast || vector == NMI_VECTOR) {
+		apic->send_IPI_mask_allbutself(cpu_online_mask, vector);
+	} else {
+		__default_send_IPI_shortcut(APIC_DEST_ALLBUT, vector,
+					    apic->dest_logical);
+	}
 }
 
 void default_send_IPI_all(int vector)
 {
-	__default_local_send_IPI_all(vector);
+	if (no_broadcast || vector == NMI_VECTOR) {
+		apic->send_IPI_mask(cpu_online_mask, vector);
+	} else {
+		__default_send_IPI_shortcut(APIC_DEST_ALLINC, vector,
+					    apic->dest_logical);
+	}
 }
 
 void default_send_IPI_self(int vector)




* [patch 05/18] x86/apic: Cleanup the include maze
  2019-07-03 10:54 [patch 00/18] x86/apic: Support for IPI shorthands Thomas Gleixner
                   ` (3 preceding siblings ...)
  2019-07-03 10:54 ` [patch 04/18] x86/apic: Move IPI inlines into ipi.c Thomas Gleixner
@ 2019-07-03 10:54 ` Thomas Gleixner
  2019-07-03 10:54 ` [patch 06/18] x86/apic: Move ipi header into apic directory Thomas Gleixner
                   ` (12 subsequent siblings)
  17 siblings, 0 replies; 23+ messages in thread
From: Thomas Gleixner @ 2019-07-03 10:54 UTC (permalink / raw)
  To: LKML; +Cc: x86, Nadav Amit, Ricardo Neri, Stephane Eranian, Feng Tang

All of these APIC files include the world and some more. Remove the
unneeded cruft.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/apic/apic_flat_64.c   |   15 ++++-----------
 arch/x86/kernel/apic/apic_noop.c      |   18 +-----------------
 arch/x86/kernel/apic/apic_numachip.c  |    6 +++---
 arch/x86/kernel/apic/ipi.c            |   15 +--------------
 arch/x86/kernel/apic/probe_32.c       |   18 ++----------------
 arch/x86/kernel/apic/probe_64.c       |   11 -----------
 arch/x86/kernel/apic/x2apic_cluster.c |   14 ++++++--------
 arch/x86/kernel/apic/x2apic_phys.c    |    9 +++------
 arch/x86/kernel/apic/x2apic_uv_x.c    |   28 ++++------------------------
 9 files changed, 24 insertions(+), 110 deletions(-)

--- a/arch/x86/kernel/apic/apic_flat_64.c
+++ b/arch/x86/kernel/apic/apic_flat_64.c
@@ -8,21 +8,14 @@
  * Martin Bligh, Andi Kleen, James Bottomley, John Stultz, and
  * James Cleverdon.
  */
-#include <linux/acpi.h>
-#include <linux/errno.h>
-#include <linux/threads.h>
 #include <linux/cpumask.h>
-#include <linux/string.h>
-#include <linux/kernel.h>
-#include <linux/ctype.h>
-#include <linux/hardirq.h>
 #include <linux/export.h>
+#include <linux/acpi.h>
 
-#include <asm/smp.h>
-#include <asm/ipi.h>
-#include <asm/apic.h>
-#include <asm/apic_flat_64.h>
 #include <asm/jailhouse_para.h>
+#include <asm/apic_flat_64.h>
+#include <asm/apic.h>
+#include <asm/ipi.h>
 
 static struct apic apic_physflat;
 static struct apic apic_flat;
--- a/arch/x86/kernel/apic/apic_noop.c
+++ b/arch/x86/kernel/apic/apic_noop.c
@@ -9,25 +9,9 @@
  * to not uglify the caller's code and allow to call (some) apic routines
  * like self-ipi, etc...
  */
-
-#include <linux/threads.h>
 #include <linux/cpumask.h>
-#include <linux/string.h>
-#include <linux/kernel.h>
-#include <linux/ctype.h>
-#include <linux/errno.h>
-#include <asm/fixmap.h>
-#include <asm/mpspec.h>
-#include <asm/apicdef.h>
-#include <asm/apic.h>
-#include <asm/setup.h>
 
-#include <linux/smp.h>
-#include <asm/ipi.h>
-
-#include <linux/interrupt.h>
-#include <asm/acpi.h>
-#include <asm/e820/api.h>
+#include <asm/apic.h>
 
 static void noop_init_apic_ldr(void) { }
 static void noop_send_IPI(int cpu, int vector) { }
--- a/arch/x86/kernel/apic/apic_numachip.c
+++ b/arch/x86/kernel/apic/apic_numachip.c
@@ -10,15 +10,15 @@
  * Send feedback to <support@numascale.com>
  *
  */
-
+#include <linux/types.h>
 #include <linux/init.h>
 
 #include <asm/numachip/numachip.h>
 #include <asm/numachip/numachip_csr.h>
-#include <asm/ipi.h>
+
 #include <asm/apic_flat_64.h>
 #include <asm/pgtable.h>
-#include <asm/pci_x86.h>
+#include <asm/ipi.h>
 
 u8 numachip_system __read_mostly;
 static const struct apic apic_numachip1;
--- a/arch/x86/kernel/apic/ipi.c
+++ b/arch/x86/kernel/apic/ipi.c
@@ -1,21 +1,8 @@
 // SPDX-License-Identifier: GPL-2.0
-#include <linux/cpumask.h>
-#include <linux/interrupt.h>
 
-#include <linux/mm.h>
-#include <linux/delay.h>
-#include <linux/spinlock.h>
-#include <linux/kernel_stat.h>
-#include <linux/mc146818rtc.h>
-#include <linux/cache.h>
-#include <linux/cpu.h>
+#include <linux/cpumask.h>
 
-#include <asm/smp.h>
-#include <asm/mtrr.h>
-#include <asm/tlbflush.h>
-#include <asm/mmu_context.h>
 #include <asm/apic.h>
-#include <asm/proto.h>
 #include <asm/ipi.h>
 
 void __default_send_IPI_shortcut(unsigned int shortcut, int vector, unsigned int dest)
--- a/arch/x86/kernel/apic/probe_32.c
+++ b/arch/x86/kernel/apic/probe_32.c
@@ -6,26 +6,12 @@
  *
  * Generic x86 APIC driver probe layer.
  */
-#include <linux/threads.h>
-#include <linux/cpumask.h>
 #include <linux/export.h>
-#include <linux/string.h>
-#include <linux/kernel.h>
-#include <linux/ctype.h>
-#include <linux/init.h>
 #include <linux/errno.h>
-#include <asm/fixmap.h>
-#include <asm/mpspec.h>
-#include <asm/apicdef.h>
-#include <asm/apic.h>
-#include <asm/setup.h>
-
-#include <linux/smp.h>
-#include <asm/ipi.h>
 
-#include <linux/interrupt.h>
+#include <asm/apic.h>
 #include <asm/acpi.h>
-#include <asm/e820/api.h>
+#include <asm/ipi.h>
 
 #ifdef CONFIG_HOTPLUG_CPU
 #define DEFAULT_SEND_IPI	(1)
--- a/arch/x86/kernel/apic/probe_64.c
+++ b/arch/x86/kernel/apic/probe_64.c
@@ -8,19 +8,8 @@
  * Martin Bligh, Andi Kleen, James Bottomley, John Stultz, and
  * James Cleverdon.
  */
-#include <linux/threads.h>
-#include <linux/cpumask.h>
-#include <linux/string.h>
-#include <linux/init.h>
-#include <linux/kernel.h>
-#include <linux/ctype.h>
-#include <linux/hardirq.h>
-#include <linux/dmar.h>
-
-#include <asm/smp.h>
 #include <asm/apic.h>
 #include <asm/ipi.h>
-#include <asm/setup.h>
 
 /*
  * Check the APIC IDs in bios_cpu_apicid and choose the APIC mode.
--- a/arch/x86/kernel/apic/x2apic_cluster.c
+++ b/arch/x86/kernel/apic/x2apic_cluster.c
@@ -1,14 +1,12 @@
 // SPDX-License-Identifier: GPL-2.0
-#include <linux/threads.h>
+
+#include <linux/cpuhotplug.h>
 #include <linux/cpumask.h>
-#include <linux/string.h>
-#include <linux/kernel.h>
-#include <linux/ctype.h>
-#include <linux/dmar.h>
-#include <linux/irq.h>
-#include <linux/cpu.h>
+#include <linux/slab.h>
+#include <linux/mm.h>
+
+#include <asm/apic.h>
 
-#include <asm/smp.h>
 #include "x2apic.h"
 
 struct cluster_mask {
--- a/arch/x86/kernel/apic/x2apic_phys.c
+++ b/arch/x86/kernel/apic/x2apic_phys.c
@@ -1,13 +1,10 @@
 // SPDX-License-Identifier: GPL-2.0
-#include <linux/threads.h>
+
 #include <linux/cpumask.h>
-#include <linux/string.h>
-#include <linux/kernel.h>
-#include <linux/ctype.h>
-#include <linux/dmar.h>
+#include <linux/acpi.h>
 
-#include <asm/smp.h>
 #include <asm/ipi.h>
+
 #include "x2apic.h"
 
 int x2apic_phys;
--- a/arch/x86/kernel/apic/x2apic_uv_x.c
+++ b/arch/x86/kernel/apic/x2apic_uv_x.c
@@ -7,40 +7,20 @@
  *
  * Copyright (C) 2007-2014 Silicon Graphics, Inc. All rights reserved.
  */
+#include <linux/crash_dump.h>
+#include <linux/cpuhotplug.h>
 #include <linux/cpumask.h>
-#include <linux/hardirq.h>
 #include <linux/proc_fs.h>
-#include <linux/threads.h>
-#include <linux/kernel.h>
+#include <linux/memory.h>
 #include <linux/export.h>
-#include <linux/string.h>
-#include <linux/ctype.h>
-#include <linux/sched.h>
-#include <linux/timer.h>
-#include <linux/slab.h>
-#include <linux/cpu.h>
-#include <linux/init.h>
-#include <linux/io.h>
 #include <linux/pci.h>
-#include <linux/kdebug.h>
-#include <linux/delay.h>
-#include <linux/crash_dump.h>
-#include <linux/reboot.h>
-#include <linux/memory.h>
-#include <linux/numa.h>
 
+#include <asm/e820/api.h>
 #include <asm/uv/uv_mmrs.h>
 #include <asm/uv/uv_hub.h>
-#include <asm/current.h>
-#include <asm/pgtable.h>
 #include <asm/uv/bios.h>
 #include <asm/uv/uv.h>
 #include <asm/apic.h>
-#include <asm/e820/api.h>
-#include <asm/ipi.h>
-#include <asm/smp.h>
-#include <asm/x86_init.h>
-#include <asm/nmi.h>
 
 DEFINE_PER_CPU(int, x2apic_extra_bits);
 




* [patch 06/18] x86/apic: Move ipi header into apic directory
  2019-07-03 10:54 [patch 00/18] x86/apic: Support for IPI shorthands Thomas Gleixner
                   ` (4 preceding siblings ...)
  2019-07-03 10:54 ` [patch 05/18] x86/apic: Cleanup the include maze Thomas Gleixner
@ 2019-07-03 10:54 ` Thomas Gleixner
  2019-07-03 10:54 ` [patch 07/18] x86/apic: Move apic_flat_64 " Thomas Gleixner
                   ` (11 subsequent siblings)
  17 siblings, 0 replies; 23+ messages in thread
From: Thomas Gleixner @ 2019-07-03 10:54 UTC (permalink / raw)
  To: LKML; +Cc: x86, Nadav Amit, Ricardo Neri, Stephane Eranian, Feng Tang

Only used locally.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/ipi.h           |   90 -----------------------------------
 arch/x86/kernel/apic/apic_flat_64.c  |    3 -
 arch/x86/kernel/apic/apic_numachip.c |    3 -
 arch/x86/kernel/apic/bigsmp_32.c     |    9 ---
 arch/x86/kernel/apic/ipi.c           |    3 -
 arch/x86/kernel/apic/ipi.h           |   90 +++++++++++++++++++++++++++++++++++
 arch/x86/kernel/apic/probe_32.c      |    3 -
 arch/x86/kernel/apic/probe_64.c      |    3 -
 arch/x86/kernel/apic/x2apic_phys.c   |    3 -
 9 files changed, 103 insertions(+), 104 deletions(-)

--- a/arch/x86/include/asm/ipi.h
+++ /dev/null
@@ -1,90 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-#ifndef _ASM_X86_IPI_H
-#define _ASM_X86_IPI_H
-
-#ifdef CONFIG_X86_LOCAL_APIC
-
-/*
- * Copyright 2004 James Cleverdon, IBM.
- *
- * Generic APIC InterProcessor Interrupt code.
- *
- * Moved to include file by James Cleverdon from
- * arch/x86-64/kernel/smp.c
- *
- * Copyrights from kernel/smp.c:
- *
- * (c) 1995 Alan Cox, Building #3 <alan@redhat.com>
- * (c) 1998-99, 2000 Ingo Molnar <mingo@redhat.com>
- * (c) 2002,2003 Andi Kleen, SuSE Labs.
- */
-
-#include <asm/hw_irq.h>
-#include <asm/apic.h>
-#include <asm/smp.h>
-
-/*
- * the following functions deal with sending IPIs between CPUs.
- *
- * We use 'broadcast', CPU->CPU IPIs and self-IPIs too.
- */
-
-static inline unsigned int __prepare_ICR(unsigned int shortcut, int vector,
-					 unsigned int dest)
-{
-	unsigned int icr = shortcut | dest;
-
-	switch (vector) {
-	default:
-		icr |= APIC_DM_FIXED | vector;
-		break;
-	case NMI_VECTOR:
-		icr |= APIC_DM_NMI;
-		break;
-	}
-	return icr;
-}
-
-static inline int __prepare_ICR2(unsigned int mask)
-{
-	return SET_APIC_DEST_FIELD(mask);
-}
-
-static inline void __xapic_wait_icr_idle(void)
-{
-	while (native_apic_mem_read(APIC_ICR) & APIC_ICR_BUSY)
-		cpu_relax();
-}
-
-void __default_send_IPI_shortcut(unsigned int shortcut, int vector, unsigned int dest);
-
-/*
- * This is used to send an IPI with no shorthand notation (the destination is
- * specified in bits 56 to 63 of the ICR).
- */
-void __default_send_IPI_dest_field(unsigned int mask, int vector, unsigned int dest);
-
-extern void default_send_IPI_single(int cpu, int vector);
-extern void default_send_IPI_single_phys(int cpu, int vector);
-extern void default_send_IPI_mask_sequence_phys(const struct cpumask *mask,
-						 int vector);
-extern void default_send_IPI_mask_allbutself_phys(const struct cpumask *mask,
-							 int vector);
-
-extern int no_broadcast;
-
-#ifdef CONFIG_X86_32
-extern void default_send_IPI_mask_sequence_logical(const struct cpumask *mask,
-							 int vector);
-extern void default_send_IPI_mask_allbutself_logical(const struct cpumask *mask,
-							 int vector);
-extern void default_send_IPI_mask_logical(const struct cpumask *mask,
-						 int vector);
-extern void default_send_IPI_allbutself(int vector);
-extern void default_send_IPI_all(int vector);
-extern void default_send_IPI_self(int vector);
-#endif
-
-#endif
-
-#endif /* _ASM_X86_IPI_H */
--- a/arch/x86/kernel/apic/apic_flat_64.c
+++ b/arch/x86/kernel/apic/apic_flat_64.c
@@ -15,7 +15,8 @@
 #include <asm/jailhouse_para.h>
 #include <asm/apic_flat_64.h>
 #include <asm/apic.h>
-#include <asm/ipi.h>
+
+#include "ipi.h"
 
 static struct apic apic_physflat;
 static struct apic apic_flat;
--- a/arch/x86/kernel/apic/apic_numachip.c
+++ b/arch/x86/kernel/apic/apic_numachip.c
@@ -18,7 +18,8 @@
 
 #include <asm/apic_flat_64.h>
 #include <asm/pgtable.h>
-#include <asm/ipi.h>
+
+#include "ipi.h"
 
 u8 numachip_system __read_mostly;
 static const struct apic apic_numachip1;
--- a/arch/x86/kernel/apic/bigsmp_32.c
+++ b/arch/x86/kernel/apic/bigsmp_32.c
@@ -4,18 +4,13 @@
  *
  * Drives the local APIC in "clustered mode".
  */
-#include <linux/threads.h>
 #include <linux/cpumask.h>
-#include <linux/kernel.h>
-#include <linux/init.h>
 #include <linux/dmi.h>
 #include <linux/smp.h>
 
-#include <asm/apicdef.h>
-#include <asm/fixmap.h>
-#include <asm/mpspec.h>
 #include <asm/apic.h>
-#include <asm/ipi.h>
+
+#include "ipi.h"
 
 static unsigned bigsmp_get_apic_id(unsigned long x)
 {
--- a/arch/x86/kernel/apic/ipi.c
+++ b/arch/x86/kernel/apic/ipi.c
@@ -3,7 +3,8 @@
 #include <linux/cpumask.h>
 
 #include <asm/apic.h>
-#include <asm/ipi.h>
+
+#include "ipi.h"
 
 void __default_send_IPI_shortcut(unsigned int shortcut, int vector, unsigned int dest)
 {
--- /dev/null
+++ b/arch/x86/kernel/apic/ipi.h
@@ -0,0 +1,90 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef _ASM_X86_IPI_H
+#define _ASM_X86_IPI_H
+
+#ifdef CONFIG_X86_LOCAL_APIC
+
+/*
+ * Copyright 2004 James Cleverdon, IBM.
+ *
+ * Generic APIC InterProcessor Interrupt code.
+ *
+ * Moved to include file by James Cleverdon from
+ * arch/x86-64/kernel/smp.c
+ *
+ * Copyrights from kernel/smp.c:
+ *
+ * (c) 1995 Alan Cox, Building #3 <alan@redhat.com>
+ * (c) 1998-99, 2000 Ingo Molnar <mingo@redhat.com>
+ * (c) 2002,2003 Andi Kleen, SuSE Labs.
+ */
+
+#include <asm/hw_irq.h>
+#include <asm/apic.h>
+#include <asm/smp.h>
+
+/*
+ * the following functions deal with sending IPIs between CPUs.
+ *
+ * We use 'broadcast', CPU->CPU IPIs and self-IPIs too.
+ */
+
+static inline unsigned int __prepare_ICR(unsigned int shortcut, int vector,
+					 unsigned int dest)
+{
+	unsigned int icr = shortcut | dest;
+
+	switch (vector) {
+	default:
+		icr |= APIC_DM_FIXED | vector;
+		break;
+	case NMI_VECTOR:
+		icr |= APIC_DM_NMI;
+		break;
+	}
+	return icr;
+}
+
+static inline int __prepare_ICR2(unsigned int mask)
+{
+	return SET_APIC_DEST_FIELD(mask);
+}
+
+static inline void __xapic_wait_icr_idle(void)
+{
+	while (native_apic_mem_read(APIC_ICR) & APIC_ICR_BUSY)
+		cpu_relax();
+}
+
+void __default_send_IPI_shortcut(unsigned int shortcut, int vector, unsigned int dest);
+
+/*
+ * This is used to send an IPI with no shorthand notation (the destination is
+ * specified in bits 56 to 63 of the ICR).
+ */
+void __default_send_IPI_dest_field(unsigned int mask, int vector, unsigned int dest);
+
+extern void default_send_IPI_single(int cpu, int vector);
+extern void default_send_IPI_single_phys(int cpu, int vector);
+extern void default_send_IPI_mask_sequence_phys(const struct cpumask *mask,
+						 int vector);
+extern void default_send_IPI_mask_allbutself_phys(const struct cpumask *mask,
+							 int vector);
+
+extern int no_broadcast;
+
+#ifdef CONFIG_X86_32
+extern void default_send_IPI_mask_sequence_logical(const struct cpumask *mask,
+							 int vector);
+extern void default_send_IPI_mask_allbutself_logical(const struct cpumask *mask,
+							 int vector);
+extern void default_send_IPI_mask_logical(const struct cpumask *mask,
+						 int vector);
+extern void default_send_IPI_allbutself(int vector);
+extern void default_send_IPI_all(int vector);
+extern void default_send_IPI_self(int vector);
+#endif
+
+#endif
+
+#endif /* _ASM_X86_IPI_H */
--- a/arch/x86/kernel/apic/probe_32.c
+++ b/arch/x86/kernel/apic/probe_32.c
@@ -11,7 +11,8 @@
 
 #include <asm/apic.h>
 #include <asm/acpi.h>
-#include <asm/ipi.h>
+
+#include "ipi.h"
 
 #ifdef CONFIG_HOTPLUG_CPU
 #define DEFAULT_SEND_IPI	(1)
--- a/arch/x86/kernel/apic/probe_64.c
+++ b/arch/x86/kernel/apic/probe_64.c
@@ -9,7 +9,8 @@
  * James Cleverdon.
  */
 #include <asm/apic.h>
-#include <asm/ipi.h>
+
+#include "ipi.h"
 
 /*
  * Check the APIC IDs in bios_cpu_apicid and choose the APIC mode.
--- a/arch/x86/kernel/apic/x2apic_phys.c
+++ b/arch/x86/kernel/apic/x2apic_phys.c
@@ -3,9 +3,8 @@
 #include <linux/cpumask.h>
 #include <linux/acpi.h>
 
-#include <asm/ipi.h>
-
 #include "x2apic.h"
+#include "ipi.h"
 
 int x2apic_phys;
 



^ permalink raw reply	[flat|nested] 23+ messages in thread

* [patch 07/18] x86/apic: Move apic_flat_64 header into apic directory
  2019-07-03 10:54 [patch 00/18] x86/apic: Support for IPI shorthands Thomas Gleixner
                   ` (5 preceding siblings ...)
  2019-07-03 10:54 ` [patch 06/18] x86/apic: Move ipi header into apic directory Thomas Gleixner
@ 2019-07-03 10:54 ` Thomas Gleixner
  2019-07-03 10:54 ` [patch 08/18] x86/apic: Consolidate the apic local headers Thomas Gleixner
                   ` (10 subsequent siblings)
  17 siblings, 0 replies; 23+ messages in thread
From: Thomas Gleixner @ 2019-07-03 10:54 UTC (permalink / raw)
  To: LKML; +Cc: x86, Nadav Amit, Ricardo Neri, Stephane Eranian, Feng Tang

Only used locally.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/apic_flat_64.h  |    8 --------
 arch/x86/kernel/apic/apic_flat_64.c  |    2 +-
 arch/x86/kernel/apic/apic_flat_64.h  |    8 ++++++++
 arch/x86/kernel/apic/apic_numachip.c |    2 +-
 4 files changed, 10 insertions(+), 10 deletions(-)

--- a/arch/x86/include/asm/apic_flat_64.h
+++ /dev/null
@@ -1,8 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _ASM_X86_APIC_FLAT_64_H
-#define _ASM_X86_APIC_FLAT_64_H
-
-extern void flat_init_apic_ldr(void);
-
-#endif
-
--- a/arch/x86/kernel/apic/apic_flat_64.c
+++ b/arch/x86/kernel/apic/apic_flat_64.c
@@ -13,9 +13,9 @@
 #include <linux/acpi.h>
 
 #include <asm/jailhouse_para.h>
-#include <asm/apic_flat_64.h>
 #include <asm/apic.h>
 
+#include "apic_flat_64.h"
 #include "ipi.h"
 
 static struct apic apic_physflat;
--- /dev/null
+++ b/arch/x86/kernel/apic/apic_flat_64.h
@@ -0,0 +1,8 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_X86_APIC_FLAT_64_H
+#define _ASM_X86_APIC_FLAT_64_H
+
+extern void flat_init_apic_ldr(void);
+
+#endif
+
--- a/arch/x86/kernel/apic/apic_numachip.c
+++ b/arch/x86/kernel/apic/apic_numachip.c
@@ -16,9 +16,9 @@
 #include <asm/numachip/numachip.h>
 #include <asm/numachip/numachip_csr.h>
 
-#include <asm/apic_flat_64.h>
 #include <asm/pgtable.h>
 
+#include "apic_flat_64.h"
 #include "ipi.h"
 
 u8 numachip_system __read_mostly;




* [patch 08/18] x86/apic: Consolidate the apic local headers
  2019-07-03 10:54 [patch 00/18] x86/apic: Support for IPI shorthands Thomas Gleixner
                   ` (6 preceding siblings ...)
  2019-07-03 10:54 ` [patch 07/18] x86/apic: Move apic_flat_64 " Thomas Gleixner
@ 2019-07-03 10:54 ` Thomas Gleixner
  2019-07-03 10:54 ` [patch 09/18] smp/hotplug: Track booted once CPUs in a cpumask Thomas Gleixner
                   ` (9 subsequent siblings)
  17 siblings, 0 replies; 23+ messages in thread
From: Thomas Gleixner @ 2019-07-03 10:54 UTC (permalink / raw)
  To: LKML; +Cc: x86, Nadav Amit, Ricardo Neri, Stephane Eranian, Feng Tang

Now there are three small local headers. Some contain functions which are
only used in one source file.

Move all the shared inlines and declarations into a single local header, and
move the inlines which are used in only one source file into that file.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/apic/apic_flat_64.c   |    3 -
 arch/x86/kernel/apic/apic_flat_64.h   |    8 ---
 arch/x86/kernel/apic/apic_numachip.c  |    3 -
 arch/x86/kernel/apic/bigsmp_32.c      |    2 
 arch/x86/kernel/apic/ipi.c            |   14 ++++-
 arch/x86/kernel/apic/ipi.h            |   90 ----------------------------------
 arch/x86/kernel/apic/local.h          |   63 +++++++++++++++++++++++
 arch/x86/kernel/apic/probe_32.c       |    3 -
 arch/x86/kernel/apic/probe_64.c       |    2 
 arch/x86/kernel/apic/x2apic.h         |    9 ---
 arch/x86/kernel/apic/x2apic_cluster.c |    2 
 arch/x86/kernel/apic/x2apic_phys.c    |    3 -
 12 files changed, 83 insertions(+), 119 deletions(-)

--- a/arch/x86/kernel/apic/apic_flat_64.c
+++ b/arch/x86/kernel/apic/apic_flat_64.c
@@ -15,8 +15,7 @@
 #include <asm/jailhouse_para.h>
 #include <asm/apic.h>
 
-#include "apic_flat_64.h"
-#include "ipi.h"
+#include "local.h"
 
 static struct apic apic_physflat;
 static struct apic apic_flat;
--- a/arch/x86/kernel/apic/apic_flat_64.h
+++ /dev/null
@@ -1,8 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _ASM_X86_APIC_FLAT_64_H
-#define _ASM_X86_APIC_FLAT_64_H
-
-extern void flat_init_apic_ldr(void);
-
-#endif
-
--- a/arch/x86/kernel/apic/apic_numachip.c
+++ b/arch/x86/kernel/apic/apic_numachip.c
@@ -18,8 +18,7 @@
 
 #include <asm/pgtable.h>
 
-#include "apic_flat_64.h"
-#include "ipi.h"
+#include "local.h"
 
 u8 numachip_system __read_mostly;
 static const struct apic apic_numachip1;
--- a/arch/x86/kernel/apic/bigsmp_32.c
+++ b/arch/x86/kernel/apic/bigsmp_32.c
@@ -10,7 +10,7 @@
 
 #include <asm/apic.h>
 
-#include "ipi.h"
+#include "local.h"
 
 static unsigned bigsmp_get_apic_id(unsigned long x)
 {
--- a/arch/x86/kernel/apic/ipi.c
+++ b/arch/x86/kernel/apic/ipi.c
@@ -1,10 +1,20 @@
 // SPDX-License-Identifier: GPL-2.0
 
 #include <linux/cpumask.h>
+#include <linux/smp.h>
 
-#include <asm/apic.h>
+#include "local.h"
 
-#include "ipi.h"
+static inline int __prepare_ICR2(unsigned int mask)
+{
+	return SET_APIC_DEST_FIELD(mask);
+}
+
+static inline void __xapic_wait_icr_idle(void)
+{
+	while (native_apic_mem_read(APIC_ICR) & APIC_ICR_BUSY)
+		cpu_relax();
+}
 
 void __default_send_IPI_shortcut(unsigned int shortcut, int vector, unsigned int dest)
 {
--- a/arch/x86/kernel/apic/ipi.h
+++ /dev/null
@@ -1,90 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-#ifndef _ASM_X86_IPI_H
-#define _ASM_X86_IPI_H
-
-#ifdef CONFIG_X86_LOCAL_APIC
-
-/*
- * Copyright 2004 James Cleverdon, IBM.
- *
- * Generic APIC InterProcessor Interrupt code.
- *
- * Moved to include file by James Cleverdon from
- * arch/x86-64/kernel/smp.c
- *
- * Copyrights from kernel/smp.c:
- *
- * (c) 1995 Alan Cox, Building #3 <alan@redhat.com>
- * (c) 1998-99, 2000 Ingo Molnar <mingo@redhat.com>
- * (c) 2002,2003 Andi Kleen, SuSE Labs.
- */
-
-#include <asm/hw_irq.h>
-#include <asm/apic.h>
-#include <asm/smp.h>
-
-/*
- * the following functions deal with sending IPIs between CPUs.
- *
- * We use 'broadcast', CPU->CPU IPIs and self-IPIs too.
- */
-
-static inline unsigned int __prepare_ICR(unsigned int shortcut, int vector,
-					 unsigned int dest)
-{
-	unsigned int icr = shortcut | dest;
-
-	switch (vector) {
-	default:
-		icr |= APIC_DM_FIXED | vector;
-		break;
-	case NMI_VECTOR:
-		icr |= APIC_DM_NMI;
-		break;
-	}
-	return icr;
-}
-
-static inline int __prepare_ICR2(unsigned int mask)
-{
-	return SET_APIC_DEST_FIELD(mask);
-}
-
-static inline void __xapic_wait_icr_idle(void)
-{
-	while (native_apic_mem_read(APIC_ICR) & APIC_ICR_BUSY)
-		cpu_relax();
-}
-
-void __default_send_IPI_shortcut(unsigned int shortcut, int vector, unsigned int dest);
-
-/*
- * This is used to send an IPI with no shorthand notation (the destination is
- * specified in bits 56 to 63 of the ICR).
- */
-void __default_send_IPI_dest_field(unsigned int mask, int vector, unsigned int dest);
-
-extern void default_send_IPI_single(int cpu, int vector);
-extern void default_send_IPI_single_phys(int cpu, int vector);
-extern void default_send_IPI_mask_sequence_phys(const struct cpumask *mask,
-						 int vector);
-extern void default_send_IPI_mask_allbutself_phys(const struct cpumask *mask,
-							 int vector);
-
-extern int no_broadcast;
-
-#ifdef CONFIG_X86_32
-extern void default_send_IPI_mask_sequence_logical(const struct cpumask *mask,
-							 int vector);
-extern void default_send_IPI_mask_allbutself_logical(const struct cpumask *mask,
-							 int vector);
-extern void default_send_IPI_mask_logical(const struct cpumask *mask,
-						 int vector);
-extern void default_send_IPI_allbutself(int vector);
-extern void default_send_IPI_all(int vector);
-extern void default_send_IPI_self(int vector);
-#endif
-
-#endif
-
-#endif /* _ASM_X86_IPI_H */
--- /dev/null
+++ b/arch/x86/kernel/apic/local.h
@@ -0,0 +1,63 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Historical copyright notices:
+ *
+ * Copyright 2004 James Cleverdon, IBM.
+ * (c) 1995 Alan Cox, Building #3 <alan@redhat.com>
+ * (c) 1998-99, 2000 Ingo Molnar <mingo@redhat.com>
+ * (c) 2002,2003 Andi Kleen, SuSE Labs.
+ */
+#include <asm/apic.h>
+
+/* APIC flat 64 */
+void flat_init_apic_ldr(void);
+
+/* X2APIC */
+int x2apic_apic_id_valid(u32 apicid);
+int x2apic_apic_id_registered(void);
+void __x2apic_send_IPI_dest(unsigned int apicid, int vector, unsigned int dest);
+unsigned int x2apic_get_apic_id(unsigned long id);
+u32 x2apic_set_apic_id(unsigned int id);
+int x2apic_phys_pkg_id(int initial_apicid, int index_msb);
+void x2apic_send_IPI_self(int vector);
+
+/* IPI */
+static inline unsigned int __prepare_ICR(unsigned int shortcut, int vector,
+					 unsigned int dest)
+{
+	unsigned int icr = shortcut | dest;
+
+	switch (vector) {
+	default:
+		icr |= APIC_DM_FIXED | vector;
+		break;
+	case NMI_VECTOR:
+		icr |= APIC_DM_NMI;
+		break;
+	}
+	return icr;
+}
+
+void __default_send_IPI_shortcut(unsigned int shortcut, int vector, unsigned int dest);
+
+/*
+ * This is used to send an IPI with no shorthand notation (the destination is
+ * specified in bits 56 to 63 of the ICR).
+ */
+void __default_send_IPI_dest_field(unsigned int mask, int vector, unsigned int dest);
+
+void default_send_IPI_single(int cpu, int vector);
+void default_send_IPI_single_phys(int cpu, int vector);
+void default_send_IPI_mask_sequence_phys(const struct cpumask *mask, int vector);
+void default_send_IPI_mask_allbutself_phys(const struct cpumask *mask, int vector);
+
+extern int no_broadcast;
+
+#ifdef CONFIG_X86_32
+void default_send_IPI_mask_sequence_logical(const struct cpumask *mask, int vector);
+void default_send_IPI_mask_allbutself_logical(const struct cpumask *mask, int vector);
+void default_send_IPI_mask_logical(const struct cpumask *mask, int vector);
+void default_send_IPI_allbutself(int vector);
+void default_send_IPI_all(int vector);
+void default_send_IPI_self(int vector);
+#endif
--- a/arch/x86/kernel/apic/probe_32.c
+++ b/arch/x86/kernel/apic/probe_32.c
@@ -8,11 +8,12 @@
  */
 #include <linux/export.h>
 #include <linux/errno.h>
+#include <linux/smp.h>
 
 #include <asm/apic.h>
 #include <asm/acpi.h>
 
-#include "ipi.h"
+#include "local.h"
 
 #ifdef CONFIG_HOTPLUG_CPU
 #define DEFAULT_SEND_IPI	(1)
--- a/arch/x86/kernel/apic/probe_64.c
+++ b/arch/x86/kernel/apic/probe_64.c
@@ -10,7 +10,7 @@
  */
 #include <asm/apic.h>
 
-#include "ipi.h"
+#include "local.h"
 
 /*
  * Check the APIC IDs in bios_cpu_apicid and choose the APIC mode.
--- a/arch/x86/kernel/apic/x2apic.h
+++ /dev/null
@@ -1,9 +0,0 @@
-/* Common bits for X2APIC cluster/physical modes. */
-
-int x2apic_apic_id_valid(u32 apicid);
-int x2apic_apic_id_registered(void);
-void __x2apic_send_IPI_dest(unsigned int apicid, int vector, unsigned int dest);
-unsigned int x2apic_get_apic_id(unsigned long id);
-u32 x2apic_set_apic_id(unsigned int id);
-int x2apic_phys_pkg_id(int initial_apicid, int index_msb);
-void x2apic_send_IPI_self(int vector);
--- a/arch/x86/kernel/apic/x2apic_cluster.c
+++ b/arch/x86/kernel/apic/x2apic_cluster.c
@@ -7,7 +7,7 @@
 
 #include <asm/apic.h>
 
-#include "x2apic.h"
+#include "local.h"
 
 struct cluster_mask {
 	unsigned int	clusterid;
--- a/arch/x86/kernel/apic/x2apic_phys.c
+++ b/arch/x86/kernel/apic/x2apic_phys.c
@@ -3,8 +3,7 @@
 #include <linux/cpumask.h>
 #include <linux/acpi.h>
 
-#include "x2apic.h"
-#include "ipi.h"
+#include "local.h"
 
 int x2apic_phys;
 




* [patch 09/18] smp/hotplug: Track booted once CPUs in a cpumask
  2019-07-03 10:54 [patch 00/18] x86/apic: Support for IPI shorthands Thomas Gleixner
                   ` (7 preceding siblings ...)
  2019-07-03 10:54 ` [patch 08/18] x86/apic: Consolidate the apic local headers Thomas Gleixner
@ 2019-07-03 10:54 ` Thomas Gleixner
  2019-07-03 10:54 ` [patch 10/18] x86/cpu: Move arch_smt_update() to a neutral place Thomas Gleixner
                   ` (8 subsequent siblings)
  17 siblings, 0 replies; 23+ messages in thread
From: Thomas Gleixner @ 2019-07-03 10:54 UTC (permalink / raw)
  To: LKML; +Cc: x86, Nadav Amit, Ricardo Neri, Stephane Eranian, Feng Tang

The 'booted once' information, which is required to correctly deal with the
MCE broadcast issue on X86, is stored in the per CPU hotplug state, which
is perfectly fine for the intended purpose.

X86 needs that information for supporting NMI broadcasting via shorthands,
but retrieving it from per cpu data is cumbersome.

Move it to a cpumask so the information can be checked against the
cpu_present_mask quickly.

No functional change intended.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/linux/cpumask.h |    2 ++
 kernel/cpu.c            |   11 +++++++----
 2 files changed, 9 insertions(+), 4 deletions(-)

--- a/include/linux/cpumask.h
+++ b/include/linux/cpumask.h
@@ -115,6 +115,8 @@ extern struct cpumask __cpu_active_mask;
 #define cpu_active(cpu)		((cpu) == 0)
 #endif
 
+extern cpumask_t cpus_booted_once_mask;
+
 static inline void cpu_max_bits_warn(unsigned int cpu, unsigned int bits)
 {
 #ifdef CONFIG_DEBUG_PER_CPU_MAPS
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -62,7 +62,6 @@ struct cpuhp_cpu_state {
 	bool			rollback;
 	bool			single;
 	bool			bringup;
-	bool			booted_once;
 	struct hlist_node	*node;
 	struct hlist_node	*last;
 	enum cpuhp_state	cb_state;
@@ -76,6 +75,10 @@ static DEFINE_PER_CPU(struct cpuhp_cpu_s
 	.fail = CPUHP_INVALID,
 };
 
+#ifdef CONFIG_SMP
+cpumask_t cpus_booted_once_mask;
+#endif
+
 #if defined(CONFIG_LOCKDEP) && defined(CONFIG_SMP)
 static struct lockdep_map cpuhp_state_up_map =
 	STATIC_LOCKDEP_MAP_INIT("cpuhp_state-up", &cpuhp_state_up_map);
@@ -433,7 +436,7 @@ static inline bool cpu_smt_allowed(unsig
 	 * CPU. Otherwise, a broadacasted MCE observing CR4.MCE=0b on any
 	 * core will shutdown the machine.
 	 */
-	return !per_cpu(cpuhp_state, cpu).booted_once;
+	return !cpumask_test_cpu(cpu, &cpus_booted_once_mask);
 }
 #else
 static inline bool cpu_smt_allowed(unsigned int cpu) { return true; }
@@ -1066,7 +1069,7 @@ void notify_cpu_starting(unsigned int cp
 	int ret;
 
 	rcu_cpu_starting(cpu);	/* Enables RCU usage on this CPU. */
-	st->booted_once = true;
+	cpumask_set_cpu(cpu, &cpus_booted_once_mask);
 	while (st->state < target) {
 		st->state++;
 		ret = cpuhp_invoke_callback(cpu, st->state, true, NULL, NULL);
@@ -2324,7 +2327,7 @@ void __init boot_cpu_init(void)
 void __init boot_cpu_hotplug_init(void)
 {
 #ifdef CONFIG_SMP
-	this_cpu_write(cpuhp_state.booted_once, true);
+	cpumask_set_cpu(smp_processor_id(), &cpus_booted_once_mask);
 #endif
 	this_cpu_write(cpuhp_state.state, CPUHP_ONLINE);
 }




* [patch 10/18] x86/cpu: Move arch_smt_update() to a neutral place
  2019-07-03 10:54 [patch 00/18] x86/apic: Support for IPI shorthands Thomas Gleixner
                   ` (8 preceding siblings ...)
  2019-07-03 10:54 ` [patch 09/18] smp/hotplug: Track booted once CPUs in a cpumask Thomas Gleixner
@ 2019-07-03 10:54 ` Thomas Gleixner
  2019-07-03 10:54 ` [patch 11/18] x86/hotplug: Silence APIC and NMI when CPU is dead Thomas Gleixner
                   ` (7 subsequent siblings)
  17 siblings, 0 replies; 23+ messages in thread
From: Thomas Gleixner @ 2019-07-03 10:54 UTC (permalink / raw)
  To: LKML; +Cc: x86, Nadav Amit, Ricardo Neri, Stephane Eranian, Feng Tang

arch_smt_update() will be used to control IPI/NMI broadcasting via the
shorthand mechanism. Keeping it in the bugs file and calling the apic
function from there is possible, but not really intuitive.

Move it to a neutral place and invoke the bugs function from there.

No functional change.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/bugs.h  |    2 ++
 arch/x86/kernel/cpu/bugs.c   |    2 +-
 arch/x86/kernel/cpu/common.c |    9 +++++++++
 3 files changed, 12 insertions(+), 1 deletion(-)

--- a/arch/x86/include/asm/bugs.h
+++ b/arch/x86/include/asm/bugs.h
@@ -18,4 +18,6 @@ int ppro_with_ram_bug(void);
 static inline int ppro_with_ram_bug(void) { return 0; }
 #endif
 
+extern void cpu_bugs_smt_update(void);
+
 #endif /* _ASM_X86_BUGS_H */
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -700,7 +700,7 @@ static void update_mds_branch_idle(void)
 
 #define MDS_MSG_SMT "MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.\n"
 
-void arch_smt_update(void)
+void cpu_bugs_smt_update(void)
 {
 	/* Enhanced IBRS implies STIBP. No update required. */
 	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1927,3 +1927,12 @@ void microcode_check(void)
 	pr_warn("x86/CPU: CPU features have changed after loading microcode, but might not take effect.\n");
 	pr_warn("x86/CPU: Please consider either early loading through initrd/built-in or a potential BIOS update.\n");
 }
+
+/*
+ * Invoked from core CPU hotplug code after hotplug operations
+ */
+void arch_smt_update(void)
+{
+	/* Handle the speculative execution misfeatures */
+	cpu_bugs_smt_update();
+}




* [patch 11/18] x86/hotplug: Silence APIC and NMI when CPU is dead
  2019-07-03 10:54 [patch 00/18] x86/apic: Support for IPI shorthands Thomas Gleixner
                   ` (9 preceding siblings ...)
  2019-07-03 10:54 ` [patch 10/18] x86/cpu: Move arch_smt_update() to a neutral place Thomas Gleixner
@ 2019-07-03 10:54 ` Thomas Gleixner
  2019-07-03 10:54 ` [patch 12/18] x86/apic: Remove dest argument from __default_send_IPI_shortcut() Thomas Gleixner
                   ` (6 subsequent siblings)
  17 siblings, 0 replies; 23+ messages in thread
From: Thomas Gleixner @ 2019-07-03 10:54 UTC (permalink / raw)
  To: LKML; +Cc: x86, Nadav Amit, Ricardo Neri, Stephane Eranian, Feng Tang

In order to support IPI/NMI broadcasting via the shorthand mechanism, the
side effects of shorthands need to be mitigated:

 Shorthand IPIs and NMIs hit all CPUs including unplugged CPUs

Neither of those can be handled on unplugged CPUs for obvious reasons.

It would be trivial to just fully disable the APIC via the enable bit in
MSR_IA32_APICBASE. But that's not possible because clearing that bit on systems
based on the 3 wire APIC bus would require a hardware reset to bring it
back as the APIC would lose track of bus arbitration. On systems with FSB
delivery APICBASE could be disabled, but it has to be guaranteed that no
interrupt is sent to the APIC while in that state and it's not clear from
the SDM whether it still responds to INIT/SIPI messages.

Therefore stay on the safe side and switch the APIC into soft disabled mode
so it won't deliver any regular vector to the CPU.

NMIs are still propagated to the 'dead' CPUs. To mitigate that add a per
cpu variable which tells the NMI handler to ignore NMIs. Note, this cannot
use the stop/restart_nmi() magic which is used in the alternatives code. A
dead CPU cannot invoke nmi_enter() or anything else due to RCU and other
reasons.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/apic.h      |    1 +
 arch/x86/include/asm/processor.h |    2 ++
 arch/x86/kernel/apic/apic.c      |   35 ++++++++++++++++++++++++-----------
 arch/x86/kernel/nmi.c            |    3 +++
 arch/x86/kernel/smpboot.c        |   13 ++++++++++++-
 5 files changed, 42 insertions(+), 12 deletions(-)

--- a/arch/x86/include/asm/apic.h
+++ b/arch/x86/include/asm/apic.h
@@ -136,6 +136,7 @@ extern int lapic_get_maxlvt(void);
 extern void clear_local_APIC(void);
 extern void disconnect_bsp_APIC(int virt_wire_setup);
 extern void disable_local_APIC(void);
+extern void apic_soft_disable(void);
 extern void lapic_shutdown(void);
 extern void sync_Arb_IDs(void);
 extern void init_bsp_APIC(void);
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -428,6 +428,8 @@ DECLARE_PER_CPU_ALIGNED(struct stack_can
 DECLARE_PER_CPU(struct irq_stack *, softirq_stack_ptr);
 #endif	/* X86_64 */
 
+DECLARE_PER_CPU(bool, cpu_ignore_nmi);
+
 extern unsigned int fpu_kernel_xstate_size;
 extern unsigned int fpu_user_xstate_size;
 
--- a/arch/x86/kernel/apic/apic.c
+++ b/arch/x86/kernel/apic/apic.c
@@ -1182,25 +1182,38 @@ void clear_local_APIC(void)
 }
 
 /**
- * disable_local_APIC - clear and disable the local APIC
+ * apic_soft_disable - Clears and software disables the local APIC on hotplug
+ *
+ * Contrary to disable_local_APIC() this does not touch the enable bit in
+ * MSR_IA32_APICBASE. Clearing that bit on systems based on the 3 wire APIC
+ * bus would require a hardware reset as the APIC would lose track of bus
+ * arbitration. On systems with FSB delivery APICBASE could be disabled,
+ * but it has to be guaranteed that no interrupt is sent to the APIC while
+ * in that state and it's not clear from the SDM whether it still responds
+ * to INIT/SIPI messages. Stay on the safe side and use software disable.
  */
-void disable_local_APIC(void)
+void apic_soft_disable(void)
 {
-	unsigned int value;
-
-	/* APIC hasn't been mapped yet */
-	if (!x2apic_mode && !apic_phys)
-		return;
+	u32 value;
 
 	clear_local_APIC();
 
-	/*
-	 * Disable APIC (implies clearing of registers
-	 * for 82489DX!).
-	 */
+	/* Soft disable APIC (implies clearing of registers for 82489DX!). */
 	value = apic_read(APIC_SPIV);
 	value &= ~APIC_SPIV_APIC_ENABLED;
 	apic_write(APIC_SPIV, value);
+}
+
+/**
+ * disable_local_APIC - clear and disable the local APIC
+ */
+void disable_local_APIC(void)
+{
+	/* APIC hasn't been mapped yet */
+	if (!x2apic_mode && !apic_phys)
+		return;
+
+	apic_soft_disable();
 
 #ifdef CONFIG_X86_32
 	/*
--- a/arch/x86/kernel/nmi.c
+++ b/arch/x86/kernel/nmi.c
@@ -512,6 +512,9 @@ NOKPROBE_SYMBOL(is_debug_stack);
 dotraplinkage notrace void
 do_nmi(struct pt_regs *regs, long error_code)
 {
+	if (IS_ENABLED(CONFIG_SMP) && this_cpu_read(cpu_ignore_nmi))
+		return;
+
 	if (this_cpu_read(nmi_state) != NMI_NOT_RUNNING) {
 		this_cpu_write(nmi_state, NMI_LATCHED);
 		return;
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -81,6 +81,9 @@
 #include <asm/spec-ctrl.h>
 #include <asm/hw_irq.h>
 
+/* Flag for the NMI path telling it to ignore the NMI */
+DEFINE_PER_CPU(bool, cpu_ignore_nmi);
+
 /* representing HT siblings of each logical CPU */
 DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_sibling_map);
 EXPORT_PER_CPU_SYMBOL(cpu_sibling_map);
@@ -263,6 +266,8 @@ static void notrace start_secondary(void
 	unlock_vector_lock();
 	cpu_set_state_online(smp_processor_id());
 	x86_platform.nmi_init();
+	/* Reenable NMI handling */
+	this_cpu_write(cpu_ignore_nmi, false);
 
 	/* enable local interrupts */
 	local_irq_enable();
@@ -1599,6 +1604,7 @@ void cpu_disable_common(void)
 	unlock_vector_lock();
 	fixup_irqs();
 	lapic_offline();
+	this_cpu_write(cpu_ignore_nmi, true);
 }
 
 int native_cpu_disable(void)
@@ -1609,7 +1615,12 @@ int native_cpu_disable(void)
 	if (ret)
 		return ret;
 
-	clear_local_APIC();
+	/*
+	 * Disable the local APIC. Otherwise IPI broadcasts will reach
+	 * it. It still responds normally to INIT, NMI, SMI, and SIPI
+	 * messages.
+	 */
+	apic_soft_disable();
 	cpu_disable_common();
 
 	return 0;




* [patch 12/18] x86/apic: Remove dest argument from __default_send_IPI_shortcut()
  2019-07-03 10:54 [patch 00/18] x86/apic: Support for IPI shorthands Thomas Gleixner
                   ` (10 preceding siblings ...)
  2019-07-03 10:54 ` [patch 11/18] x86/hotplug: Silence APIC and NMI when CPU is dead Thomas Gleixner
@ 2019-07-03 10:54 ` Thomas Gleixner
  2019-07-03 10:54 ` [patch 13/18] x86/apic: Add NMI_VECTOR wait to IPI shorthand Thomas Gleixner
                   ` (5 subsequent siblings)
  17 siblings, 0 replies; 23+ messages in thread
From: Thomas Gleixner @ 2019-07-03 10:54 UTC (permalink / raw)
  To: LKML; +Cc: x86, Nadav Amit, Ricardo Neri, Stephane Eranian, Feng Tang

The SDM states:

  "The destination shorthand field of the ICR allows the delivery mode to be
   by-passed in favor of broadcasting the IPI to all the processors on the
   system bus and/or back to itself (see Section 10.6.1, Interrupt Command
   Register (ICR)). Three destination shorthands are supported: self, all
   excluding self, and all including self. The destination mode is ignored
   when a destination shorthand is used."

So there is no point to supply the destination mode to the shorthand
delivery function.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/apic/apic_flat_64.c |    6 ++----
 arch/x86/kernel/apic/ipi.c          |   15 +++++++--------
 arch/x86/kernel/apic/local.h        |    2 +-
 arch/x86/kernel/apic/probe_64.c     |    2 +-
 4 files changed, 11 insertions(+), 14 deletions(-)

--- a/arch/x86/kernel/apic/apic_flat_64.c
+++ b/arch/x86/kernel/apic/apic_flat_64.c
@@ -90,8 +90,7 @@ static void flat_send_IPI_allbutself(int
 			_flat_send_IPI_mask(mask, vector);
 		}
 	} else if (num_online_cpus() > 1) {
-		__default_send_IPI_shortcut(APIC_DEST_ALLBUT,
-					    vector, apic->dest_logical);
+		__default_send_IPI_shortcut(APIC_DEST_ALLBUT, vector);
 	}
 }
 
@@ -100,8 +99,7 @@ static void flat_send_IPI_all(int vector
 	if (vector == NMI_VECTOR) {
 		flat_send_IPI_mask(cpu_online_mask, vector);
 	} else {
-		__default_send_IPI_shortcut(APIC_DEST_ALLINC,
-					    vector, apic->dest_logical);
+		__default_send_IPI_shortcut(APIC_DEST_ALLINC, vector);
 	}
 }
 
--- a/arch/x86/kernel/apic/ipi.c
+++ b/arch/x86/kernel/apic/ipi.c
@@ -16,7 +16,7 @@ static inline void __xapic_wait_icr_idle
 		cpu_relax();
 }
 
-void __default_send_IPI_shortcut(unsigned int shortcut, int vector, unsigned int dest)
+void __default_send_IPI_shortcut(unsigned int shortcut, int vector)
 {
 	/*
 	 * Subtle. In the case of the 'never do double writes' workaround
@@ -33,9 +33,10 @@ void __default_send_IPI_shortcut(unsigne
 	__xapic_wait_icr_idle();
 
 	/*
-	 * No need to touch the target chip field
+	 * No need to touch the target chip field. Also the destination
+	 * mode is ignored when a shorthand is used.
 	 */
-	cfg = __prepare_ICR(shortcut, vector, dest);
+	cfg = __prepare_ICR(shortcut, vector, 0);
 
 	/*
 	 * Send the IPI. The write to APIC_ICR fires this off.
@@ -202,8 +203,7 @@ void default_send_IPI_allbutself(int vec
 	if (no_broadcast || vector == NMI_VECTOR) {
 		apic->send_IPI_mask_allbutself(cpu_online_mask, vector);
 	} else {
-		__default_send_IPI_shortcut(APIC_DEST_ALLBUT, vector,
-					    apic->dest_logical);
+		__default_send_IPI_shortcut(APIC_DEST_ALLBUT, vector);
 	}
 }
 
@@ -212,14 +212,13 @@ void default_send_IPI_all(int vector)
 	if (no_broadcast || vector == NMI_VECTOR) {
 		apic->send_IPI_mask(cpu_online_mask, vector);
 	} else {
-		__default_send_IPI_shortcut(APIC_DEST_ALLINC, vector,
-					    apic->dest_logical);
+		__default_send_IPI_shortcut(APIC_DEST_ALLINC, vector);
 	}
 }
 
 void default_send_IPI_self(int vector)
 {
-	__default_send_IPI_shortcut(APIC_DEST_SELF, vector, apic->dest_logical);
+	__default_send_IPI_shortcut(APIC_DEST_SELF, vector);
 }
 
 /* must come after the send_IPI functions above for inlining */
--- a/arch/x86/kernel/apic/local.h
+++ b/arch/x86/kernel/apic/local.h
@@ -38,7 +38,7 @@ static inline unsigned int __prepare_ICR
 	return icr;
 }
 
-void __default_send_IPI_shortcut(unsigned int shortcut, int vector, unsigned int dest);
+void __default_send_IPI_shortcut(unsigned int shortcut, int vector);
 
 /*
  * This is used to send an IPI with no shorthand notation (the destination is
--- a/arch/x86/kernel/apic/probe_64.c
+++ b/arch/x86/kernel/apic/probe_64.c
@@ -40,7 +40,7 @@ void __init default_setup_apic_routing(v
 
 void apic_send_IPI_self(int vector)
 {
-	__default_send_IPI_shortcut(APIC_DEST_SELF, vector, APIC_DEST_PHYSICAL);
+	__default_send_IPI_shortcut(APIC_DEST_SELF, vector);
 }
 
 int __init default_acpi_madt_oem_check(char *oem_id, char *oem_table_id)




* [patch 13/18] x86/apic: Add NMI_VECTOR wait to IPI shorthand
  2019-07-03 10:54 [patch 00/18] x86/apic: Support for IPI shorthands Thomas Gleixner
                   ` (11 preceding siblings ...)
  2019-07-03 10:54 ` [patch 12/18] x86/apic: Remove dest argument from __default_send_IPI_shortcut() Thomas Gleixner
@ 2019-07-03 10:54 ` Thomas Gleixner
  2019-07-03 10:54 ` [patch 14/18] x86/apic: Move no_ipi_broadcast() out of 32bit Thomas Gleixner
                   ` (4 subsequent siblings)
  17 siblings, 0 replies; 23+ messages in thread
From: Thomas Gleixner @ 2019-07-03 10:54 UTC (permalink / raw)
  To: LKML; +Cc: x86, Nadav Amit, Ricardo Neri, Stephane Eranian, Feng Tang

To support NMI shorthand broadcasts, use the safe ICR idle wait when the
vector to be delivered is NMI_VECTOR.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/apic/ipi.c |    5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

--- a/arch/x86/kernel/apic/ipi.c
+++ b/arch/x86/kernel/apic/ipi.c
@@ -30,7 +30,10 @@ void __default_send_IPI_shortcut(unsigne
 	/*
 	 * Wait for idle.
 	 */
-	__xapic_wait_icr_idle();
+	if (unlikely(vector == NMI_VECTOR))
+		safe_apic_wait_icr_idle();
+	else
+		__xapic_wait_icr_idle();
 
 	/*
 	 * No need to touch the target chip field. Also the destination




* [patch 14/18] x86/apic: Move no_ipi_broadcast() out of 32bit
  2019-07-03 10:54 [patch 00/18] x86/apic: Support for IPI shorthands Thomas Gleixner
                   ` (12 preceding siblings ...)
  2019-07-03 10:54 ` [patch 13/18] x86/apic: Add NMI_VECTOR wait to IPI shorthand Thomas Gleixner
@ 2019-07-03 10:54 ` Thomas Gleixner
  2019-07-03 10:54 ` [patch 15/18] x86/apic: Add static key to Control IPI shorthands Thomas Gleixner
                   ` (3 subsequent siblings)
  17 siblings, 0 replies; 23+ messages in thread
From: Thomas Gleixner @ 2019-07-03 10:54 UTC (permalink / raw)
  To: LKML; +Cc: x86, Nadav Amit, Ricardo Neri, Stephane Eranian, Feng Tang

For the upcoming shorthand support for all APIC incarnations, the command
line option needs to be available on 64-bit as well.

While at it, rename the control variable, make it static and mark it
__ro_after_init.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/apic/ipi.c      |   29 +++++++++++++++++++++++++++--
 arch/x86/kernel/apic/local.h    |    2 --
 arch/x86/kernel/apic/probe_32.c |   25 -------------------------
 3 files changed, 27 insertions(+), 29 deletions(-)

--- a/arch/x86/kernel/apic/ipi.c
+++ b/arch/x86/kernel/apic/ipi.c
@@ -5,6 +5,31 @@
 
 #include "local.h"
 
+#ifdef CONFIG_SMP
+#ifdef CONFIG_HOTPLUG_CPU
+#define DEFAULT_SEND_IPI	(1)
+#else
+#define DEFAULT_SEND_IPI	(0)
+#endif
+
+static int apic_ipi_shorthand_off __ro_after_init = DEFAULT_SEND_IPI;
+
+static __init int apic_ipi_shorthand(char *str)
+{
+	get_option(&str, &apic_ipi_shorthand_off);
+	return 1;
+}
+__setup("no_ipi_broadcast=", apic_ipi_shorthand);
+
+static int __init print_ipi_mode(void)
+{
+	pr_info("IPI shorthand broadcast: %s\n",
+		apic_ipi_shorthand_off ? "disabled" : "enabled");
+	return 0;
+}
+late_initcall(print_ipi_mode);
+#endif
+
 static inline int __prepare_ICR2(unsigned int mask)
 {
 	return SET_APIC_DEST_FIELD(mask);
@@ -203,7 +228,7 @@ void default_send_IPI_allbutself(int vec
 	if (num_online_cpus() < 2)
 		return;
 
-	if (no_broadcast || vector == NMI_VECTOR) {
+	if (apic_ipi_shorthand_off || vector == NMI_VECTOR) {
 		apic->send_IPI_mask_allbutself(cpu_online_mask, vector);
 	} else {
 		__default_send_IPI_shortcut(APIC_DEST_ALLBUT, vector);
@@ -212,7 +237,7 @@ void default_send_IPI_allbutself(int vec
 
 void default_send_IPI_all(int vector)
 {
-	if (no_broadcast || vector == NMI_VECTOR) {
+	if (apic_ipi_shorthand_off || vector == NMI_VECTOR) {
 		apic->send_IPI_mask(cpu_online_mask, vector);
 	} else {
 		__default_send_IPI_shortcut(APIC_DEST_ALLINC, vector);
--- a/arch/x86/kernel/apic/local.h
+++ b/arch/x86/kernel/apic/local.h
@@ -51,8 +51,6 @@ void default_send_IPI_single_phys(int cp
 void default_send_IPI_mask_sequence_phys(const struct cpumask *mask, int vector);
 void default_send_IPI_mask_allbutself_phys(const struct cpumask *mask, int vector);
 
-extern int no_broadcast;
-
 #ifdef CONFIG_X86_32
 void default_send_IPI_mask_sequence_logical(const struct cpumask *mask, int vector);
 void default_send_IPI_mask_allbutself_logical(const struct cpumask *mask, int vector);
--- a/arch/x86/kernel/apic/probe_32.c
+++ b/arch/x86/kernel/apic/probe_32.c
@@ -15,31 +15,6 @@
 
 #include "local.h"
 
-#ifdef CONFIG_HOTPLUG_CPU
-#define DEFAULT_SEND_IPI	(1)
-#else
-#define DEFAULT_SEND_IPI	(0)
-#endif
-
-int no_broadcast = DEFAULT_SEND_IPI;
-
-static __init int no_ipi_broadcast(char *str)
-{
-	get_option(&str, &no_broadcast);
-	pr_info("Using %s mode\n",
-		no_broadcast ? "No IPI Broadcast" : "IPI Broadcast");
-	return 1;
-}
-__setup("no_ipi_broadcast=", no_ipi_broadcast);
-
-static int __init print_ipi_mode(void)
-{
-	pr_info("Using IPI %s mode\n",
-		no_broadcast ? "No-Shortcut" : "Shortcut");
-	return 0;
-}
-late_initcall(print_ipi_mode);
-
 static int default_x86_32_early_logical_apicid(int cpu)
 {
 	return 1 << cpu;



^ permalink raw reply	[flat|nested] 23+ messages in thread

* [patch 15/18] x86/apic: Add static key to control IPI shorthands
  2019-07-03 10:54 [patch 00/18] x86/apic: Support for IPI shorthands Thomas Gleixner
                   ` (13 preceding siblings ...)
  2019-07-03 10:54 ` [patch 14/18] x86/apic: Move no_ipi_broadcast() out of 32bit Thomas Gleixner
@ 2019-07-03 10:54 ` Thomas Gleixner
  2019-07-03 10:54 ` [patch 16/18] x86/apic: Convert 32bit to IPI shorthand static key Thomas Gleixner
                   ` (2 subsequent siblings)
  17 siblings, 0 replies; 23+ messages in thread
From: Thomas Gleixner @ 2019-07-03 10:54 UTC (permalink / raw)
  To: LKML; +Cc: x86, Nadav Amit, Ricardo Neri, Stephane Eranian, Feng Tang

The IPI shorthand functionality delivers IPI/NMI broadcasts to all CPUs in
the system. This can have side effects similar to the MCE broadcast when
CPUs are waiting in the BIOS or have been offlined.

The kernel already tracks for offlined CPUs whether they have been brought
up at least once, so that the CR4 MCE bit is set, which makes sure that
MCE broadcasts can't brick the machine.

Utilize that information and compare it to the cpu_present_mask. If all
present CPUs have been brought up at least once, then the broadcast side
effect is mitigated by disabling regular interrupt/IPI delivery in the APIC
itself and by the cpu_ignore_nmi check at the beginning of the NMI handler.

Use a static key to switch between broadcasting via shorthands or sending
the IPI/NMI one by one.
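The conditions above can be captured in a small standalone sketch. shorthand_decision() is a hypothetical name and plain 64-bit masks stand in for the kernel's cpumasks; it only mirrors the decision described here, not the static-key machinery:

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical sketch of the apic_smt_update() decision: enable the
 * shorthand static key only if the user did not disable it on the
 * command line, more than one CPU is online, and every present CPU has
 * been booted at least once.
 */
static bool shorthand_decision(bool cmdline_off, unsigned int num_online,
			       uint64_t present_mask,
			       uint64_t booted_once_mask)
{
	if (cmdline_off || num_online == 1)
		return false;
	/* models cpumask_equal(cpu_present_mask, &cpus_booted_once_mask) */
	return present_mask == booted_once_mask;
}
```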

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/apic.h  |    2 ++
 arch/x86/kernel/apic/ipi.c   |   24 +++++++++++++++++++++++-
 arch/x86/kernel/apic/local.h |    6 ++++++
 arch/x86/kernel/cpu/common.c |    2 ++
 4 files changed, 33 insertions(+), 1 deletion(-)

--- a/arch/x86/include/asm/apic.h
+++ b/arch/x86/include/asm/apic.h
@@ -505,8 +505,10 @@ extern int default_check_phys_apicid_pre
 
 #ifdef CONFIG_SMP
 bool apic_id_is_primary_thread(unsigned int id);
+void apic_smt_update(void);
 #else
 static inline bool apic_id_is_primary_thread(unsigned int id) { return false; }
+static inline void apic_smt_update(void) { }
 #endif
 
 extern void irq_enter(void);
--- a/arch/x86/kernel/apic/ipi.c
+++ b/arch/x86/kernel/apic/ipi.c
@@ -5,6 +5,8 @@
 
 #include "local.h"
 
+DEFINE_STATIC_KEY_FALSE(apic_use_ipi_shorthand);
+
 #ifdef CONFIG_SMP
 #ifdef CONFIG_HOTPLUG_CPU
 #define DEFAULT_SEND_IPI	(1)
@@ -28,7 +30,27 @@ static int __init print_ipi_mode(void)
 	return 0;
 }
 late_initcall(print_ipi_mode);
-#endif
+
+void apic_smt_update(void)
+{
+	/*
+	 * Do not switch to broadcast mode if:
+	 * - Disabled on the command line
+	 * - Only a single CPU is online
+	 * - Not all present CPUs have been at least booted once
+	 *
+	 * The latter is important as the local APIC might be in some
+	 * random state and a broadcast might cause havoc. That's
+	 * especially true for NMI broadcasting.
+	 */
+	if (apic_ipi_shorthand_off || num_online_cpus() == 1 ||
+	    !cpumask_equal(cpu_present_mask, &cpus_booted_once_mask)) {
+		static_branch_disable(&apic_use_ipi_shorthand);
+	} else {
+		static_branch_enable(&apic_use_ipi_shorthand);
+	}
+}
+#endif /* CONFIG_SMP */
 
 static inline int __prepare_ICR2(unsigned int mask)
 {
--- a/arch/x86/kernel/apic/local.h
+++ b/arch/x86/kernel/apic/local.h
@@ -7,6 +7,9 @@
  * (c) 1998-99, 2000 Ingo Molnar <mingo@redhat.com>
  * (c) 2002,2003 Andi Kleen, SuSE Labs.
  */
+
+#include <linux/jump_label.h>
+
 #include <asm/apic.h>
 
 /* APIC flat 64 */
@@ -22,6 +25,9 @@ int x2apic_phys_pkg_id(int initial_apici
 void x2apic_send_IPI_self(int vector);
 
 /* IPI */
+
+DECLARE_STATIC_KEY_FALSE(apic_use_ipi_shorthand);
+
 static inline unsigned int __prepare_ICR(unsigned int shortcut, int vector,
 					 unsigned int dest)
 {
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1888,4 +1888,6 @@ void arch_smt_update(void)
 {
 	/* Handle the speculative execution misfeatures */
 	cpu_bugs_smt_update();
+	/* Check whether IPI broadcasting can be enabled */
+	apic_smt_update();
 }



^ permalink raw reply	[flat|nested] 23+ messages in thread

* [patch 16/18] x86/apic: Convert 32bit to IPI shorthand static key
  2019-07-03 10:54 [patch 00/18] x86/apic: Support for IPI shorthands Thomas Gleixner
                   ` (14 preceding siblings ...)
  2019-07-03 10:54 ` [patch 15/18] x86/apic: Add static key to control IPI shorthands Thomas Gleixner
@ 2019-07-03 10:54 ` Thomas Gleixner
  2019-07-03 18:06   ` Nadav Amit
  2019-07-03 10:54 ` [patch 17/18] x86/apic/flat64: Add conditional IPI shorthands support Thomas Gleixner
  2019-07-03 10:54 ` [patch 18/18] x86/apic/x2apic: " Thomas Gleixner
  17 siblings, 1 reply; 23+ messages in thread
From: Thomas Gleixner @ 2019-07-03 10:54 UTC (permalink / raw)
  To: LKML; +Cc: x86, Nadav Amit, Ricardo Neri, Stephane Eranian, Feng Tang

Broadcast now depends on the fact that all present CPUs have been booted
at least once, so handling broadcast IPIs does no harm. If a CPU is
offline, it does not react to regular IPIs and the NMI handler returns
early.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/apic/ipi.c |   12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

--- a/arch/x86/kernel/apic/ipi.c
+++ b/arch/x86/kernel/apic/ipi.c
@@ -8,13 +8,7 @@
 DEFINE_STATIC_KEY_FALSE(apic_use_ipi_shorthand);
 
 #ifdef CONFIG_SMP
-#ifdef CONFIG_HOTPLUG_CPU
-#define DEFAULT_SEND_IPI	(1)
-#else
-#define DEFAULT_SEND_IPI	(0)
-#endif
-
-static int apic_ipi_shorthand_off __ro_after_init = DEFAULT_SEND_IPI;
+static int apic_ipi_shorthand_off __ro_after_init;
 
 static __init int apic_ipi_shorthand(char *str)
 {
@@ -250,7 +244,7 @@ void default_send_IPI_allbutself(int vec
 	if (num_online_cpus() < 2)
 		return;
 
-	if (apic_ipi_shorthand_off || vector == NMI_VECTOR) {
+	if (!static_branch_likely(&apic_use_ipi_shorthand)) {
 		apic->send_IPI_mask_allbutself(cpu_online_mask, vector);
 	} else {
 		__default_send_IPI_shortcut(APIC_DEST_ALLBUT, vector);
@@ -259,7 +253,7 @@ void default_send_IPI_allbutself(int vec
 
 void default_send_IPI_all(int vector)
 {
-	if (apic_ipi_shorthand_off || vector == NMI_VECTOR) {
+	if (!static_branch_likely(&apic_use_ipi_shorthand)) {
 		apic->send_IPI_mask(cpu_online_mask, vector);
 	} else {
 		__default_send_IPI_shortcut(APIC_DEST_ALLINC, vector);



^ permalink raw reply	[flat|nested] 23+ messages in thread

* [patch 17/18] x86/apic/flat64: Add conditional IPI shorthands support
  2019-07-03 10:54 [patch 00/18] x86/apic: Support for IPI shorthands Thomas Gleixner
                   ` (15 preceding siblings ...)
  2019-07-03 10:54 ` [patch 16/18] x86/apic: Convert 32bit to IPI shorthand static key Thomas Gleixner
@ 2019-07-03 10:54 ` Thomas Gleixner
  2019-07-03 10:54 ` [patch 18/18] x86/apic/x2apic: " Thomas Gleixner
  17 siblings, 0 replies; 23+ messages in thread
From: Thomas Gleixner @ 2019-07-03 10:54 UTC (permalink / raw)
  To: LKML; +Cc: x86, Nadav Amit, Ricardo Neri, Stephane Eranian, Feng Tang

Use the shorthand broadcast delivery if the static key controlling it is
enabled. If not, fall back to the regular one-by-one IPI mechanism.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/apic/apic_flat_64.c |   24 +++++++++++++++---------
 1 file changed, 15 insertions(+), 9 deletions(-)

--- a/arch/x86/kernel/apic/apic_flat_64.c
+++ b/arch/x86/kernel/apic/apic_flat_64.c
@@ -80,7 +80,9 @@ static void flat_send_IPI_allbutself(int
 {
 	int cpu = smp_processor_id();
 
-	if (IS_ENABLED(CONFIG_HOTPLUG_CPU) || vector == NMI_VECTOR) {
+	if (static_branch_likely(&apic_use_ipi_shorthand)) {
+		__default_send_IPI_shortcut(APIC_DEST_ALLBUT, vector);
+	} else {
 		if (!cpumask_equal(cpu_online_mask, cpumask_of(cpu))) {
 			unsigned long mask = cpumask_bits(cpu_online_mask)[0];
 
@@ -89,18 +91,15 @@ static void flat_send_IPI_allbutself(int
 
 			_flat_send_IPI_mask(mask, vector);
 		}
-	} else if (num_online_cpus() > 1) {
-		__default_send_IPI_shortcut(APIC_DEST_ALLBUT, vector);
 	}
 }
 
 static void flat_send_IPI_all(int vector)
 {
-	if (vector == NMI_VECTOR) {
-		flat_send_IPI_mask(cpu_online_mask, vector);
-	} else {
+	if (static_branch_likely(&apic_use_ipi_shorthand))
 		__default_send_IPI_shortcut(APIC_DEST_ALLINC, vector);
-	}
+	else
+		flat_send_IPI_mask(cpu_online_mask, vector);
 }
 
 static unsigned int flat_get_apic_id(unsigned long x)
@@ -218,12 +217,19 @@ static void physflat_init_apic_ldr(void)
 
 static void physflat_send_IPI_allbutself(int vector)
 {
-	default_send_IPI_mask_allbutself_phys(cpu_online_mask, vector);
+	if (static_branch_likely(&apic_use_ipi_shorthand)) {
+		__default_send_IPI_shortcut(APIC_DEST_ALLBUT, vector);
+	} else {
+		default_send_IPI_mask_allbutself_phys(cpu_online_mask, vector);
+	}
 }
 
 static void physflat_send_IPI_all(int vector)
 {
-	default_send_IPI_mask_sequence_phys(cpu_online_mask, vector);
+	if (static_branch_likely(&apic_use_ipi_shorthand))
+		__default_send_IPI_shortcut(APIC_DEST_ALLINC, vector);
+	else
+		default_send_IPI_mask_sequence_phys(cpu_online_mask, vector);
 }
 
 static int physflat_probe(void)



^ permalink raw reply	[flat|nested] 23+ messages in thread

* [patch 18/18] x86/apic/x2apic: Add conditional IPI shorthands support
  2019-07-03 10:54 [patch 00/18] x86/apic: Support for IPI shorthands Thomas Gleixner
                   ` (16 preceding siblings ...)
  2019-07-03 10:54 ` [patch 17/18] x86/apic/flat64: Add conditional IPI shorthands support Thomas Gleixner
@ 2019-07-03 10:54 ` Thomas Gleixner
  17 siblings, 0 replies; 23+ messages in thread
From: Thomas Gleixner @ 2019-07-03 10:54 UTC (permalink / raw)
  To: LKML; +Cc: x86, Nadav Amit, Ricardo Neri, Stephane Eranian, Feng Tang

Use the shorthand broadcast delivery if the static key controlling it is
enabled. If not, fall back to the regular one-by-one IPI mechanism.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/apic/local.h          |    1 +
 arch/x86/kernel/apic/x2apic_cluster.c |   10 ++++++++--
 arch/x86/kernel/apic/x2apic_phys.c    |   18 ++++++++++++++++--
 3 files changed, 25 insertions(+), 4 deletions(-)

--- a/arch/x86/kernel/apic/local.h
+++ b/arch/x86/kernel/apic/local.h
@@ -23,6 +23,7 @@ unsigned int x2apic_get_apic_id(unsigned
 u32 x2apic_set_apic_id(unsigned int id);
 int x2apic_phys_pkg_id(int initial_apicid, int index_msb);
 void x2apic_send_IPI_self(int vector);
+void __x2apic_send_IPI_shorthand(int vector, u32 which);
 
 /* IPI */
 
--- a/arch/x86/kernel/apic/x2apic_cluster.c
+++ b/arch/x86/kernel/apic/x2apic_cluster.c
@@ -82,12 +82,18 @@ x2apic_send_IPI_mask_allbutself(const st
 
 static void x2apic_send_IPI_allbutself(int vector)
 {
-	__x2apic_send_IPI_mask(cpu_online_mask, vector, APIC_DEST_ALLBUT);
+	if (static_branch_likely(&apic_use_ipi_shorthand))
+		__x2apic_send_IPI_shorthand(vector, APIC_DEST_ALLBUT);
+	else
+		__x2apic_send_IPI_mask(cpu_online_mask, vector, APIC_DEST_ALLBUT);
 }
 
 static void x2apic_send_IPI_all(int vector)
 {
-	__x2apic_send_IPI_mask(cpu_online_mask, vector, APIC_DEST_ALLINC);
+	if (static_branch_likely(&apic_use_ipi_shorthand))
+		__x2apic_send_IPI_shorthand(vector, APIC_DEST_ALLINC);
+	else
+		__x2apic_send_IPI_mask(cpu_online_mask, vector, APIC_DEST_ALLINC);
 }
 
 static u32 x2apic_calc_apicid(unsigned int cpu)
--- a/arch/x86/kernel/apic/x2apic_phys.c
+++ b/arch/x86/kernel/apic/x2apic_phys.c
@@ -75,12 +75,18 @@ static void
 
 static void x2apic_send_IPI_allbutself(int vector)
 {
-	__x2apic_send_IPI_mask(cpu_online_mask, vector, APIC_DEST_ALLBUT);
+	if (static_branch_likely(&apic_use_ipi_shorthand))
+		__x2apic_send_IPI_shorthand(vector, APIC_DEST_ALLBUT);
+	else
+		__x2apic_send_IPI_mask(cpu_online_mask, vector, APIC_DEST_ALLBUT);
 }
 
 static void x2apic_send_IPI_all(int vector)
 {
-	__x2apic_send_IPI_mask(cpu_online_mask, vector, APIC_DEST_ALLINC);
+	if (static_branch_likely(&apic_use_ipi_shorthand))
+		__x2apic_send_IPI_shorthand(vector, APIC_DEST_ALLINC);
+	else
+		__x2apic_send_IPI_mask(cpu_online_mask, vector, APIC_DEST_ALLINC);
 }
 
 static void init_x2apic_ldr(void)
@@ -112,6 +118,14 @@ void __x2apic_send_IPI_dest(unsigned int
 	native_x2apic_icr_write(cfg, apicid);
 }
 
+void __x2apic_send_IPI_shorthand(int vector, u32 which)
+{
+	unsigned long cfg = __prepare_ICR(which, vector, 0);
+
+	x2apic_wrmsr_fence();
+	native_x2apic_icr_write(cfg, 0);
+}
+
 unsigned int x2apic_get_apic_id(unsigned long id)
 {
 	return id;

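For reference, the value __x2apic_send_IPI_shorthand() writes to the ICR is a single 64-bit MSR payload. The sketch below models only the layout, assuming the SDM field positions: vector in bits 7:0, the destination shorthand in bits 19:18 (encoded here exactly as the kernel's APIC_DEST_* constants), and the (here unused) destination ID in the high 32 bits:

```c
#include <stdint.h>

#define APIC_DEST_ALLINC 0x80000u	/* shorthand: all including self */
#define APIC_DEST_ALLBUT 0xC0000u	/* shorthand: all excluding self */

/*
 * Sketch of the 64-bit value written to the x2APIC ICR MSR: the low
 * 32 bits carry the shorthand and vector, the high 32 bits the
 * destination APIC ID (ignored when a shorthand is used).
 */
static uint64_t x2apic_icr_value(uint32_t shorthand, uint32_t vector,
				 uint32_t dest)
{
	return ((uint64_t)dest << 32) | shorthand | (vector & 0xffu);
}
```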


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [patch 16/18] x86/apic: Convert 32bit to IPI shorthand static key
  2019-07-03 10:54 ` [patch 16/18] x86/apic: Convert 32bit to IPI shorthand static key Thomas Gleixner
@ 2019-07-03 18:06   ` Nadav Amit
  2019-07-03 20:34     ` Thomas Gleixner
  0 siblings, 1 reply; 23+ messages in thread
From: Nadav Amit @ 2019-07-03 18:06 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: LKML, x86, Ricardo Neri, Stephane Eranian, Feng Tang

> On Jul 3, 2019, at 3:54 AM, Thomas Gleixner <tglx@linutronix.de> wrote:
> 
> Broadcast now depends on the fact that all present CPUs have been booted
> at least once, so handling broadcast IPIs does no harm. If a CPU is
> offline, it does not react to regular IPIs and the NMI handler returns
> early.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
> arch/x86/kernel/apic/ipi.c |   12 +++---------
> 1 file changed, 3 insertions(+), 9 deletions(-)
> 
> --- a/arch/x86/kernel/apic/ipi.c
> +++ b/arch/x86/kernel/apic/ipi.c
> @@ -8,13 +8,7 @@
> DEFINE_STATIC_KEY_FALSE(apic_use_ipi_shorthand);
> 
> #ifdef CONFIG_SMP
> -#ifdef CONFIG_HOTPLUG_CPU
> -#define DEFAULT_SEND_IPI	(1)
> -#else
> -#define DEFAULT_SEND_IPI	(0)
> -#endif
> -
> -static int apic_ipi_shorthand_off __ro_after_init = DEFAULT_SEND_IPI;
> +static int apic_ipi_shorthand_off __ro_after_init;
> 
> static __init int apic_ipi_shorthand(char *str)
> {
> @@ -250,7 +244,7 @@ void default_send_IPI_allbutself(int vec
> 	if (num_online_cpus() < 2)
> 		return;
> 
> -	if (apic_ipi_shorthand_off || vector == NMI_VECTOR) {
> +	if (!static_branch_likely(&apic_use_ipi_shorthand)) {
> 		apic->send_IPI_mask_allbutself(cpu_online_mask, vector);
> 	} else {
> 		__default_send_IPI_shortcut(APIC_DEST_ALLBUT, vector);
> @@ -259,7 +253,7 @@ void default_send_IPI_allbutself(int vec
> 
> void default_send_IPI_all(int vector)
> {
> -	if (apic_ipi_shorthand_off || vector == NMI_VECTOR) {
> +	if (!static_branch_likely(&apic_use_ipi_shorthand)) {
> 		apic->send_IPI_mask(cpu_online_mask, vector);
> 	} else {
> 		__default_send_IPI_shortcut(APIC_DEST_ALLINC, vector);

It may be better to check the static-key in native_send_call_func_ipi() (and
other callers if there are any), and remove all the other checks in
default_send_IPI_all(), x2apic_send_IPI_mask_allbutself(), etc.

This would allow removing potentially unnecessary checks in
native_send_call_func_ipi(). I also have a patch, which I have not sent
yet, that slightly improves the test in native_send_call_func_ipi().

-- >8 --

From: Nadav Amit <namit@vmware.com>
Date: Fri, 7 Jun 2019 15:11:44 -0700
Subject: [PATCH] x86/smp: Better check of allbutself

Introduce for_each_cpu_and_not() for this purpose.

Signed-off-by: Nadav Amit <namit@vmware.com>
---
 arch/x86/kernel/smp.c             | 22 +++++++++++-----------
 include/asm-generic/bitops/find.h | 17 +++++++++++++++++
 include/linux/cpumask.h           | 17 +++++++++++++++++
 lib/cpumask.c                     | 20 ++++++++++++++++++++
 lib/find_bit.c                    | 21 ++++++++++++++++++---
 5 files changed, 83 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
index 96421f97e75c..7972ab593397 100644
--- a/arch/x86/kernel/smp.c
+++ b/arch/x86/kernel/smp.c
@@ -136,23 +136,23 @@ void native_send_call_func_single_ipi(int cpu)
 
 void native_send_call_func_ipi(const struct cpumask *mask)
 {
-	cpumask_var_t allbutself;
-
-	if (!alloc_cpumask_var(&allbutself, GFP_ATOMIC)) {
-		apic->send_IPI_mask(mask, CALL_FUNCTION_VECTOR);
-		return;
+	int cpu, this_cpu = smp_processor_id();
+	bool allbutself = true;
+	bool self = false;
+
+	for_each_cpu_and_not(cpu, cpu_online_mask, mask) {
+		if (cpu != this_cpu) {
+			allbutself = false;
+			break;
+		}
+		self = true;
 	}
 
-	cpumask_copy(allbutself, cpu_online_mask);
-	__cpumask_clear_cpu(smp_processor_id(), allbutself);
-
-	if (cpumask_equal(mask, allbutself) &&
+	if (allbutself && self &&
 	    cpumask_equal(cpu_online_mask, cpu_callout_mask))
 		apic->send_IPI_allbutself(CALL_FUNCTION_VECTOR);
 	else
 		apic->send_IPI_mask(mask, CALL_FUNCTION_VECTOR);
-
-	free_cpumask_var(allbutself);
 }
 
 static int smp_stop_nmi_callback(unsigned int val, struct pt_regs *regs)
diff --git a/include/asm-generic/bitops/find.h b/include/asm-generic/bitops/find.h
index 8a1ee10014de..e5f19eec2737 100644
--- a/include/asm-generic/bitops/find.h
+++ b/include/asm-generic/bitops/find.h
@@ -32,6 +32,23 @@ extern unsigned long find_next_and_bit(const unsigned long *addr1,
 		unsigned long offset);
 #endif
 
+#ifndef find_next_and_not_bit
+/**
+ * find_next_and_not_bit - find the next set bit in the first region
+ *			   which is clear in the second
+ * @addr1: The first address to base the search on
+ * @addr2: The second address to base the search on
+ * @size: The bitmap size in bits
+ * @offset: The bit number to start searching at
+ *
+ * Returns the bit number for the next set bit
+ * If no bits are set, returns @size.
+ */
+extern unsigned long find_next_and_not_bit(const unsigned long *addr1,
+		const unsigned long *addr2, unsigned long size,
+		unsigned long offset);
+#endif
+
 #ifndef find_next_zero_bit
 /**
  * find_next_zero_bit - find the next cleared bit in a memory region
diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
index 693124900f0a..4648add54fad 100644
--- a/include/linux/cpumask.h
+++ b/include/linux/cpumask.h
@@ -229,6 +229,7 @@ static inline unsigned int cpumask_next_zero(int n, const struct cpumask *srcp)
 }
 
 int cpumask_next_and(int n, const struct cpumask *, const struct cpumask *);
+int __pure cpumask_next_and_not(int n, const struct cpumask *, const struct cpumask *);
 int cpumask_any_but(const struct cpumask *mask, unsigned int cpu);
 unsigned int cpumask_local_spread(unsigned int i, int node);
 
@@ -291,6 +292,22 @@ extern int cpumask_next_wrap(int n, const struct cpumask *mask, int start, bool
 	for ((cpu) = -1;						\
 		(cpu) = cpumask_next_and((cpu), (mask), (and)),		\
 		(cpu) < nr_cpu_ids;)
+
+
+/**
+ * for_each_cpu_and_not - iterate over every cpu in @mask & ~@and_not
+ * @cpu: the (optionally unsigned) integer iterator
+ * @mask: the first cpumask pointer
+ * @and_not: the second cpumask pointer
+ *
+ * After the loop, cpu is >= nr_cpu_ids.
+ */
+#define for_each_cpu_and_not(cpu, mask, and_not)			\
+	for ((cpu) = -1;						\
+		(cpu) = cpumask_next_and_not((cpu), (mask), (and_not)),	\
+		(cpu) < nr_cpu_ids;)
+
+
 #endif /* SMP */
 
 #define CPU_BITS_NONE						\
diff --git a/lib/cpumask.c b/lib/cpumask.c
index 0cb672eb107c..59c98f55c308 100644
--- a/lib/cpumask.c
+++ b/lib/cpumask.c
@@ -42,6 +42,26 @@ int cpumask_next_and(int n, const struct cpumask *src1p,
 }
 EXPORT_SYMBOL(cpumask_next_and);
 
+/**
+ * cpumask_next_and_not - get the next cpu in *src1p & ~*src2p
+ * @n: the cpu prior to the place to search (ie. return will be > @n)
+ * @src1p: the first cpumask pointer
+ * @src2p: the second cpumask pointer
+ *
+ * Returns >= nr_cpu_ids if no further cpus are set in *src1p & ~*src2p.
+ */
+int cpumask_next_and_not(int n, const struct cpumask *src1p,
+		     const struct cpumask *src2p)
+{
+	/* -1 is a legal arg here. */
+	if (n != -1)
+		cpumask_check(n);
+	return find_next_and_not_bit(cpumask_bits(src1p), cpumask_bits(src2p),
+		nr_cpumask_bits, n + 1);
+}
+EXPORT_SYMBOL(cpumask_next_and_not);
+
+
 /**
  * cpumask_any_but - return a "random" in a cpumask, but not this one.
  * @mask: the cpumask to search
diff --git a/lib/find_bit.c b/lib/find_bit.c
index 5c51eb45178a..7bd2c567287e 100644
--- a/lib/find_bit.c
+++ b/lib/find_bit.c
@@ -23,7 +23,7 @@
 /*
  * This is a common helper function for find_next_bit, find_next_zero_bit, and
  * find_next_and_bit. The differences are:
- *  - The "invert" argument, which is XORed with each fetched word before
+ *  - The "invert" argument, which is XORed with each fetched first word before
  *    searching it for one bits.
  *  - The optional "addr2", which is anded with "addr1" if present.
  */
@@ -37,9 +37,9 @@ static inline unsigned long _find_next_bit(const unsigned long *addr1,
 		return nbits;
 
 	tmp = addr1[start / BITS_PER_LONG];
+	tmp ^= invert;
 	if (addr2)
 		tmp &= addr2[start / BITS_PER_LONG];
-	tmp ^= invert;
 
 	/* Handle 1st word. */
 	tmp &= BITMAP_FIRST_WORD_MASK(start);
@@ -51,9 +51,9 @@ static inline unsigned long _find_next_bit(const unsigned long *addr1,
 			return nbits;
 
 		tmp = addr1[start / BITS_PER_LONG];
+		tmp ^= invert;
 		if (addr2)
 			tmp &= addr2[start / BITS_PER_LONG];
-		tmp ^= invert;
 	}
 
 	return min(start + __ffs(tmp), nbits);
@@ -91,6 +91,21 @@ unsigned long find_next_and_bit(const unsigned long *addr1,
 EXPORT_SYMBOL(find_next_and_bit);
 #endif
 
+#if !defined(find_next_and_not_bit)
+unsigned long find_next_and_not_bit(const unsigned long *addr1,
+		const unsigned long *addr2, unsigned long size,
+		unsigned long offset)
+{
+	/*
+	 * Switching addr1 and addr2, since the first argument is the one that
+	 * will be inverted.
+	 */
+	return _find_next_bit(addr2, addr1, size, offset, ~0UL);
+}
+EXPORT_SYMBOL(find_next_and_not_bit);
+#endif
+
+
 #ifndef find_first_bit
 /*
  * Find the first set bit in a memory region.
-- 
2.20.1
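The proposed helper can be modeled on a single 64-bit word to illustrate why the allbutself test needs at most two probes. This is a simplified stand-in (no multi-word bitmaps, and it assumes the mask contains only online CPUs, as the kernel callers guarantee), not the kernel implementation:

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Single-word model of find_next_and_not_bit(): the next bit set in m1
 * and clear in m2, at or after 'from'; returns 64 if there is none.
 */
static unsigned int next_and_not_bit(uint64_t m1, uint64_t m2,
				     unsigned int from)
{
	uint64_t word = m1 & ~m2;

	word &= (from >= 64) ? 0 : ~0ULL << from;
	return word ? (unsigned int)__builtin_ctzll(word) : 64;
}

/*
 * mask equals online-minus-this_cpu iff the only online CPU missing
 * from mask is this_cpu itself: probe one (must find this_cpu) and
 * probe two (must find nothing).
 */
static bool is_allbutself(uint64_t online, uint64_t mask,
			  unsigned int this_cpu)
{
	unsigned int cpu = next_and_not_bit(online, mask, 0);

	if (cpu != this_cpu)
		return false;
	return next_and_not_bit(online, mask, cpu + 1) == 64;
}
```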

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* Re: [patch 16/18] x86/apic: Convert 32bit to IPI shorthand static key
  2019-07-03 18:06   ` Nadav Amit
@ 2019-07-03 20:34     ` Thomas Gleixner
  2019-07-03 21:14       ` Nadav Amit
  0 siblings, 1 reply; 23+ messages in thread
From: Thomas Gleixner @ 2019-07-03 20:34 UTC (permalink / raw)
  To: Nadav Amit; +Cc: LKML, x86, Ricardo Neri, Stephane Eranian, Feng Tang

Nadav,

On Wed, 3 Jul 2019, Nadav Amit wrote:
> > On Jul 3, 2019, at 3:54 AM, Thomas Gleixner <tglx@linutronix.de> wrote:
> > void default_send_IPI_all(int vector)
> > {
> > -	if (apic_ipi_shorthand_off || vector == NMI_VECTOR) {
> > +	if (!static_branch_likely(&apic_use_ipi_shorthand)) {
> > 		apic->send_IPI_mask(cpu_online_mask, vector);
> > 	} else {
> > 		__default_send_IPI_shortcut(APIC_DEST_ALLINC, vector);
> 
> It may be better to check the static-key in native_send_call_func_ipi() (and
> other callers if there are any), and remove all the other checks in
> default_send_IPI_all(), x2apic_send_IPI_mask_allbutself(), etc.

That makes sense. Should have thought about that myself, but hunting that
APIC emulation issue was affecting my brain obviously :)
 
>  void native_send_call_func_ipi(const struct cpumask *mask)
>  {
> -	cpumask_var_t allbutself;
> -
> -	if (!alloc_cpumask_var(&allbutself, GFP_ATOMIC)) {
> -		apic->send_IPI_mask(mask, CALL_FUNCTION_VECTOR);
> -		return;
> +	int cpu, this_cpu = smp_processor_id();
> +	bool allbutself = true;
> +	bool self = false;
> +
> +	for_each_cpu_and_not(cpu, cpu_online_mask, mask) {
> +
> +		if (cpu != this_cpu) {
> +			allbutself = false;
> +			break;
> +		}
> +		self = true;

That accumulates to a large iteration in the worst case. 

>  	}
>  
> -	cpumask_copy(allbutself, cpu_online_mask);
> -	__cpumask_clear_cpu(smp_processor_id(), allbutself);
> -
> -	if (cpumask_equal(mask, allbutself) &&
> +	if (allbutself && self &&
>  	    cpumask_equal(cpu_online_mask, cpu_callout_mask))

Hmm. I overlooked that one. Need to take a deeper look.

>  		apic->send_IPI_allbutself(CALL_FUNCTION_VECTOR);
>  	else
>  		apic->send_IPI_mask(mask, CALL_FUNCTION_VECTOR);

Let me think about it for a while.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [patch 16/18] x86/apic: Convert 32bit to IPI shorthand static key
  2019-07-03 20:34     ` Thomas Gleixner
@ 2019-07-03 21:14       ` Nadav Amit
  2019-07-03 21:30         ` Thomas Gleixner
  0 siblings, 1 reply; 23+ messages in thread
From: Nadav Amit @ 2019-07-03 21:14 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: LKML, x86, Ricardo Neri, Stephane Eranian, Feng Tang

> On Jul 3, 2019, at 1:34 PM, Thomas Gleixner <tglx@linutronix.de> wrote:
> 
> Nadav,
> 
> On Wed, 3 Jul 2019, Nadav Amit wrote:
>>> On Jul 3, 2019, at 3:54 AM, Thomas Gleixner <tglx@linutronix.de> wrote:
>>> void default_send_IPI_all(int vector)
>>> {
>>> -	if (apic_ipi_shorthand_off || vector == NMI_VECTOR) {
>>> +	if (!static_branch_likely(&apic_use_ipi_shorthand)) {
>>> 		apic->send_IPI_mask(cpu_online_mask, vector);
>>> 	} else {
>>> 		__default_send_IPI_shortcut(APIC_DEST_ALLINC, vector);
>> 
>> It may be better to check the static-key in native_send_call_func_ipi() (and
>> other callers if there are any), and remove all the other checks in
>> default_send_IPI_all(), x2apic_send_IPI_mask_allbutself(), etc.
> 
> That makes sense. Should have thought about that myself, but hunting that
> APIC emulation issue was affecting my brain obviously :)

Well, if you used VMware and not KVM... ;-)

(allow me to reemphasize that I am joking and save myself from spam)

>> void native_send_call_func_ipi(const struct cpumask *mask)
>> {
>> -	cpumask_var_t allbutself;
>> -
>> -	if (!alloc_cpumask_var(&allbutself, GFP_ATOMIC)) {
>> -		apic->send_IPI_mask(mask, CALL_FUNCTION_VECTOR);
>> -		return;
>> +	int cpu, this_cpu = smp_processor_id();
>> +	bool allbutself = true;
>> +	bool self = false;
>> +
>> +	for_each_cpu_and_not(cpu, cpu_online_mask, mask) {
>> +
>> +		if (cpu != this_cpu) {
>> +			allbutself = false;
>> +			break;
>> +		}
>> +		self = true;
> 
> That accumulates to a large iteration in the worst case. 

I don’t understand why. There should be at most two iterations - one for
self and one for another core. So _find_next_bit() will be called at most
twice. _find_next_bit() has its own loop, but I don’t think overall it is as
bad as calling alloc_cpumask_var(), cpumask_copy() and cpumask_equal(),
which also have loops.

I don’t have numbers (and I doubt they are very significant), but the cpumask
allocation showed up when I was profiling my microbenchmark.


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [patch 16/18] x86/apic: Convert 32bit to IPI shorthand static key
  2019-07-03 21:14       ` Nadav Amit
@ 2019-07-03 21:30         ` Thomas Gleixner
  0 siblings, 0 replies; 23+ messages in thread
From: Thomas Gleixner @ 2019-07-03 21:30 UTC (permalink / raw)
  To: Nadav Amit; +Cc: LKML, x86, Ricardo Neri, Stephane Eranian, Feng Tang

[-- Attachment #1: Type: text/plain, Size: 2224 bytes --]

On Wed, 3 Jul 2019, Nadav Amit wrote:
> > On Jul 3, 2019, at 1:34 PM, Thomas Gleixner <tglx@linutronix.de> wrote:
> > On Wed, 3 Jul 2019, Nadav Amit wrote:
> >>> On Jul 3, 2019, at 3:54 AM, Thomas Gleixner <tglx@linutronix.de> wrote:
> >>> void default_send_IPI_all(int vector)
> >>> {
> >>> -	if (apic_ipi_shorthand_off || vector == NMI_VECTOR) {
> >>> +	if (!static_branch_likely(&apic_use_ipi_shorthand)) {
> >>> 		apic->send_IPI_mask(cpu_online_mask, vector);
> >>> 	} else {
> >>> 		__default_send_IPI_shortcut(APIC_DEST_ALLINC, vector);
> >> 
> >> It may be better to check the static-key in native_send_call_func_ipi() (and
> >> other callers if there are any), and remove all the other checks in
> >> default_send_IPI_all(), x2apic_send_IPI_mask_allbutself(), etc.
> > 
> > That makes sense. Should have thought about that myself, but hunting that
> > APIC emulation issue was affecting my brain obviously :)
> 
> Well, if you used VMware and not KVM... ;-)

Then I would have hunted some other bug probably :)
 
> >> void native_send_call_func_ipi(const struct cpumask *mask)
> >> {
> >> -	cpumask_var_t allbutself;
> >> -
> >> -	if (!alloc_cpumask_var(&allbutself, GFP_ATOMIC)) {
> >> -		apic->send_IPI_mask(mask, CALL_FUNCTION_VECTOR);
> >> -		return;
> >> +	int cpu, this_cpu = smp_processor_id();
> >> +	bool allbutself = true;
> >> +	bool self = false;
> >> +
> >> +	for_each_cpu_and_not(cpu, cpu_online_mask, mask) {
> >> +
> >> +		if (cpu != this_cpu) {
> >> +			allbutself = false;
> >> +			break;
> >> +		}
> >> +		self = true;
> > 
> > That accumulates to a large iteration in the worst case. 
> 
> I don’t understand why. There should be at most two iterations - one for
> self and one for another core. So _find_next_bit() will be called at most
> twice. _find_next_bit() has its own loop, but I don’t think overall it is as

Indeed, I misread the code and you are right, the bit search should be fast.

> bad as calling alloc_cpumask_var(), cpumask_copy() and cpumask_equal(),
> which also have loops.
>
> I don’t have numbers (and I doubt they are very significant), but the cpumask
> allocation showed up when I was profiling my microbenchmark.

Yes, that alloc/free part is completely bogus.

Thanks,

	tglx


Thread overview: 23+ messages
2019-07-03 10:54 [patch 00/18] x86/apic: Support for IPI shorthands Thomas Gleixner
2019-07-03 10:54 ` [patch 01/18] x86/apic: Invoke perf_events_lapic_init() after enabling APIC Thomas Gleixner
2019-07-03 10:54 ` [patch 02/18] x86/apic: Soft disable APIC before initializing it Thomas Gleixner
2019-07-03 10:54 ` [patch 03/18] x86/apic: Make apic_pending_intr_clear() more robust Thomas Gleixner
2019-07-03 10:54 ` [patch 04/18] x86/apic: Move IPI inlines into ipi.c Thomas Gleixner
2019-07-03 10:54 ` [patch 05/18] x86/apic: Cleanup the include maze Thomas Gleixner
2019-07-03 10:54 ` [patch 06/18] x86/apic: Move ipi header into apic directory Thomas Gleixner
2019-07-03 10:54 ` [patch 07/18] x86/apic: Move apic_flat_64 " Thomas Gleixner
2019-07-03 10:54 ` [patch 08/18] x86/apic: Consolidate the apic local headers Thomas Gleixner
2019-07-03 10:54 ` [patch 09/18] smp/hotplug: Track booted once CPUs in a cpumask Thomas Gleixner
2019-07-03 10:54 ` [patch 10/18] x86/cpu: Move arch_smt_update() to a neutral place Thomas Gleixner
2019-07-03 10:54 ` [patch 11/18] x86/hotplug: Silence APIC and NMI when CPU is dead Thomas Gleixner
2019-07-03 10:54 ` [patch 12/18] x86/apic: Remove dest argument from __default_send_IPI_shortcut() Thomas Gleixner
2019-07-03 10:54 ` [patch 13/18] x86/apic: Add NMI_VECTOR wait to IPI shorthand Thomas Gleixner
2019-07-03 10:54 ` [patch 14/18] x86/apic: Move no_ipi_broadcast() out of 32bit Thomas Gleixner
2019-07-03 10:54 ` [patch 15/18] x86/apic: Add static key to Control IPI shorthands Thomas Gleixner
2019-07-03 10:54 ` [patch 16/18] x86/apic: Convert 32bit to IPI shorthand static key Thomas Gleixner
2019-07-03 18:06   ` Nadav Amit
2019-07-03 20:34     ` Thomas Gleixner
2019-07-03 21:14       ` Nadav Amit
2019-07-03 21:30         ` Thomas Gleixner
2019-07-03 10:54 ` [patch 17/18] x86/apic/flat64: Add conditional IPI shorthands support Thomas Gleixner
2019-07-03 10:54 ` [patch 18/18] x86/apic/x2apic: " Thomas Gleixner
