* [PATCH v2 0/5] arm64: pseudo-nmi: elide code when CONFIG_ARM64_PSEUDO_NMI=n
@ 2023-01-25 16:38 Mark Rutland
  2023-01-25 16:38 ` [PATCH v2 1/5] arm64: rename ARM64_HAS_SYSREG_GIC_CPUIF to ARM64_HAS_GIC_CPUIF_SYSREGS Mark Rutland
                   ` (5 more replies)
  0 siblings, 6 replies; 13+ messages in thread
From: Mark Rutland @ 2023-01-25 16:38 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: broonie, catalin.marinas, mark.rutland, maz, will

This series addresses a couple of sub-optimal code generation issues with
arm64's pseudo-nmi support code:

* Even when CONFIG_ARM64_PSEUDO_NMI=n, we generate alternative code
  sequences and alt_instr entries which will never be used. This series
  reworks the irqflags code to use alternative branches (with an
  IS_ENABLED() check), which allows the alternatives to be elided when
  CONFIG_ARM64_PSEUDO_NMI=n.

* When PMHE is enabled in HW, we must synchronize PMR updates using a
  DSB SY. We take pains to avoid this using a static key to skip the
  barrier when PMHE is not in use, but this results in unnecessarily
  branchy code. This series replaces the static key with an alternative,
  allowing the DSB SY to be relaxed to a NOP.

These changes make a defconfig kernel a little smaller, and do not
adversely affect the size of a CONFIG_ARM64_PSEUDO_NMI=y kernel. The
structural changes will also make it easier for a subsequent series to
rework the irqflag and daifflag management, addressing some
long-standing edge cases and preparing for ARMv8.8-A's FEAT_NMI.

I've tested this series under a QEMU KVM VM on a ThunderX2 host, and a
QEMU TCG VM on an x86_64 host. I've tested with and without pseudo-NMI
support enabled, and with pseudo-NMI debug and lockdep enabled, using
perf record in system-wide mode.

Since v1 [1]:
* Rename ARM64_HAS_GIC_PRIO_NO_PMHE to ARM64_HAS_GIC_PRIO_RELAXED_SYNC
* Add explanatory comments for cpucap dependencies
* Add patch making ARM64_HAS_GIC_PRIO_MASKING depend on
  ARM64_HAS_GIC_CPUIF_SYSREGS

[1] https://lore.kernel.org/linux-arm-kernel/20230123124042.718743-1-mark.rutland@arm.com/

Thanks,
Mark.

Mark Rutland (5):
  arm64: rename ARM64_HAS_SYSREG_GIC_CPUIF to
    ARM64_HAS_GIC_CPUIF_SYSREGS
  arm64: rename ARM64_HAS_IRQ_PRIO_MASKING to ARM64_HAS_GIC_PRIO_MASKING
  arm64: make ARM64_HAS_GIC_PRIO_MASKING depend on
    ARM64_HAS_GIC_CPUIF_SYSREGS
  arm64: add ARM64_HAS_GIC_PRIO_RELAXED_SYNC cpucap
  arm64: irqflags: use alternative branches for pseudo-NMI logic

 arch/arm/include/asm/arch_gicv3.h   |   5 +
 arch/arm64/include/asm/arch_gicv3.h |   5 +
 arch/arm64/include/asm/barrier.h    |  11 +-
 arch/arm64/include/asm/cpufeature.h |   2 +-
 arch/arm64/include/asm/irqflags.h   | 183 +++++++++++++++++++---------
 arch/arm64/include/asm/ptrace.h     |   2 +-
 arch/arm64/kernel/cpufeature.c      |  51 ++++++--
 arch/arm64/kernel/entry.S           |  25 ++--
 arch/arm64/kernel/image-vars.h      |   2 -
 arch/arm64/tools/cpucaps            |   5 +-
 drivers/irqchip/irq-gic-v3.c        |  19 +--
 drivers/irqchip/irq-gic.c           |   2 +-
 12 files changed, 207 insertions(+), 105 deletions(-)

-- 
2.30.2


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH v2 1/5] arm64: rename ARM64_HAS_SYSREG_GIC_CPUIF to ARM64_HAS_GIC_CPUIF_SYSREGS
  2023-01-25 16:38 [PATCH v2 0/5] arm64: pseudo-nmi: elide code when CONFIG_ARM64_PSEUDO_NMI=n Mark Rutland
@ 2023-01-25 16:38 ` Mark Rutland
  2023-01-25 16:38 ` [PATCH v2 2/5] arm64: rename ARM64_HAS_IRQ_PRIO_MASKING to ARM64_HAS_GIC_PRIO_MASKING Mark Rutland
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 13+ messages in thread
From: Mark Rutland @ 2023-01-25 16:38 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: broonie, catalin.marinas, mark.rutland, maz, will

Subsequent patches will add more GIC-related cpucaps. When we do so, it
would be nice to give them a consistent HAS_GIC_* prefix.

In preparation for doing so, this patch renames the existing
ARM64_HAS_SYSREG_GIC_CPUIF cap to ARM64_HAS_GIC_CPUIF_SYSREGS.

The 'CPUIF_SYSREGS' suffix is chosen so that this will be ordered ahead
of other ARM64_HAS_GIC_* definitions in subsequent patches.

The cpucaps file was hand-modified; all other changes were scripted
with:

  find . -type f -name '*.[chS]' -print0 | \
    xargs -0 sed -i 's/ARM64_HAS_SYSREG_GIC_CPUIF/ARM64_HAS_GIC_CPUIF_SYSREGS/'

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/cpufeature.c | 2 +-
 arch/arm64/tools/cpucaps       | 2 +-
 drivers/irqchip/irq-gic.c      | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index a77315b338e61..ad2a1f5503f33 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2142,7 +2142,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 	},
 	{
 		.desc = "GIC system register CPU interface",
-		.capability = ARM64_HAS_SYSREG_GIC_CPUIF,
+		.capability = ARM64_HAS_GIC_CPUIF_SYSREGS,
 		.type = ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE,
 		.matches = has_useable_gicv3_cpuif,
 		.sys_reg = SYS_ID_AA64PFR0_EL1,
diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
index a86ee376920a0..373eb148498e1 100644
--- a/arch/arm64/tools/cpucaps
+++ b/arch/arm64/tools/cpucaps
@@ -28,6 +28,7 @@ HAS_GENERIC_AUTH
 HAS_GENERIC_AUTH_ARCH_QARMA3
 HAS_GENERIC_AUTH_ARCH_QARMA5
 HAS_GENERIC_AUTH_IMP_DEF
+HAS_GIC_CPUIF_SYSREGS
 HAS_IRQ_PRIO_MASKING
 HAS_LDAPR
 HAS_LSE_ATOMICS
@@ -38,7 +39,6 @@ HAS_RAS_EXTN
 HAS_RNG
 HAS_SB
 HAS_STAGE2_FWB
-HAS_SYSREG_GIC_CPUIF
 HAS_TIDCP1
 HAS_TLB_RANGE
 HAS_VIRT_HOST_EXTN
diff --git a/drivers/irqchip/irq-gic.c b/drivers/irqchip/irq-gic.c
index 210bc2f4d5550..6ae697a3800d5 100644
--- a/drivers/irqchip/irq-gic.c
+++ b/drivers/irqchip/irq-gic.c
@@ -54,7 +54,7 @@
 
 static void gic_check_cpu_features(void)
 {
-	WARN_TAINT_ONCE(this_cpu_has_cap(ARM64_HAS_SYSREG_GIC_CPUIF),
+	WARN_TAINT_ONCE(this_cpu_has_cap(ARM64_HAS_GIC_CPUIF_SYSREGS),
 			TAINT_CPU_OUT_OF_SPEC,
 			"GICv3 system registers enabled, broken firmware!\n");
 }
-- 
2.30.2



^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH v2 2/5] arm64: rename ARM64_HAS_IRQ_PRIO_MASKING to ARM64_HAS_GIC_PRIO_MASKING
  2023-01-25 16:38 [PATCH v2 0/5] arm64: pseudo-nmi: elide code when CONFIG_ARM64_PSEUDO_NMI=n Mark Rutland
  2023-01-25 16:38 ` [PATCH v2 1/5] arm64: rename ARM64_HAS_SYSREG_GIC_CPUIF to ARM64_HAS_GIC_CPUIF_SYSREGS Mark Rutland
@ 2023-01-25 16:38 ` Mark Rutland
  2023-01-25 16:38 ` [PATCH v2 3/5] arm64: make ARM64_HAS_GIC_PRIO_MASKING depend on ARM64_HAS_GIC_CPUIF_SYSREGS Mark Rutland
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 13+ messages in thread
From: Mark Rutland @ 2023-01-25 16:38 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: broonie, catalin.marinas, mark.rutland, maz, will

Subsequent patches will add more GIC-related cpucaps. When we do so, it
would be nice to give them a consistent HAS_GIC_* prefix.

In preparation for doing so, this patch renames the existing
ARM64_HAS_IRQ_PRIO_MASKING cap to ARM64_HAS_GIC_PRIO_MASKING.

The cpucaps file was hand-modified; all other changes were scripted
with:

  find . -type f -name '*.[chS]' -print0 | \
    xargs -0 sed -i 's/ARM64_HAS_IRQ_PRIO_MASKING/ARM64_HAS_GIC_PRIO_MASKING/'

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/cpufeature.h |  2 +-
 arch/arm64/include/asm/irqflags.h   | 10 +++++-----
 arch/arm64/include/asm/ptrace.h     |  2 +-
 arch/arm64/kernel/cpufeature.c      |  2 +-
 arch/arm64/kernel/entry.S           |  4 ++--
 arch/arm64/tools/cpucaps            |  2 +-
 6 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 03d1c9d7af821..c50928398e4ba 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -806,7 +806,7 @@ static inline bool system_has_full_ptr_auth(void)
 static __always_inline bool system_uses_irq_prio_masking(void)
 {
 	return IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) &&
-	       cpus_have_const_cap(ARM64_HAS_IRQ_PRIO_MASKING);
+	       cpus_have_const_cap(ARM64_HAS_GIC_PRIO_MASKING);
 }
 
 static inline bool system_supports_mte(void)
diff --git a/arch/arm64/include/asm/irqflags.h b/arch/arm64/include/asm/irqflags.h
index b57b9b1e43448..f51653fb90e43 100644
--- a/arch/arm64/include/asm/irqflags.h
+++ b/arch/arm64/include/asm/irqflags.h
@@ -35,7 +35,7 @@ static inline void arch_local_irq_enable(void)
 	asm volatile(ALTERNATIVE(
 		"msr	daifclr, #3		// arch_local_irq_enable",
 		__msr_s(SYS_ICC_PMR_EL1, "%0"),
-		ARM64_HAS_IRQ_PRIO_MASKING)
+		ARM64_HAS_GIC_PRIO_MASKING)
 		:
 		: "r" ((unsigned long) GIC_PRIO_IRQON)
 		: "memory");
@@ -54,7 +54,7 @@ static inline void arch_local_irq_disable(void)
 	asm volatile(ALTERNATIVE(
 		"msr	daifset, #3		// arch_local_irq_disable",
 		__msr_s(SYS_ICC_PMR_EL1, "%0"),
-		ARM64_HAS_IRQ_PRIO_MASKING)
+		ARM64_HAS_GIC_PRIO_MASKING)
 		:
 		: "r" ((unsigned long) GIC_PRIO_IRQOFF)
 		: "memory");
@@ -70,7 +70,7 @@ static inline unsigned long arch_local_save_flags(void)
 	asm volatile(ALTERNATIVE(
 		"mrs	%0, daif",
 		__mrs_s("%0", SYS_ICC_PMR_EL1),
-		ARM64_HAS_IRQ_PRIO_MASKING)
+		ARM64_HAS_GIC_PRIO_MASKING)
 		: "=&r" (flags)
 		:
 		: "memory");
@@ -85,7 +85,7 @@ static inline int arch_irqs_disabled_flags(unsigned long flags)
 	asm volatile(ALTERNATIVE(
 		"and	%w0, %w1, #" __stringify(PSR_I_BIT),
 		"eor	%w0, %w1, #" __stringify(GIC_PRIO_IRQON),
-		ARM64_HAS_IRQ_PRIO_MASKING)
+		ARM64_HAS_GIC_PRIO_MASKING)
 		: "=&r" (res)
 		: "r" ((int) flags)
 		: "memory");
@@ -122,7 +122,7 @@ static inline void arch_local_irq_restore(unsigned long flags)
 	asm volatile(ALTERNATIVE(
 		"msr	daif, %0",
 		__msr_s(SYS_ICC_PMR_EL1, "%0"),
-		ARM64_HAS_IRQ_PRIO_MASKING)
+		ARM64_HAS_GIC_PRIO_MASKING)
 		:
 		: "r" (flags)
 		: "memory");
diff --git a/arch/arm64/include/asm/ptrace.h b/arch/arm64/include/asm/ptrace.h
index 41b332c054ab8..47ec58031f11b 100644
--- a/arch/arm64/include/asm/ptrace.h
+++ b/arch/arm64/include/asm/ptrace.h
@@ -194,7 +194,7 @@ struct pt_regs {
 	u32 unused2;
 #endif
 	u64 sdei_ttbr1;
-	/* Only valid when ARM64_HAS_IRQ_PRIO_MASKING is enabled. */
+	/* Only valid when ARM64_HAS_GIC_PRIO_MASKING is enabled. */
 	u64 pmr_save;
 	u64 stackframe[2];
 
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index ad2a1f5503f33..afd547a5309c7 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2534,7 +2534,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		 * Depends on having GICv3
 		 */
 		.desc = "IRQ priority masking",
-		.capability = ARM64_HAS_IRQ_PRIO_MASKING,
+		.capability = ARM64_HAS_GIC_PRIO_MASKING,
 		.type = ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE,
 		.matches = can_use_gic_priorities,
 		.sys_reg = SYS_ID_AA64PFR0_EL1,
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 11cb99c4d2987..e2d1d3d5de1db 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -312,7 +312,7 @@ alternative_else_nop_endif
 
 #ifdef CONFIG_ARM64_PSEUDO_NMI
 	/* Save pmr */
-alternative_if ARM64_HAS_IRQ_PRIO_MASKING
+alternative_if ARM64_HAS_GIC_PRIO_MASKING
 	mrs_s	x20, SYS_ICC_PMR_EL1
 	str	x20, [sp, #S_PMR_SAVE]
 	mov	x20, #GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET
@@ -337,7 +337,7 @@ alternative_else_nop_endif
 
 #ifdef CONFIG_ARM64_PSEUDO_NMI
 	/* Restore pmr */
-alternative_if ARM64_HAS_IRQ_PRIO_MASKING
+alternative_if ARM64_HAS_GIC_PRIO_MASKING
 	ldr	x20, [sp, #S_PMR_SAVE]
 	msr_s	SYS_ICC_PMR_EL1, x20
 	mrs_s	x21, SYS_ICC_CTLR_EL1
diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
index 373eb148498e1..c993d43624b39 100644
--- a/arch/arm64/tools/cpucaps
+++ b/arch/arm64/tools/cpucaps
@@ -29,7 +29,7 @@ HAS_GENERIC_AUTH_ARCH_QARMA3
 HAS_GENERIC_AUTH_ARCH_QARMA5
 HAS_GENERIC_AUTH_IMP_DEF
 HAS_GIC_CPUIF_SYSREGS
-HAS_IRQ_PRIO_MASKING
+HAS_GIC_PRIO_MASKING
 HAS_LDAPR
 HAS_LSE_ATOMICS
 HAS_NO_FPSIMD
-- 
2.30.2



^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH v2 3/5] arm64: make ARM64_HAS_GIC_PRIO_MASKING depend on ARM64_HAS_GIC_CPUIF_SYSREGS
  2023-01-25 16:38 [PATCH v2 0/5] arm64: pseudo-nmi: elide code when CONFIG_ARM64_PSEUDO_NMI=n Mark Rutland
  2023-01-25 16:38 ` [PATCH v2 1/5] arm64: rename ARM64_HAS_SYSREG_GIC_CPUIF to ARM64_HAS_GIC_CPUIF_SYSREGS Mark Rutland
  2023-01-25 16:38 ` [PATCH v2 2/5] arm64: rename ARM64_HAS_IRQ_PRIO_MASKING to ARM64_HAS_GIC_PRIO_MASKING Mark Rutland
@ 2023-01-25 16:38 ` Mark Rutland
  2023-01-25 18:05   ` Alexandru Elisei
  2023-01-25 16:38 ` [PATCH v2 4/5] arm64: add ARM64_HAS_GIC_PRIO_RELAXED_SYNC cpucap Mark Rutland
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 13+ messages in thread
From: Mark Rutland @ 2023-01-25 16:38 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: broonie, catalin.marinas, mark.rutland, maz, will

Currently the arm64_cpu_capabilities structure for
ARM64_HAS_GIC_PRIO_MASKING open-codes the same CPU field definitions as
the arm64_cpu_capabilities structure for ARM64_HAS_GIC_CPUIF_SYSREGS, so
that can_use_gic_priorities() can use has_useable_gicv3_cpuif().

This duplication isn't ideal for the legibility of the code, and sets a
bad example for any ARM64_HAS_GIC_* definitions added by subsequent
patches.

Instead, have ARM64_HAS_GIC_PRIO_MASKING check for the
ARM64_HAS_GIC_CPUIF_SYSREGS cpucap, and add a comment explaining why
this is safe. Subsequent patches will use the same pattern where one
cpucap depends upon another.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/cpufeature.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index afd547a5309c7..515975f42d037 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2046,7 +2046,15 @@ early_param("irqchip.gicv3_pseudo_nmi", early_enable_pseudo_nmi);
 static bool can_use_gic_priorities(const struct arm64_cpu_capabilities *entry,
 				   int scope)
 {
-	return enable_pseudo_nmi && has_useable_gicv3_cpuif(entry, scope);
+	/*
+	 * ARM64_HAS_GIC_CPUIF_SYSREGS has a lower index, and is a boot CPU
+	 * feature, so will be detected earlier.
+	 */
+	BUILD_BUG_ON(ARM64_HAS_GIC_PRIO_MASKING <= ARM64_HAS_GIC_CPUIF_SYSREGS);
+	if (!cpus_have_cap(ARM64_HAS_GIC_CPUIF_SYSREGS))
+		return false;
+
+	return enable_pseudo_nmi;
 }
 #endif
 
@@ -2537,11 +2545,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.capability = ARM64_HAS_GIC_PRIO_MASKING,
 		.type = ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE,
 		.matches = can_use_gic_priorities,
-		.sys_reg = SYS_ID_AA64PFR0_EL1,
-		.field_pos = ID_AA64PFR0_EL1_GIC_SHIFT,
-		.field_width = 4,
-		.sign = FTR_UNSIGNED,
-		.min_field_value = 1,
 	},
 #endif
 #ifdef CONFIG_ARM64_E0PD
-- 
2.30.2



^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH v2 4/5] arm64: add ARM64_HAS_GIC_PRIO_RELAXED_SYNC cpucap
  2023-01-25 16:38 [PATCH v2 0/5] arm64: pseudo-nmi: elide code when CONFIG_ARM64_PSEUDO_NMI=n Mark Rutland
                   ` (2 preceding siblings ...)
  2023-01-25 16:38 ` [PATCH v2 3/5] arm64: make ARM64_HAS_GIC_PRIO_MASKING depend on ARM64_HAS_GIC_CPUIF_SYSREGS Mark Rutland
@ 2023-01-25 16:38 ` Mark Rutland
  2023-01-26  8:31   ` Marc Zyngier
  2023-01-25 16:38 ` [PATCH v2 5/5] arm64: irqflags: use alternative branches for pseudo-NMI logic Mark Rutland
  2023-01-26  8:51 ` [PATCH v2 0/5] arm64: pseudo-nmi: elide code when CONFIG_ARM64_PSEUDO_NMI=n Marc Zyngier
  5 siblings, 1 reply; 13+ messages in thread
From: Mark Rutland @ 2023-01-25 16:38 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: broonie, catalin.marinas, mark.rutland, maz, will

When Priority Mask Hint Enable (PMHE) == 0b1, the GIC may use the PMR
value to determine whether to signal an IRQ to a PE, and consequently
after a change to the PMR value, a DSB SY may be required to ensure that
interrupts are signalled to a CPU in finite time. When PMHE == 0b0,
interrupts are always signalled to the relevant PE, and all masking
occurs locally, without requiring a DSB SY.

Since commit:

  f226650494c6aa87 ("arm64: Relax ICC_PMR_EL1 accesses when ICC_CTLR_EL1.PMHE is clear")

... we handle this dynamically: in most cases a static key is used to
determine whether to issue a DSB SY, but the entry code must read from
ICC_CTLR_EL1 as static keys aren't accessible from plain assembly.

It would be much nicer to use an alternative instruction sequence for
the DSB, as this would avoid the need to read from ICC_CTLR_EL1 in the
entry code, and for most other code this will result in simpler code
generation with fewer instructions and fewer branches.

This patch adds a new ARM64_HAS_GIC_PRIO_RELAXED_SYNC cpucap which is
only set when ICC_CTLR_EL1.PMHE == 0b0 (and GIC priority masking is in
use). This allows us to replace the existing users of the
`gic_pmr_sync` static key with alternative sequences which default to a
DSB SY and are relaxed to a NOP when PMHE is not in use.

The entry assembly management of the PMR is slightly restructured to use
a branch (rather than multiple NOPs) when priority masking is not in
use. This is more in keeping with other alternatives in the entry
assembly, and permits the use of a separate alternative for the
PMHE-dependent DSB SY (and removal of the conditional branch this
currently requires). For consistency I've adjusted both the save and
restore paths.

According to bloat-o-meter, when building defconfig +
CONFIG_ARM64_PSEUDO_NMI=y this shrinks the kernel text by ~4KiB:

| add/remove: 4/2 grow/shrink: 42/310 up/down: 332/-5032 (-4700)

The resulting vmlinux is ~66KiB smaller, though the resulting Image size
is unchanged due to padding and alignment:

| [mark@lakrids:~/src/linux]% ls -al vmlinux-*
| -rwxr-xr-x 1 mark mark 137508344 Jan 17 14:11 vmlinux-after
| -rwxr-xr-x 1 mark mark 137575440 Jan 17 13:49 vmlinux-before
| [mark@lakrids:~/src/linux]% ls -al Image-*
| -rw-r--r-- 1 mark mark 38777344 Jan 17 14:11 Image-after
| -rw-r--r-- 1 mark mark 38777344 Jan 17 13:49 Image-before

Prior to this patch we did not verify the state of ICC_CTLR_EL1.PMHE on
secondary CPUs. As of this patch this is verified by the cpufeature code
when using GIC priority masking (i.e. when using pseudo-NMIs).

Note that since commit:

  7e3a57fa6ca831fa ("arm64: Document ICC_CTLR_EL3.PMHE setting requirements")

... Documentation/arm64/booting.rst specifies:

|      - ICC_CTLR_EL3.PMHE (bit 6) must be set to the same value across
|        all CPUs the kernel is executing on, and must stay constant
|        for the lifetime of the kernel.

... so that should not adversely affect any compliant systems, and as
we'll only check for the absence of PMHE when using pseudo-NMIs, this
will only fire when such a mismatch would adversely affect the system.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm/include/asm/arch_gicv3.h   |  5 +++++
 arch/arm64/include/asm/arch_gicv3.h |  5 +++++
 arch/arm64/include/asm/barrier.h    | 11 ++++++----
 arch/arm64/kernel/cpufeature.c      | 32 +++++++++++++++++++++++++++++
 arch/arm64/kernel/entry.S           | 25 ++++++++++++++--------
 arch/arm64/kernel/image-vars.h      |  2 --
 arch/arm64/tools/cpucaps            |  1 +
 drivers/irqchip/irq-gic-v3.c        | 19 +----------------
 8 files changed, 67 insertions(+), 33 deletions(-)

diff --git a/arch/arm/include/asm/arch_gicv3.h b/arch/arm/include/asm/arch_gicv3.h
index f82a819eb0dbb..311e83038bdb3 100644
--- a/arch/arm/include/asm/arch_gicv3.h
+++ b/arch/arm/include/asm/arch_gicv3.h
@@ -252,5 +252,10 @@ static inline void gic_arch_enable_irqs(void)
 	WARN_ON_ONCE(true);
 }
 
+static inline bool gic_has_relaxed_pmr_sync(void)
+{
+	return false;
+}
+
 #endif /* !__ASSEMBLY__ */
 #endif /* !__ASM_ARCH_GICV3_H */
diff --git a/arch/arm64/include/asm/arch_gicv3.h b/arch/arm64/include/asm/arch_gicv3.h
index 48d4473e8eee2..01281a5336cf8 100644
--- a/arch/arm64/include/asm/arch_gicv3.h
+++ b/arch/arm64/include/asm/arch_gicv3.h
@@ -190,5 +190,10 @@ static inline void gic_arch_enable_irqs(void)
 	asm volatile ("msr daifclr, #3" : : : "memory");
 }
 
+static inline bool gic_has_relaxed_pmr_sync(void)
+{
+	return cpus_have_cap(ARM64_HAS_GIC_PRIO_RELAXED_SYNC);
+}
+
 #endif /* __ASSEMBLY__ */
 #endif /* __ASM_ARCH_GICV3_H */
diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h
index 2cfc4245d2e2d..3dd8982a9ce3c 100644
--- a/arch/arm64/include/asm/barrier.h
+++ b/arch/arm64/include/asm/barrier.h
@@ -11,6 +11,8 @@
 
 #include <linux/kasan-checks.h>
 
+#include <asm/alternative-macros.h>
+
 #define __nops(n)	".rept	" #n "\nnop\n.endr\n"
 #define nops(n)		asm volatile(__nops(n))
 
@@ -41,10 +43,11 @@
 #ifdef CONFIG_ARM64_PSEUDO_NMI
 #define pmr_sync()						\
 	do {							\
-		extern struct static_key_false gic_pmr_sync;	\
-								\
-		if (static_branch_unlikely(&gic_pmr_sync))	\
-			dsb(sy);				\
+		asm volatile(					\
+		ALTERNATIVE_CB("dsb sy",			\
+			       ARM64_HAS_GIC_PRIO_RELAXED_SYNC,	\
+			       alt_cb_patch_nops)		\
+		);						\
 	} while(0)
 #else
 #define pmr_sync()	do {} while (0)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 515975f42d037..445eb5134208c 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2056,6 +2056,30 @@ static bool can_use_gic_priorities(const struct arm64_cpu_capabilities *entry,
 
 	return enable_pseudo_nmi;
 }
+
+static bool has_gic_prio_relaxed_sync(const struct arm64_cpu_capabilities *entry,
+			    int scope)
+{
+	/*
+	 * If we're not using priority masking then we won't be poking PMR_EL1,
+	 * and there's no need to relax synchronization of writes to it, and
+	 * ICC_CTLR_EL1 might not be accessible and we must avoid reads from
+	 * that.
+	 *
+	 * ARM64_HAS_GIC_PRIO_MASKING has a lower index, and is a boot CPU
+	 * feature, so will be detected earlier.
+	 */
+	BUILD_BUG_ON(ARM64_HAS_GIC_PRIO_RELAXED_SYNC <= ARM64_HAS_GIC_PRIO_MASKING);
+	if (!cpus_have_cap(ARM64_HAS_GIC_PRIO_MASKING))
+		return false;
+
+	/*
+	 * When Priority Mask Hint Enable (PMHE) == 0b0, PMR is not used as a
+	 * hint for interrupt distribution, a DSB is not necessary when
+	 * unmasking IRQs via PMR, and we can relax the barrier to a NOP.
+	 */
+	return (gic_read_ctlr() & ICC_CTLR_EL1_PMHE_MASK) == 0;
+}
 #endif
 
 #ifdef CONFIG_ARM64_BTI
@@ -2546,6 +2570,14 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.type = ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE,
 		.matches = can_use_gic_priorities,
 	},
+	{
+		/*
+		 * Depends on ARM64_HAS_GIC_PRIO_MASKING
+		 */
+		.capability = ARM64_HAS_GIC_PRIO_RELAXED_SYNC,
+		.type = ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE,
+		.matches = has_gic_prio_relaxed_sync,
+	},
 #endif
 #ifdef CONFIG_ARM64_E0PD
 	{
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index e2d1d3d5de1db..8427cdc0cfcbc 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -311,13 +311,16 @@ alternative_else_nop_endif
 	.endif
 
 #ifdef CONFIG_ARM64_PSEUDO_NMI
-	/* Save pmr */
-alternative_if ARM64_HAS_GIC_PRIO_MASKING
+alternative_if_not ARM64_HAS_GIC_PRIO_MASKING
+	b	.Lskip_pmr_save\@
+alternative_else_nop_endif
+
 	mrs_s	x20, SYS_ICC_PMR_EL1
 	str	x20, [sp, #S_PMR_SAVE]
 	mov	x20, #GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET
 	msr_s	SYS_ICC_PMR_EL1, x20
-alternative_else_nop_endif
+
+.Lskip_pmr_save\@:
 #endif
 
 	/*
@@ -336,15 +339,19 @@ alternative_else_nop_endif
 	.endif
 
 #ifdef CONFIG_ARM64_PSEUDO_NMI
-	/* Restore pmr */
-alternative_if ARM64_HAS_GIC_PRIO_MASKING
+alternative_if_not ARM64_HAS_GIC_PRIO_MASKING
+	b	.Lskip_pmr_restore\@
+alternative_else_nop_endif
+
 	ldr	x20, [sp, #S_PMR_SAVE]
 	msr_s	SYS_ICC_PMR_EL1, x20
-	mrs_s	x21, SYS_ICC_CTLR_EL1
-	tbz	x21, #6, .L__skip_pmr_sync\@	// Check for ICC_CTLR_EL1.PMHE
-	dsb	sy				// Ensure priority change is seen by redistributor
-.L__skip_pmr_sync\@:
+
+	/* Ensure priority change is seen by redistributor */
+alternative_if_not ARM64_HAS_GIC_PRIO_RELAXED_SYNC
+	dsb	sy
 alternative_else_nop_endif
+
+.Lskip_pmr_restore\@:
 #endif
 
 	ldp	x21, x22, [sp, #S_PC]		// load ELR, SPSR
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index d0e9bb5c91fcc..97e750a35f70b 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -67,9 +67,7 @@ KVM_NVHE_ALIAS(__hyp_stub_vectors);
 KVM_NVHE_ALIAS(vgic_v2_cpuif_trap);
 KVM_NVHE_ALIAS(vgic_v3_cpuif_trap);
 
-/* Static key checked in pmr_sync(). */
 #ifdef CONFIG_ARM64_PSEUDO_NMI
-KVM_NVHE_ALIAS(gic_pmr_sync);
 /* Static key checked in GIC_PRIO_IRQOFF. */
 KVM_NVHE_ALIAS(gic_nonsecure_priorities);
 #endif
diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
index c993d43624b39..10ce8f88f86b7 100644
--- a/arch/arm64/tools/cpucaps
+++ b/arch/arm64/tools/cpucaps
@@ -30,6 +30,7 @@ HAS_GENERIC_AUTH_ARCH_QARMA5
 HAS_GENERIC_AUTH_IMP_DEF
 HAS_GIC_CPUIF_SYSREGS
 HAS_GIC_PRIO_MASKING
+HAS_GIC_PRIO_RELAXED_SYNC
 HAS_LDAPR
 HAS_LSE_ATOMICS
 HAS_NO_FPSIMD
diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
index 997104d4338e7..3779836737c89 100644
--- a/drivers/irqchip/irq-gic-v3.c
+++ b/drivers/irqchip/irq-gic-v3.c
@@ -89,15 +89,6 @@ static DEFINE_STATIC_KEY_TRUE(supports_deactivate_key);
  */
 static DEFINE_STATIC_KEY_FALSE(supports_pseudo_nmis);
 
-/*
- * Global static key controlling whether an update to PMR allowing more
- * interrupts requires to be propagated to the redistributor (DSB SY).
- * And this needs to be exported for modules to be able to enable
- * interrupts...
- */
-DEFINE_STATIC_KEY_FALSE(gic_pmr_sync);
-EXPORT_SYMBOL(gic_pmr_sync);
-
 DEFINE_STATIC_KEY_FALSE(gic_nonsecure_priorities);
 EXPORT_SYMBOL(gic_nonsecure_priorities);
 
@@ -1768,16 +1759,8 @@ static void gic_enable_nmi_support(void)
 	for (i = 0; i < gic_data.ppi_nr; i++)
 		refcount_set(&ppi_nmi_refs[i], 0);
 
-	/*
-	 * Linux itself doesn't use 1:N distribution, so has no need to
-	 * set PMHE. The only reason to have it set is if EL3 requires it
-	 * (and we can't change it).
-	 */
-	if (gic_read_ctlr() & ICC_CTLR_EL1_PMHE_MASK)
-		static_branch_enable(&gic_pmr_sync);
-
 	pr_info("Pseudo-NMIs enabled using %s ICC_PMR_EL1 synchronisation\n",
-		static_branch_unlikely(&gic_pmr_sync) ? "forced" : "relaxed");
+		gic_has_relaxed_pmr_sync() ? "relaxed" : "forced");
 
 	/*
 	 * How priority values are used by the GIC depends on two things:
-- 
2.30.2



^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH v2 5/5] arm64: irqflags: use alternative branches for pseudo-NMI logic
  2023-01-25 16:38 [PATCH v2 0/5] arm64: pseudo-nmi: elide code when CONFIG_ARM64_PSEUDO_NMI=n Mark Rutland
                   ` (3 preceding siblings ...)
  2023-01-25 16:38 ` [PATCH v2 4/5] arm64: add ARM64_HAS_GIC_PRIO_RELAXED_SYNC cpucap Mark Rutland
@ 2023-01-25 16:38 ` Mark Rutland
  2023-01-26  8:49   ` Marc Zyngier
  2023-01-26  8:51 ` [PATCH v2 0/5] arm64: pseudo-nmi: elide code when CONFIG_ARM64_PSEUDO_NMI=n Marc Zyngier
  5 siblings, 1 reply; 13+ messages in thread
From: Mark Rutland @ 2023-01-25 16:38 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: broonie, catalin.marinas, mark.rutland, maz, will

Due to the way we use alternatives in the irqflags code, even when
CONFIG_ARM64_PSEUDO_NMI=n, we generate unused alternative code for
pseudo-NMI management. This patch reworks the irqflags code to remove
the redundant code when CONFIG_ARM64_PSEUDO_NMI=n, which benefits the
more common case, and will permit further rework of our DAIF management
(e.g. in preparation for ARMv8.8-A's NMI feature).

Prior to this patch a defconfig kernel has hundreds of redundant
instructions to access ICC_PMR_EL1 (which should only need to be
manipulated in setup code), which this patch removes:

| [mark@lakrids:~/src/linux]% usekorg 12.1.0 aarch64-linux-objdump -d vmlinux-before-defconfig | grep icc_pmr_el1 | wc -l
| 885
| [mark@lakrids:~/src/linux]% usekorg 12.1.0 aarch64-linux-objdump -d vmlinux-after-defconfig | grep icc_pmr_el1 | wc -l
| 5

Those instructions alone account for more than 3KiB of kernel text, and
will be associated with additional alt_instr entries, padding and
branches, etc.

These redundant instructions exist because we use alternative sequences
to choose between DAIF / PMR management in irqflags.h, and even when
CONFIG_ARM64_PSEUDO_NMI=n, those alternative sequences will generate the
code for PMR management, along with alt_instr entries. We use
alternatives here as this was necessary to ensure that we never
encounter a mismatched local_irq_save() ... local_irq_restore() sequence
in the middle of patching, which could occur if we used static keys
to choose between DAIF and PMR management.

Since commit:

  21fb26bfb01ffe0d ("arm64: alternatives: add alternative_has_feature_*()")

... we have a mechanism to use alternatives similarly to static keys,
allowing us to write the bulk of the logic in C code while also being
able to rely on all sites being patched in one go, and avoiding a
mismatched local_irq_save() ... local_irq_restore() sequence during
patching.

This patch rewrites arm64's local_irq_*() functions to use alternative
branches. This allows for the pseudo-NMI code to be entirely elided when
CONFIG_ARM64_PSEUDO_NMI=n, making a defconfig Image 64KiB smaller, and
not affecting the size of an Image with CONFIG_ARM64_PSEUDO_NMI=y:

| [mark@lakrids:~/src/linux]% ls -al vmlinux-*
| -rwxr-xr-x 1 mark mark 137473432 Jan 18 11:11 vmlinux-after-defconfig
| -rwxr-xr-x 1 mark mark 137918776 Jan 18 11:15 vmlinux-after-pnmi
| -rwxr-xr-x 1 mark mark 137380152 Jan 18 11:03 vmlinux-before-defconfig
| -rwxr-xr-x 1 mark mark 137523704 Jan 18 11:08 vmlinux-before-pnmi
| [mark@lakrids:~/src/linux]% ls -al Image-*
| -rw-r--r-- 1 mark mark 38646272 Jan 18 11:11 Image-after-defconfig
| -rw-r--r-- 1 mark mark 38777344 Jan 18 11:14 Image-after-pnmi
| -rw-r--r-- 1 mark mark 38711808 Jan 18 11:03 Image-before-defconfig
| -rw-r--r-- 1 mark mark 38777344 Jan 18 11:08 Image-before-pnmi

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/irqflags.h | 183 ++++++++++++++++++++----------
 1 file changed, 124 insertions(+), 59 deletions(-)

diff --git a/arch/arm64/include/asm/irqflags.h b/arch/arm64/include/asm/irqflags.h
index f51653fb90e43..4175ffb1a64b9 100644
--- a/arch/arm64/include/asm/irqflags.h
+++ b/arch/arm64/include/asm/irqflags.h
@@ -21,43 +21,69 @@
  * exceptions should be unmasked.
  */
 
-/*
- * CPU interrupt mask handling.
- */
-static inline void arch_local_irq_enable(void)
+static __always_inline bool __irqflags_uses_pmr(void)
 {
-	if (system_has_prio_mask_debugging()) {
-		u32 pmr = read_sysreg_s(SYS_ICC_PMR_EL1);
+	return IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) &&
+	       alternative_has_feature_unlikely(ARM64_HAS_GIC_PRIO_MASKING);
+}
 
+static __always_inline void __daif_local_irq_enable(void)
+{
+	asm volatile("msr daifclr, #3" ::: "memory");
+}
+
+static __always_inline void __pmr_local_irq_enable(void)
+{
+	if (IS_ENABLED(CONFIG_ARM64_DEBUG_PRIORITY_MASKING)) {
+		u32 pmr = read_sysreg_s(SYS_ICC_PMR_EL1);
 		WARN_ON_ONCE(pmr != GIC_PRIO_IRQON && pmr != GIC_PRIO_IRQOFF);
 	}
 
-	asm volatile(ALTERNATIVE(
-		"msr	daifclr, #3		// arch_local_irq_enable",
-		__msr_s(SYS_ICC_PMR_EL1, "%0"),
-		ARM64_HAS_GIC_PRIO_MASKING)
-		:
-		: "r" ((unsigned long) GIC_PRIO_IRQON)
-		: "memory");
-
+	write_sysreg_s(GIC_PRIO_IRQON, SYS_ICC_PMR_EL1);
 	pmr_sync();
 }
 
-static inline void arch_local_irq_disable(void)
+static inline void arch_local_irq_enable(void)
 {
-	if (system_has_prio_mask_debugging()) {
-		u32 pmr = read_sysreg_s(SYS_ICC_PMR_EL1);
+	if (__irqflags_uses_pmr()) {
+		__pmr_local_irq_enable();
+	} else {
+		__daif_local_irq_enable();
+	}
+}
 
+static __always_inline void __daif_local_irq_disable(void)
+{
+	asm volatile("msr daifset, #3" ::: "memory");
+}
+
+static __always_inline void __pmr_local_irq_disable(void)
+{
+	if (IS_ENABLED(CONFIG_ARM64_DEBUG_PRIORITY_MASKING)) {
+		u32 pmr = read_sysreg_s(SYS_ICC_PMR_EL1);
 		WARN_ON_ONCE(pmr != GIC_PRIO_IRQON && pmr != GIC_PRIO_IRQOFF);
 	}
 
-	asm volatile(ALTERNATIVE(
-		"msr	daifset, #3		// arch_local_irq_disable",
-		__msr_s(SYS_ICC_PMR_EL1, "%0"),
-		ARM64_HAS_GIC_PRIO_MASKING)
-		:
-		: "r" ((unsigned long) GIC_PRIO_IRQOFF)
-		: "memory");
+	write_sysreg_s(GIC_PRIO_IRQOFF, SYS_ICC_PMR_EL1);
+}
+
+static inline void arch_local_irq_disable(void)
+{
+	if (__irqflags_uses_pmr()) {
+		__pmr_local_irq_disable();
+	} else {
+		__daif_local_irq_disable();
+	}
+}
+
+static __always_inline unsigned long __daif_local_save_flags(void)
+{
+	return read_sysreg(daif);
+}
+
+static __always_inline unsigned long __pmr_local_save_flags(void)
+{
+	return read_sysreg_s(SYS_ICC_PMR_EL1);
 }
 
 /*
@@ -65,69 +91,108 @@ static inline void arch_local_irq_disable(void)
  */
 static inline unsigned long arch_local_save_flags(void)
 {
-	unsigned long flags;
+	if (__irqflags_uses_pmr()) {
+		return __pmr_local_save_flags();
+	} else {
+		return __daif_local_save_flags();
+	}
+}
 
-	asm volatile(ALTERNATIVE(
-		"mrs	%0, daif",
-		__mrs_s("%0", SYS_ICC_PMR_EL1),
-		ARM64_HAS_GIC_PRIO_MASKING)
-		: "=&r" (flags)
-		:
-		: "memory");
+static __always_inline bool __daif_irqs_disabled_flags(unsigned long flags)
+{
+	return flags & PSR_I_BIT;
+}
 
-	return flags;
+static __always_inline bool __pmr_irqs_disabled_flags(unsigned long flags)
+{
+	return flags != GIC_PRIO_IRQON;
 }
 
-static inline int arch_irqs_disabled_flags(unsigned long flags)
+static inline bool arch_irqs_disabled_flags(unsigned long flags)
 {
-	int res;
+	if (__irqflags_uses_pmr()) {
+		return __pmr_irqs_disabled_flags(flags);
+	} else {
+		return __daif_irqs_disabled_flags(flags);
+	}
+}
 
-	asm volatile(ALTERNATIVE(
-		"and	%w0, %w1, #" __stringify(PSR_I_BIT),
-		"eor	%w0, %w1, #" __stringify(GIC_PRIO_IRQON),
-		ARM64_HAS_GIC_PRIO_MASKING)
-		: "=&r" (res)
-		: "r" ((int) flags)
-		: "memory");
+static __always_inline bool __daif_irqs_disabled(void)
+{
+	return __daif_irqs_disabled_flags(__daif_local_save_flags());
+}
 
-	return res;
+static __always_inline bool __pmr_irqs_disabled(void)
+{
+	return __pmr_irqs_disabled_flags(__pmr_local_save_flags());
 }
 
-static inline int arch_irqs_disabled(void)
+static inline bool arch_irqs_disabled(void)
 {
-	return arch_irqs_disabled_flags(arch_local_save_flags());
+	if (__irqflags_uses_pmr()) {
+		return __pmr_irqs_disabled();
+	} else {
+		return __daif_irqs_disabled();
+	}
 }
 
-static inline unsigned long arch_local_irq_save(void)
+static __always_inline unsigned long __daif_local_irq_save(void)
 {
-	unsigned long flags;
+	unsigned long flags = __daif_local_save_flags();
+
+	__daif_local_irq_disable();
+
+	return flags;
+}
 
-	flags = arch_local_save_flags();
+static __always_inline unsigned long __pmr_local_irq_save(void)
+{
+	unsigned long flags = __pmr_local_save_flags();
 
 	/*
 	 * There are too many states with IRQs disabled, just keep the current
 	 * state if interrupts are already disabled/masked.
 	 */
-	if (!arch_irqs_disabled_flags(flags))
-		arch_local_irq_disable();
+	if (!__pmr_irqs_disabled_flags(flags))
+		__pmr_local_irq_disable();
 
 	return flags;
 }
 
+static inline unsigned long arch_local_irq_save(void)
+{
+	if (__irqflags_uses_pmr()) {
+		return __pmr_local_irq_save();
+	} else {
+		return __daif_local_irq_save();
+	}
+}
+
+static __always_inline void __daif_local_irq_restore(unsigned long flags)
+{
+	barrier();
+	write_sysreg(flags, daif);
+	barrier();
+}
+
+static __always_inline void __pmr_local_irq_restore(unsigned long flags)
+{
+	barrier();
+	write_sysreg_s(flags, SYS_ICC_PMR_EL1);
+	pmr_sync();
+	barrier();
+}
+
 /*
  * restore saved IRQ state
  */
 static inline void arch_local_irq_restore(unsigned long flags)
 {
-	asm volatile(ALTERNATIVE(
-		"msr	daif, %0",
-		__msr_s(SYS_ICC_PMR_EL1, "%0"),
-		ARM64_HAS_GIC_PRIO_MASKING)
-		:
-		: "r" (flags)
-		: "memory");
-
-	pmr_sync();
+	if (__irqflags_uses_pmr()) {
+		__pmr_local_irq_restore(flags);
+	} else {
+		__daif_local_irq_restore(flags);
+	}
 }
 
 #endif /* __ASM_IRQFLAGS_H */
-- 
2.30.2


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [PATCH v2 3/5] arm64: make ARM64_HAS_GIC_PRIO_MASKING depend on ARM64_HAS_GIC_PRIO_MASKING
  2023-01-25 16:38 ` [PATCH v2 3/5] arm64: make ARM64_HAS_GIC_PRIO_MASKING depend on ARM64_HAS_GIC_PRIO_MASKING Mark Rutland
@ 2023-01-25 18:05   ` Alexandru Elisei
  2023-01-25 18:27     ` Mark Rutland
  0 siblings, 1 reply; 13+ messages in thread
From: Alexandru Elisei @ 2023-01-25 18:05 UTC (permalink / raw)
  To: Mark Rutland; +Cc: linux-arm-kernel, broonie, catalin.marinas, maz, will

Hi,

Very low hanging fruit here, but I think you wanted the commit subject to
say that ARM64_HAS_GIC_PRIO_MASKING should depend on
ARM64_HAS_GIC_CPUIF_SYSREGS instead of depending on itself.

Thanks,
Alex

On Wed, Jan 25, 2023 at 04:38:24PM +0000, Mark Rutland wrote:
> Currently the arm64_cpu_capabilities structure for
> ARM64_HAS_GIC_PRIO_MASKING open-codes the same CPU field definitions as
> the arm64_cpu_capabilities structure for ARM64_HAS_GIC_CPUIF_SYSREGS, so
> that can_use_gic_priorities() can use has_useable_gicv3_cpuif().
> 
> This duplication isn't ideal for the legibility of the code, and sets a
> bad example for any ARM64_HAS_GIC_* definitions added by subsequent
> patches.
> 
> Instead, have ARM64_HAS_GIC_PRIO_MASKING check for the
> ARM64_HAS_GIC_CPUIF_SYSREGS cpucap, and add a comment explaining why
> this is safe. Subsequent patches will use the same pattern where one
> cpucap depends upon another.
> 
> There should be no functional change as a result of this patch.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Marc Zyngier <maz@kernel.org>
> Cc: Mark Brown <broonie@kernel.org>
> Cc: Will Deacon <will@kernel.org>
> ---
>  arch/arm64/kernel/cpufeature.c | 15 +++++++++------
>  1 file changed, 9 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index afd547a5309c7..515975f42d037 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -2046,7 +2046,15 @@ early_param("irqchip.gicv3_pseudo_nmi", early_enable_pseudo_nmi);
>  static bool can_use_gic_priorities(const struct arm64_cpu_capabilities *entry,
>  				   int scope)
>  {
> -	return enable_pseudo_nmi && has_useable_gicv3_cpuif(entry, scope);
> +	/*
> +	 * ARM64_HAS_GIC_CPUIF_SYSREGS has a lower index, and is a boot CPU
> +	 * feature, so will be detected earlier.
> +	 */
> +	BUILD_BUG_ON(ARM64_HAS_GIC_PRIO_MASKING <= ARM64_HAS_GIC_CPUIF_SYSREGS);
> +	if (!cpus_have_cap(ARM64_HAS_GIC_CPUIF_SYSREGS))
> +		return false;
> +
> +	return enable_pseudo_nmi;
>  }
>  #endif
>  
> @@ -2537,11 +2545,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
>  		.capability = ARM64_HAS_GIC_PRIO_MASKING,
>  		.type = ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE,
>  		.matches = can_use_gic_priorities,
> -		.sys_reg = SYS_ID_AA64PFR0_EL1,
> -		.field_pos = ID_AA64PFR0_EL1_GIC_SHIFT,
> -		.field_width = 4,
> -		.sign = FTR_UNSIGNED,
> -		.min_field_value = 1,
>  	},
>  #endif
>  #ifdef CONFIG_ARM64_E0PD
> -- 
> 2.30.2
> 
> 


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v2 3/5] arm64: make ARM64_HAS_GIC_PRIO_MASKING depend on ARM64_HAS_GIC_PRIO_MASKING
  2023-01-25 18:05   ` Alexandru Elisei
@ 2023-01-25 18:27     ` Mark Rutland
  0 siblings, 0 replies; 13+ messages in thread
From: Mark Rutland @ 2023-01-25 18:27 UTC (permalink / raw)
  To: Alexandru Elisei; +Cc: linux-arm-kernel, broonie, catalin.marinas, maz, will

On Wed, Jan 25, 2023 at 06:05:09PM +0000, Alexandru Elisei wrote:
> Hi,
> 
> Very low hanging fruit here, but I think you wanted the commit subject to
> say that ARM64_HAS_GIC_PRIO_MASKING should depend on
> ARM64_HAS_GIC_CPUIF_SYSREGS instead of depending on itself.

Ugh; yes, that should have been:

  arm64: make ARM64_HAS_GIC_PRIO_MASKING depend on ARM64_HAS_GIC_CPUIF_SYSREGS

I've fixed that up locally, but I'll avoid sending a v3 unless there's
something else that needs fixed too.

Thanks,
Mark.

> 
> Thanks,
> Alex
> 
> On Wed, Jan 25, 2023 at 04:38:24PM +0000, Mark Rutland wrote:
> > Currently the arm64_cpu_capabilities structure for
> > ARM64_HAS_GIC_PRIO_MASKING open-codes the same CPU field definitions as
> > the arm64_cpu_capabilities structure for ARM64_HAS_GIC_CPUIF_SYSREGS, so
> > that can_use_gic_priorities() can use has_useable_gicv3_cpuif().
> > 
> > This duplication isn't ideal for the legibility of the code, and sets a
> > bad example for any ARM64_HAS_GIC_* definitions added by subsequent
> > patches.
> > 
> > Instead, have ARM64_HAS_GIC_PRIO_MASKING check for the
> > ARM64_HAS_GIC_CPUIF_SYSREGS cpucap, and add a comment explaining why
> > this is safe. Subsequent patches will use the same pattern where one
> > cpucap depends upon another.
> > 
> > There should be no functional change as a result of this patch.
> > 
> > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > Cc: Marc Zyngier <maz@kernel.org>
> > Cc: Mark Brown <broonie@kernel.org>
> > Cc: Will Deacon <will@kernel.org>
> > ---
> >  arch/arm64/kernel/cpufeature.c | 15 +++++++++------
> >  1 file changed, 9 insertions(+), 6 deletions(-)
> > 
> > diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> > index afd547a5309c7..515975f42d037 100644
> > --- a/arch/arm64/kernel/cpufeature.c
> > +++ b/arch/arm64/kernel/cpufeature.c
> > @@ -2046,7 +2046,15 @@ early_param("irqchip.gicv3_pseudo_nmi", early_enable_pseudo_nmi);
> >  static bool can_use_gic_priorities(const struct arm64_cpu_capabilities *entry,
> >  				   int scope)
> >  {
> > -	return enable_pseudo_nmi && has_useable_gicv3_cpuif(entry, scope);
> > +	/*
> > +	 * ARM64_HAS_GIC_CPUIF_SYSREGS has a lower index, and is a boot CPU
> > +	 * feature, so will be detected earlier.
> > +	 */
> > +	BUILD_BUG_ON(ARM64_HAS_GIC_PRIO_MASKING <= ARM64_HAS_GIC_CPUIF_SYSREGS);
> > +	if (!cpus_have_cap(ARM64_HAS_GIC_CPUIF_SYSREGS))
> > +		return false;
> > +
> > +	return enable_pseudo_nmi;
> >  }
> >  #endif
> >  
> > @@ -2537,11 +2545,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
> >  		.capability = ARM64_HAS_GIC_PRIO_MASKING,
> >  		.type = ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE,
> >  		.matches = can_use_gic_priorities,
> > -		.sys_reg = SYS_ID_AA64PFR0_EL1,
> > -		.field_pos = ID_AA64PFR0_EL1_GIC_SHIFT,
> > -		.field_width = 4,
> > -		.sign = FTR_UNSIGNED,
> > -		.min_field_value = 1,
> >  	},
> >  #endif
> >  #ifdef CONFIG_ARM64_E0PD
> > -- 
> > 2.30.2
> > 
> > 


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v2 4/5] arm64: add ARM64_HAS_GIC_PRIO_RELAXED_SYNC cpucap
  2023-01-25 16:38 ` [PATCH v2 4/5] arm64: add ARM64_HAS_GIC_PRIO_RELAXED_SYNC cpucap Mark Rutland
@ 2023-01-26  8:31   ` Marc Zyngier
  2023-01-26 10:24     ` Mark Rutland
  0 siblings, 1 reply; 13+ messages in thread
From: Marc Zyngier @ 2023-01-26  8:31 UTC (permalink / raw)
  To: Mark Rutland; +Cc: linux-arm-kernel, broonie, catalin.marinas, will

On Wed, 25 Jan 2023 16:38:25 +0000,
Mark Rutland <mark.rutland@arm.com> wrote:

[...]

> @@ -1768,16 +1759,8 @@ static void gic_enable_nmi_support(void)
>  	for (i = 0; i < gic_data.ppi_nr; i++)
>  		refcount_set(&ppi_nmi_refs[i], 0);
>  
> -	/*
> -	 * Linux itself doesn't use 1:N distribution, so has no need to
> -	 * set PMHE. The only reason to have it set is if EL3 requires it
> -	 * (and we can't change it).
> -	 */

I think this is still an important comment as it gives a rationale for
the extra synchronisation even if Linux doesn't use 1:N distribution:
If you get secure interrupts in the non-secure priority space, they
are subjected to the NS PMR setting.

Could you find a new home for it?

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v2 5/5] arm64: irqflags: use alternative branches for pseudo-NMI logic
  2023-01-25 16:38 ` [PATCH v2 5/5] arm64: irqflags: use alternative branches for pseudo-NMI logic Mark Rutland
@ 2023-01-26  8:49   ` Marc Zyngier
  2023-01-26 10:31     ` Mark Rutland
  0 siblings, 1 reply; 13+ messages in thread
From: Marc Zyngier @ 2023-01-26  8:49 UTC (permalink / raw)
  To: Mark Rutland; +Cc: linux-arm-kernel, broonie, catalin.marinas, will

On Wed, 25 Jan 2023 16:38:26 +0000,
Mark Rutland <mark.rutland@arm.com> wrote:

[...]

> +static __always_inline void __daif_local_irq_restore(unsigned long flags)
> +{
> +	barrier();
> +	write_sysreg(flags, daif);
> +	barrier();
> +}
> +
> +static __always_inline void __pmr_local_irq_restore(unsigned long flags)
> +{
> +	barrier();
> +	write_sysreg_s(flags, SYS_ICC_PMR_EL1);
> +	pmr_sync();
> +	barrier();
> +}

It would be good to at least mention why we need these compile-time
barriers which are not that explicit in the existing code. I guess
they are equivalent to the "memory" clobber in the asm sequences that
this replaces, but they don't seem to be used consistently throughout
this patch.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v2 0/5] arm64: pseudo-nmi: elide code when CONFIG_ARM64_PSEUDO_NMI=n
  2023-01-25 16:38 [PATCH v2 0/5] arm64: pseudo-nmi: elide code when CONFIG_ARM64_PSEUDO_NMI=n Mark Rutland
                   ` (4 preceding siblings ...)
  2023-01-25 16:38 ` [PATCH v2 5/5] arm64: irqflags: use alternative branches for pseudo-NMI logic Mark Rutland
@ 2023-01-26  8:51 ` Marc Zyngier
  5 siblings, 0 replies; 13+ messages in thread
From: Marc Zyngier @ 2023-01-26  8:51 UTC (permalink / raw)
  To: Mark Rutland; +Cc: linux-arm-kernel, broonie, catalin.marinas, will

On Wed, 25 Jan 2023 16:38:21 +0000,
Mark Rutland <mark.rutland@arm.com> wrote:
> 
> This series addresses a couple of sub-optimal code generation issues with
> arm64's pseudo-nmi support code:
> 
> * Even when CONFIG_ARM64_PSEUDO_NMI=n, we generate alternative code
>   sequences and alt_instr entries which will never be used. This series
>   reworks the irqflags code to use alternative branches (with an
>   IS_ENABLED() check), which allows the alternatives to be elided when
>   CONFIG_ARM64_PSEUDO_NMI=n.
> 
> * When PMHE is eanbled in HW, we must synchronize PMR updates using a
>   DSB SY. We take pains to avoid this using a static key to skip the
>   barrier when PMHE is not in use, but this results in unnecessarily
>   branchy code. This series replaces the static key with an alternative,
>   allowing the DSB SY to be relaxed to a NOP.
> 
> These changes make a defconfig kernel a little smaller, and does not
> adversely affect the size of a CONFIG_ARM64_PSEUDO_NMI=y kernel. The
> structural changes will also make it easier for a subsequent series to
> rework the irqflag and daifflag management, addressing some
> long-standing edge cases and preparing for ARMv8.8-A's FEAT_NMI.
> 
> I've tested this series under a QEM KVM VM on a ThunderX2 host, and a
> QEMU TCG VM on an x86_64 host. I've tested with and without pseudo-NMI
> support enabled, and with pseudo-NMI debug and lockdep enabled, using
> perf record in system-wide mode.

With the couple of nits I mentioned on individual patches addressed:

Reviewed-by: Marc Zyngier <maz@kernel.org>

	M.

-- 
Without deviation from the norm, progress is not possible.


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v2 4/5] arm64: add ARM64_HAS_GIC_PRIO_RELAXED_SYNC cpucap
  2023-01-26  8:31   ` Marc Zyngier
@ 2023-01-26 10:24     ` Mark Rutland
  0 siblings, 0 replies; 13+ messages in thread
From: Mark Rutland @ 2023-01-26 10:24 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: linux-arm-kernel, broonie, catalin.marinas, will

On Thu, Jan 26, 2023 at 08:31:29AM +0000, Marc Zyngier wrote:
> On Wed, 25 Jan 2023 16:38:25 +0000,
> Mark Rutland <mark.rutland@arm.com> wrote:
> 
> [...]
> 
> > @@ -1768,16 +1759,8 @@ static void gic_enable_nmi_support(void)
> >  	for (i = 0; i < gic_data.ppi_nr; i++)
> >  		refcount_set(&ppi_nmi_refs[i], 0);
> >  
> > -	/*
> > -	 * Linux itself doesn't use 1:N distribution, so has no need to
> > -	 * set PMHE. The only reason to have it set is if EL3 requires it
> > -	 * (and we can't change it).
> > -	 */
> 
> I think this is still an important comment as it gives a rationale for
> the extra synchronisation even if Linux doesn't use 1:N distribution:
> If you get secure interrupts in the non-secure priority space, they
> are subjected to the NS PMR setting.
> 
> Could you find a new home for it?

Sure; I'll add it verbatim to the end of the comment block when we detect the
cpucap, i.e.

| static bool has_gic_prio_relaxed_sync(const struct arm64_cpu_capabilities *entry,
|                                       int scope)
| {
|         /*   
|          * If we're not using priority masking then we won't be poking PMR_EL1,
|          * and there's no need to relax synchronization of writes to it, and
|          * ICC_CTLR_EL1 might not be accessible and we must avoid reads from
|          * that.
|          *
|          * ARM64_HAS_GIC_PRIO_MASKING has a lower index, and is a boot CPU
|          * feature, so will be detected earlier.
|          */
|         BUILD_BUG_ON(ARM64_HAS_GIC_PRIO_RELAXED_SYNC <= ARM64_HAS_GIC_PRIO_MASKING);
|         if (!cpus_have_cap(ARM64_HAS_GIC_PRIO_MASKING))
|                 return false;
| 
|         /*   
|          * When Priority Mask Hint Enable (PMHE) == 0b0, PMR is not used as a
|          * hint for interrupt distribution, a DSB is not necessary when
|          * unmasking IRQs via PMR, and we can relax the barrier to a NOP.
|          *
|          * Linux itself doesn't use 1:N distribution, so has no need to
|          * set PMHE. The only reason to have it set is if EL3 requires it
|          * (and we can't change it).
|          */
|         return (gic_read_ctlr() & ICC_CTLR_EL1_PMHE_MASK) == 0;
| }

Thanks,
Mark.


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v2 5/5] arm64: irqflags: use alternative branches for pseudo-NMI logic
  2023-01-26  8:49   ` Marc Zyngier
@ 2023-01-26 10:31     ` Mark Rutland
  0 siblings, 0 replies; 13+ messages in thread
From: Mark Rutland @ 2023-01-26 10:31 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: linux-arm-kernel, broonie, catalin.marinas, will

On Thu, Jan 26, 2023 at 08:49:38AM +0000, Marc Zyngier wrote:
> On Wed, 25 Jan 2023 16:38:26 +0000,
> Mark Rutland <mark.rutland@arm.com> wrote:
> 
> [...]
> 
> > +static __always_inline void __daif_local_irq_restore(unsigned long flags)
> > +{
> > +	barrier();
> > +	write_sysreg(flags, daif);
> > +	barrier();
> > +}
> > +
> > +static __always_inline void __pmr_local_irq_restore(unsigned long flags)
> > +{
> > +	barrier();
> > +	write_sysreg_s(flags, SYS_ICC_PMR_EL1);
> > +	pmr_sync();
> > +	barrier();
> > +}
> 
> It would be good to at least mention why we need these compile-time
> barriers which are not that explicit in the existing code. I guess
> they are equivalent to the "memory" clobber in the asm sequences that
> this replaces, 

Yes; that was the idea.

> but they don't seem to be used consistently throughout this patch.

Yes; looking with fresh eyes, the missing barriers around PMR manipulation are
buggy. Thanks for pointing that out!

I'll go and add the missing barriers, and update the commit message
accordingly.

I'll also use barrier() around the DAIF helpers rather than a memory clobber so
that the equivalent DAIF and PMR helpers more clearly correspond to one
another.
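
For illustration only, a plain-C sketch of what barrier() buys us, with
an ordinary variable standing in for the system register write (the
names below are hypothetical, not kernel code):

```c
/*
 * barrier() as in the kernel: an empty asm with a "memory" clobber. It
 * emits no instructions and is not a CPU barrier; it only forbids the
 * compiler from reordering or caching memory accesses across it.
 */
#define barrier() __asm__ __volatile__("" ::: "memory")

static int fake_flags_reg;	/* hypothetical stand-in for DAIF / ICC_PMR_EL1 */

static void sketch_irq_restore(int flags)
{
	barrier();			/* accesses before the restore stay before it */
	fake_flags_reg = flags;		/* stands in for write_sysreg(flags, daif) */
	barrier();			/* accesses after the restore stay after it */
}
```

Without the clobbers, the compiler would be free to sink stores past the
flags write (or hoist loads above it), moving memory accesses outside
the region the flags were meant to protect.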

Thanks,
Mark.


^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2023-01-26 10:32 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-01-25 16:38 [PATCH v2 0/5] arm64: pseudo-nmi: elide code when CONFIG_ARM64_PSEUDO_NMI=n Mark Rutland
2023-01-25 16:38 ` [PATCH v2 1/5] arm64: rename ARM64_HAS_SYSREG_GIC_CPUIF to ARM64_HAS_GIC_CPUIF_SYSREGS Mark Rutland
2023-01-25 16:38 ` [PATCH v2 2/5] arm64: rename ARM64_HAS_IRQ_PRIO_MASKING to ARM64_HAS_GIC_PRIO_MASKING Mark Rutland
2023-01-25 16:38 ` [PATCH v2 3/5] arm64: make ARM64_HAS_GIC_PRIO_MASKING depend on ARM64_HAS_GIC_PRIO_MASKING Mark Rutland
2023-01-25 18:05   ` Alexandru Elisei
2023-01-25 18:27     ` Mark Rutland
2023-01-25 16:38 ` [PATCH v2 4/5] arm64: add ARM64_HAS_GIC_PRIO_RELAXED_SYNC cpucap Mark Rutland
2023-01-26  8:31   ` Marc Zyngier
2023-01-26 10:24     ` Mark Rutland
2023-01-25 16:38 ` [PATCH v2 5/5] arm64: irqflags: use alternative branches for pseudo-NMI logic Mark Rutland
2023-01-26  8:49   ` Marc Zyngier
2023-01-26 10:31     ` Mark Rutland
2023-01-26  8:51 ` [PATCH v2 0/5] arm64: pseudo-nmi: elide code when CONFIG_ARM64_PSEUDO_NMI=n Marc Zyngier

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.