* [PATCH v2 00/14] arm64/nmi: Support for FEAT_NMI
@ 2022-11-12 15:16 ` Mark Brown
  0 siblings, 0 replies; 96+ messages in thread
From: Mark Brown @ 2022-11-12 15:16 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Marc Zyngier
  Cc: Lorenzo Pieralisi, Mark Rutland, Sami Mujawar, Thomas Gleixner,
	linux-arm-kernel, kvmarm, Mark Brown

This series enables the architecture and GIC support for the arm64
FEAT_NMI and FEAT_GICv3_NMI extensions in host kernels. These introduce
support for a new category of interrupts in the architecture code which
we can use to provide NMI-like functionality, though the interrupts are
in fact maskable, contrary to what the name implies. The GIC support was
done by Lorenzo Pieralisi.

There are two modes for using FEAT_NMI; the one we use is the one
where SCTLR_EL1.SPINTMASK is clear, which means that any entry to ELn
causes all interrupts, including those with superpriority, to be masked
by a new mask bit PSTATE.ALLINT on entry to ELn until the mask is
explicitly removed by software. PSTATE.ALLINT can be managed by software
using the new register control ALLINT.ALLINT.  Independent controls are
provided for this feature at each EL; usage at EL1 should not disrupt
EL2 or EL3.
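
As a minimal illustration, managing the new mask bit directly looks
like this (a sketch, not part of the series; the named form needs an
assembler aware of FEAT_NMI, otherwise the explicit encodings added
later in the series can be used):

	msr	ALLINT, #1	// set PSTATE.ALLINT, masking everything
				// including superpriority interrupts
	/* ... region that not even NMIs may preempt ... */
	msr	ALLINT, #0	// clear PSTATE.ALLINT again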

To simplify integration we manage masking for superpriority interrupts
along with our masking for DAIF, much as is done for pseudo NMIs. This
means that superpriority interrupts are unmasked whenever DAIF.A is
unmasked. This should ensure that no additional code can be preempted
when using the architected feature. The separate mask in the architected
feature means that we require management of this in the assembly code as
well as in C code; masking DAIF alone is not sufficient to mask
superpriority interrupts.
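
Conceptually the C side of this pairing looks like the following (a
sketch of what local_daif_mask() becomes in patch 10, eliding the PMR
handling and using the helpers added in patch 4):

	static inline void local_daif_mask(void)
	{
		asm volatile("msr daifset, #0xf" : : : "memory");

		/* Superpriority interrupts have their own mask bit */
		if (system_uses_nmi())
			_allint_set();

		trace_hardirqs_off();
	}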

In order to ensure that we do not have both pseudo NMIs and architected
NMIs simultaneously enabled we disable the architected NMIs if pseudo
NMI support is enabled in the kernel and has been requested on the
command line. This avoids any potential confusion or conflict between
the two mechanisms. Since pseudo NMIs require explicit enablement it
seemed most sensible to trust that the user preferred them for some
reason. A feature override is also provided for FEAT_NMI, allowing it to
be directly disabled in case of problems.
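
For example (assuming the override field name added in patch 7 and the
existing pseudo NMI option), the relevant kernel command line controls
would be:

	irqchip.gicv3_pseudo_nmi=1	# opt in to pseudo NMIs
	id_aa64pfr1.nmi=0		# disable FEAT_NMI via the override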

Using this feature in KVM guests will require the implementation of vGIC
support, which is not present in this series; this is due to my and
Lorenzo's schedules not lining up perfectly and a desire to get review
of the architecture side started.  The architecture code should be fine
when running in guests but doesn't accomplish anything and can't be
meaningfully tested without an interrupt controller.  As a result the
feature is not exposed to guests and we enable traps for writes to
ALLINT when running guests, detecting any guests that attempt to use the
feature.  The vGIC support should follow soon.  There is no other usage
of the feature in the hypervisor.

v2:
 - Change approach to mask NMIs along with DAIF, masking whenever
   asynchronous exceptions are masked.
 - Trap writes to ALLINT while running KVM guests.

Lorenzo Pieralisi (1):
  irqchip/gic-v3: Implement FEAT_GICv3_NMI support

Mark Brown (13):
  arm64/booting: Document boot requirements for FEAT_NMI
  arm64/sysreg: Add definition for ICC_NMIAR1_EL1
  arm64/sysreg: Add definition of ISR_EL1
  arm64/sysreg: Add definitions for immediate versions of MSR ALLINT
  arm64/asm: Introduce assembly macros for managing ALLINT
  arm64/hyp-stub: Enable access to ALLINT
  arm64/idreg: Add an override for FEAT_NMI
  arm64/cpufeature: Detect PE support for FEAT_NMI
  KVM: arm64: Hide FEAT_NMI from guests
  arm64/nmi: Manage masking for superpriority interrupts along with DAIF
  arm64/irq: Document handling of FEAT_NMI in irqflags.h
  arm64/nmi: Add handling of superpriority interrupts as NMIs
  arm64/nmi: Add Kconfig for NMI

 Documentation/arm64/booting.rst         |   6 +
 arch/arm64/Kconfig                      |  17 +++
 arch/arm64/include/asm/assembler.h      |  27 +++++
 arch/arm64/include/asm/cpufeature.h     |   6 +
 arch/arm64/include/asm/daifflags.h      |  19 ++++
 arch/arm64/include/asm/irq.h            |   2 +
 arch/arm64/include/asm/irqflags.h       |  10 ++
 arch/arm64/include/asm/nmi.h            |  18 +++
 arch/arm64/include/asm/sysreg.h         |   2 +
 arch/arm64/kernel/cpufeature.c          |  55 ++++++++-
 arch/arm64/kernel/entry-common.c        |  55 +++++++--
 arch/arm64/kernel/hyp-stub.S            |  12 ++
 arch/arm64/kernel/idreg-override.c      |   1 +
 arch/arm64/kernel/irq.c                 |  32 ++++++
 arch/arm64/kvm/hyp/include/hyp/switch.h |   6 +
 arch/arm64/kvm/sys_regs.c               |   1 +
 arch/arm64/tools/cpucaps                |   1 +
 arch/arm64/tools/sysreg                 |  15 +++
 drivers/irqchip/irq-gic-v3.c            | 143 ++++++++++++++++++++----
 include/linux/irqchip/arm-gic-v3.h      |   4 +
 20 files changed, 401 insertions(+), 31 deletions(-)
 create mode 100644 arch/arm64/include/asm/nmi.h


base-commit: 30a0b95b1335e12efef89dd78518ed3e4a71a763
-- 
2.30.2


* [PATCH v2 01/14] arm64/booting: Document boot requirements for FEAT_NMI
  2022-11-12 15:16 ` Mark Brown
@ 2022-11-12 15:16   ` Mark Brown
  -1 siblings, 0 replies; 96+ messages in thread
From: Mark Brown @ 2022-11-12 15:16 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Marc Zyngier
  Cc: Lorenzo Pieralisi, Mark Rutland, Sami Mujawar, Thomas Gleixner,
	linux-arm-kernel, kvmarm, Mark Brown

In order to use FEAT_NMI we must be able to use ALLINT, so require that
it behaves as though not trapped when the feature is present.

Signed-off-by: Mark Brown <broonie@kernel.org>
---
 Documentation/arm64/booting.rst | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/Documentation/arm64/booting.rst b/Documentation/arm64/booting.rst
index 8aefa1001ae5..77d037bc7bf3 100644
--- a/Documentation/arm64/booting.rst
+++ b/Documentation/arm64/booting.rst
@@ -360,6 +360,12 @@ Before jumping into the kernel, the following conditions must be met:
 
     - HCR_EL2.ATA (bit 56) must be initialised to 0b1.
 
+ For CPUs with Non-maskable Interrupts (FEAT_NMI):
+
+ - If the kernel is entered at EL1 and EL2 is present:
+
+   - HCRX_EL2.TALLINT must be initialised to 0b0.
+
 The requirements described above for CPU mode, caches, MMUs, architected
 timers, coherency and system registers apply to all CPUs.  All CPUs must
 enter the kernel in the same exception level.  Where the values documented
-- 
2.30.2


* [PATCH v2 02/14] arm64/sysreg: Add definition for ICC_NMIAR1_EL1
  2022-11-12 15:16 ` Mark Brown
@ 2022-11-12 15:16   ` Mark Brown
  -1 siblings, 0 replies; 96+ messages in thread
From: Mark Brown @ 2022-11-12 15:16 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Marc Zyngier
  Cc: Lorenzo Pieralisi, Mark Rutland, Sami Mujawar, Thomas Gleixner,
	linux-arm-kernel, kvmarm, Mark Brown

FEAT_NMI adds a new interrupt status register for NMIs, ICC_NMIAR1_EL1.
Add the definition for this register as per IHI0069H.

Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/tools/sysreg | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
index 384757a7eda9..5d0d2498c635 100644
--- a/arch/arm64/tools/sysreg
+++ b/arch/arm64/tools/sysreg
@@ -1078,3 +1078,8 @@ Field	23:16	LD
 Res0	15:8
 Field	7:0	LR
 EndSysreg
+
+Sysreg	ICC_NMIAR1_EL1	3	0	12	9	5
+Res0	63:24
+Field	23:0	INTID
+EndSysreg
-- 
2.30.2


* [PATCH v2 03/14] arm64/sysreg: Add definition of ISR_EL1
  2022-11-12 15:16 ` Mark Brown
@ 2022-11-12 15:16   ` Mark Brown
  -1 siblings, 0 replies; 96+ messages in thread
From: Mark Brown @ 2022-11-12 15:16 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Marc Zyngier
  Cc: Lorenzo Pieralisi, Mark Rutland, Sami Mujawar, Thomas Gleixner,
	linux-arm-kernel, kvmarm, Mark Brown

Add a definition of ISR_EL1 as per DDI0487I.a. This register was not
previously defined in sysreg.h; no functional changes.

Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/tools/sysreg | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
index 5d0d2498c635..3660e680b7f5 100644
--- a/arch/arm64/tools/sysreg
+++ b/arch/arm64/tools/sysreg
@@ -1079,6 +1079,16 @@ Res0	15:8
 Field	7:0	LR
 EndSysreg
 
+Sysreg	ISR_EL1	3	0	12	1	0
+Res0	63:11
+Field	10	IS
+Field	9	FS
+Field	8	A
+Field	7	I
+Field	6	F
+Res0	5:0
+EndSysreg
+
 Sysreg	ICC_NMIAR1_EL1	3	0	12	9	5
 Res0	63:24
 Field	23:0	INTID
-- 
2.30.2


* [PATCH v2 04/14] arm64/sysreg: Add definitions for immediate versions of MSR ALLINT
  2022-11-12 15:16 ` Mark Brown
@ 2022-11-12 15:16   ` Mark Brown
  -1 siblings, 0 replies; 96+ messages in thread
From: Mark Brown @ 2022-11-12 15:16 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Marc Zyngier
  Cc: Lorenzo Pieralisi, Mark Rutland, Sami Mujawar, Thomas Gleixner,
	linux-arm-kernel, kvmarm, Mark Brown

Encodings are provided for ALLINT which allow setting of ALLINT.ALLINT
using an immediate rather than requiring that a register be loaded with
the value to write. Since these don't currently fit within the scheme we
have for sysreg generation, add manual encodings like we currently do
for other similar registers such as SVCR.

Since it is required that these immediate versions be encoded with xzr
as the source register, provide asm wrappers which ensure this is the
case.
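
For reference, the wrappers should expand to the architected immediate
forms, roughly (a sketch of the expected encodings, derived from the
sys_reg() values below):

	msr	S0_1_C4_C1_0, xzr	// MSR ALLINT, #1 (SYS_ALLINT_SET)
	msr	S0_1_C4_C0_0, xzr	// MSR ALLINT, #0 (SYS_ALLINT_CLR)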

Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/include/asm/daifflags.h |  1 +
 arch/arm64/include/asm/nmi.h       | 18 ++++++++++++++++++
 arch/arm64/include/asm/sysreg.h    |  2 ++
 3 files changed, 21 insertions(+)
 create mode 100644 arch/arm64/include/asm/nmi.h

diff --git a/arch/arm64/include/asm/daifflags.h b/arch/arm64/include/asm/daifflags.h
index 55f57dfa8e2f..b3bed2004342 100644
--- a/arch/arm64/include/asm/daifflags.h
+++ b/arch/arm64/include/asm/daifflags.h
@@ -141,4 +141,5 @@ static inline void local_daif_inherit(struct pt_regs *regs)
 	 */
 	write_sysreg(flags, daif);
 }
+
 #endif
diff --git a/arch/arm64/include/asm/nmi.h b/arch/arm64/include/asm/nmi.h
new file mode 100644
index 000000000000..067e2554e144
--- /dev/null
+++ b/arch/arm64/include/asm/nmi.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2022 ARM Ltd.
+ */
+#ifndef __ASM_NMI_H
+#define __ASM_NMI_H
+
+static __always_inline void _allint_clear(void)
+{
+	asm volatile(__msr_s(SYS_ALLINT_CLR, "xzr"));
+}
+
+static __always_inline void _allint_set(void)
+{
+	asm volatile(__msr_s(SYS_ALLINT_SET, "xzr"));
+}
+
+#endif
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 7d301700d1a9..0c07b740c750 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -126,6 +126,8 @@
  * System registers, organised loosely by encoding but grouped together
  * where the architected name contains an index. e.g. ID_MMFR<n>_EL1.
  */
+#define SYS_ALLINT_CLR			sys_reg(0, 1, 4, 0, 0)
+#define SYS_ALLINT_SET			sys_reg(0, 1, 4, 1, 0)
 #define SYS_SVCR_SMSTOP_SM_EL0		sys_reg(0, 3, 4, 2, 3)
 #define SYS_SVCR_SMSTART_SM_EL0		sys_reg(0, 3, 4, 3, 3)
 #define SYS_SVCR_SMSTOP_SMZA_EL0	sys_reg(0, 3, 4, 6, 3)
-- 
2.30.2


* [PATCH v2 05/14] arm64/asm: Introduce assembly macros for managing ALLINT
  2022-11-12 15:16 ` Mark Brown
@ 2022-11-12 15:16   ` Mark Brown
  -1 siblings, 0 replies; 96+ messages in thread
From: Mark Brown @ 2022-11-12 15:16 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Marc Zyngier
  Cc: Lorenzo Pieralisi, Mark Rutland, Sami Mujawar, Thomas Gleixner,
	linux-arm-kernel, kvmarm, Mark Brown

In order to allow assembly code to ensure that not even superpriority
interrupts can preempt it, provide macros for enabling and disabling
ALLINT.ALLINT.  This is not integrated into the existing DAIF macros
since we do not always wish to manage ALLINT along with DAIF, and the
use of DAIF in the naming of the existing macros might lead to surprises
if ALLINT were also managed.
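
A sketch of the intended usage from assembly (a hypothetical caller,
not part of this patch):

	disable_allint			// mask superpriority interrupts
	/* code that must not be preempted, even by NMIs */
	enable_allint			// unmask them again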

Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/include/asm/assembler.h | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index e5957a53be39..88d9779a83c0 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -34,6 +34,22 @@
 	wx\n	.req	w\n
 	.endr
 
+	.macro	disable_allint
+#ifdef CONFIG_ARM64_NMI
+alternative_if ARM64_HAS_NMI
+	msr_s	SYS_ALLINT_SET, xzr
+alternative_else_nop_endif
+#endif
+	.endm
+
+	.macro	enable_allint
+#ifdef CONFIG_ARM64_NMI
+alternative_if ARM64_HAS_NMI
+	msr_s	SYS_ALLINT_CLR, xzr
+alternative_else_nop_endif
+#endif
+	.endm
+
 	.macro save_and_disable_daif, flags
 	mrs	\flags, daif
 	msr	daifset, #0xf
-- 
2.30.2


* [PATCH v2 06/14] arm64/hyp-stub: Enable access to ALLINT
  2022-11-12 15:16 ` Mark Brown
@ 2022-11-12 15:17   ` Mark Brown
  -1 siblings, 0 replies; 96+ messages in thread
From: Mark Brown @ 2022-11-12 15:17 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Marc Zyngier
  Cc: Lorenzo Pieralisi, Mark Rutland, Sami Mujawar, Thomas Gleixner,
	linux-arm-kernel, kvmarm, Mark Brown

In order to use NMIs we need to ensure that traps for ALLINT are
disabled, so update HCRX_EL2 to ensure that TALLINT is not set when we
detect support for NMIs.

Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/kernel/hyp-stub.S | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S
index 2ee18c860f2a..4e0b06467973 100644
--- a/arch/arm64/kernel/hyp-stub.S
+++ b/arch/arm64/kernel/hyp-stub.S
@@ -151,6 +151,18 @@ SYM_CODE_START_LOCAL(__finalise_el2)
 
 .Lskip_sme:
 
+	// NMIs
+	__check_override id_aa64pfr1 ID_AA64PFR1_EL1_NMI_SHIFT 4 .Linit_nmi .Lskip_nmi
+.Linit_nmi:
+	mrs	x1, id_aa64mmfr1_el1		// HCRX_EL2 present?
+	ubfx	x1, x1, #ID_AA64MMFR1_EL1_HCX_SHIFT, #4
+	cbz	x1, .Lskip_nmi
+
+	mrs_s	x1, SYS_HCRX_EL2
+	and	x1, x1, #~HCRX_EL2_TALLINT_MASK	// Don't trap ALLINT
+	msr_s	SYS_HCRX_EL2, x1
+.Lskip_nmi:
+
 	// nVHE? No way! Give me the real thing!
 	// Sanity check: MMU *must* be off
 	mrs	x1, sctlr_el2
-- 
2.30.2


* [PATCH v2 07/14] arm64/idreg: Add an override for FEAT_NMI
  2022-11-12 15:16 ` Mark Brown
@ 2022-11-12 15:17   ` Mark Brown
  -1 siblings, 0 replies; 96+ messages in thread
From: Mark Brown @ 2022-11-12 15:17 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Marc Zyngier
  Cc: Lorenzo Pieralisi, Mark Rutland, Sami Mujawar, Thomas Gleixner,
	linux-arm-kernel, kvmarm, Mark Brown

Add a named override for FEAT_NMI, allowing it to be explicitly disabled
in case of problems.

Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/kernel/idreg-override.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c
index 95133765ed29..bb25aa3a414b 100644
--- a/arch/arm64/kernel/idreg-override.c
+++ b/arch/arm64/kernel/idreg-override.c
@@ -100,6 +100,7 @@ static const struct ftr_set_desc pfr1 __initconst = {
 	.fields		= {
 		FIELD("bt", ID_AA64PFR1_EL1_BT_SHIFT, NULL ),
 		FIELD("mte", ID_AA64PFR1_EL1_MTE_SHIFT, NULL),
+		FIELD("nmi", ID_AA64PFR1_EL1_NMI_SHIFT, NULL),
 		FIELD("sme", ID_AA64PFR1_EL1_SME_SHIFT, pfr1_sme_filter),
 		{}
 	},
-- 
2.30.2


* [PATCH v2 08/14] arm64/cpufeature: Detect PE support for FEAT_NMI
  2022-11-12 15:16 ` Mark Brown
@ 2022-11-12 15:17   ` Mark Brown
  -1 siblings, 0 replies; 96+ messages in thread
From: Mark Brown @ 2022-11-12 15:17 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Marc Zyngier
  Cc: Lorenzo Pieralisi, Mark Rutland, Sami Mujawar, Thomas Gleixner,
	linux-arm-kernel, kvmarm, Mark Brown

Use of FEAT_NMI requires that all the PEs in the system and the GIC have
NMI support. This patch implements the PE part of that detection.

In order to avoid problematic interactions between real and pseudo NMIs
we disable the architected feature if the user has enabled pseudo NMIs
on the command line. If this is done on a system where support for the
architected feature is detected then a message is printed during boot in
order to help users spot what is likely to be a misconfiguration.
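
With this applied, booting on a system with the feature should log
something like the following via the generic cpufeature code
(hypothetical output):

	CPU features: detected: Non-maskable Interrupts

while the pr_info() below is printed instead if pseudo NMIs were also
requested on the command line.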

Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/include/asm/cpufeature.h |  6 ++++
 arch/arm64/kernel/cpufeature.c      | 55 ++++++++++++++++++++++++++++-
 arch/arm64/tools/cpucaps            |  1 +
 3 files changed, 61 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index f73f11b55042..85eeb331a0ef 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -809,6 +809,12 @@ static __always_inline bool system_uses_irq_prio_masking(void)
 	       cpus_have_const_cap(ARM64_HAS_IRQ_PRIO_MASKING);
 }
 
+static __always_inline bool system_uses_nmi(void)
+{
+	return IS_ENABLED(CONFIG_ARM64_NMI) &&
+		cpus_have_const_cap(ARM64_HAS_NMI);
+}
+
 static inline bool system_supports_mte(void)
 {
 	return IS_ENABLED(CONFIG_ARM64_MTE) &&
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 6062454a9067..18ab50b76f50 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -84,6 +84,7 @@
 #include <asm/kvm_host.h>
 #include <asm/mmu_context.h>
 #include <asm/mte.h>
+#include <asm/nmi.h>
 #include <asm/processor.h>
 #include <asm/smp.h>
 #include <asm/sysreg.h>
@@ -243,6 +244,7 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
 };
 
 static const struct arm64_ftr_bits ftr_id_aa64pfr1[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_NMI_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SME),
 		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_SME_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_MPAM_frac_SHIFT, 4, 0),
@@ -2008,9 +2010,11 @@ static void cpu_enable_e0pd(struct arm64_cpu_capabilities const *cap)
 }
 #endif /* CONFIG_ARM64_E0PD */
 
-#ifdef CONFIG_ARM64_PSEUDO_NMI
+#if IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) || IS_ENABLED(CONFIG_ARM64_NMI)
 static bool enable_pseudo_nmi;
+#endif
 
+#ifdef CONFIG_ARM64_PSEUDO_NMI
 static int __init early_enable_pseudo_nmi(char *p)
 {
 	return strtobool(p, &enable_pseudo_nmi);
@@ -2024,6 +2028,41 @@ static bool can_use_gic_priorities(const struct arm64_cpu_capabilities *entry,
 }
 #endif
 
+#ifdef CONFIG_ARM64_NMI
+static bool has_nmi(const struct arm64_cpu_capabilities *entry, int scope)
+{
+	if (!has_cpuid_feature(entry, scope))
+		return false;
+
+	/*
+	 * Having both real and pseudo NMIs enabled simultaneously is
+	 * likely to cause confusion.  Since pseudo NMIs must be
+	 * enabled with an explicit command line option, if the user
+	 * has set that option on a system with real NMIs for some
+	 * reason assume they know what they're doing.
+	 */
+	if (IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) && enable_pseudo_nmi) {
+		pr_info("Pseudo NMI enabled, not using architected NMI\n");
+		return false;
+	}
+
+	return true;
+}
+
+static void nmi_enable(const struct arm64_cpu_capabilities *__unused)
+{
+	/*
+	 * Enable use of NMIs controlled by ALLINT, SPINTMASK should
+	 * be clear by default but make it explicit that we are using
+	 * this mode.  Ensure that ALLINT is clear first in order to
+	 * avoid leaving things masked.
+	 */
+	_allint_clear();
+	sysreg_clear_set(sctlr_el1, SCTLR_EL1_SPINTMASK, SCTLR_EL1_NMI);
+	isb();
+}
+#endif
+
 #ifdef CONFIG_ARM64_BTI
 static void bti_enable(const struct arm64_cpu_capabilities *__unused)
 {
@@ -2640,6 +2679,20 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.matches = has_cpuid_feature,
 		.cpu_enable = cpu_trap_el0_impdef,
 	},
+#ifdef CONFIG_ARM64_NMI
+	{
+		.desc = "Non-maskable Interrupts",
+		.capability = ARM64_HAS_NMI,
+		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
+		.sys_reg = SYS_ID_AA64PFR1_EL1,
+		.sign = FTR_UNSIGNED,
+		.field_pos = ID_AA64PFR1_EL1_NMI_SHIFT,
+		.field_width = 4,
+		.min_field_value = ID_AA64PFR1_EL1_NMI_IMP,
+		.matches = has_nmi,
+		.cpu_enable = nmi_enable,
+	},
+#endif
 	{},
 };
 
diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
index f1c0347ec31a..fff7517ea590 100644
--- a/arch/arm64/tools/cpucaps
+++ b/arch/arm64/tools/cpucaps
@@ -30,6 +30,7 @@ HAS_GENERIC_AUTH_IMP_DEF
 HAS_IRQ_PRIO_MASKING
 HAS_LDAPR
 HAS_LSE_ATOMICS
+HAS_NMI
 HAS_NO_FPSIMD
 HAS_NO_HW_PREFETCH
 HAS_PAN
-- 
2.30.2


* [PATCH v2 09/14] KVM: arm64: Hide FEAT_NMI from guests
  2022-11-12 15:16 ` Mark Brown
@ 2022-11-12 15:17   ` Mark Brown
  -1 siblings, 0 replies; 96+ messages in thread
From: Mark Brown @ 2022-11-12 15:17 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Marc Zyngier
  Cc: Lorenzo Pieralisi, Mark Rutland, Sami Mujawar, Thomas Gleixner,
	linux-arm-kernel, kvmarm, Mark Brown

FEAT_NMI is not yet useful to guests pending implementation of vGIC
support. Mask out the feature from the ID register and prevent guests
from creating state in ALLINT.ALLINT by activating the write trap
provided by HCRX_EL2.TALLINT while they are running. There is no trap
available for reads from ALLINT.

We do not need to check for FEAT_HCRX since it is mandatory since v8.7
and FEAT_NMI is a v8.8 feature.

Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/kvm/hyp/include/hyp/switch.h | 6 ++++++
 arch/arm64/kvm/sys_regs.c               | 1 +
 2 files changed, 7 insertions(+)

diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 6cbbb6c02f66..89e78c4e5cce 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -85,6 +85,9 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
 		write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0);
 	}
 
+	if (cpus_have_final_cap(ARM64_HAS_NMI))
+		sysreg_clear_set_s(SYS_HCRX_EL2, 0, HCRX_EL2_TALLINT);
+
 	vcpu->arch.mdcr_el2_host = read_sysreg(mdcr_el2);
 	write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
 }
@@ -93,6 +96,9 @@ static inline void __deactivate_traps_common(struct kvm_vcpu *vcpu)
 {
 	write_sysreg(vcpu->arch.mdcr_el2_host, mdcr_el2);
 
+	if (cpus_have_final_cap(ARM64_HAS_NMI))
+		sysreg_clear_set_s(SYS_HCRX_EL2, HCRX_EL2_TALLINT, 0);
+
 	write_sysreg(0, hstr_el2);
 	if (kvm_arm_support_pmu_v3())
 		write_sysreg(0, pmuserenr_el0);
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index f4a7c5abcbca..1bd4d4109a05 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1092,6 +1092,7 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r
 			val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE);
 
 		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_SME);
+		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_NMI);
 		break;
 	case SYS_ID_AA64ISAR1_EL1:
 		if (!vcpu_has_ptrauth(vcpu))
-- 
2.30.2


* [PATCH v2 10/14] arm64/nmi: Manage masking for superpriority interrupts along with DAIF
  2022-11-12 15:16 ` Mark Brown
@ 2022-11-12 15:17   ` Mark Brown
  -1 siblings, 0 replies; 96+ messages in thread
From: Mark Brown @ 2022-11-12 15:17 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Marc Zyngier
  Cc: Lorenzo Pieralisi, Mark Rutland, Sami Mujawar, Thomas Gleixner,
	linux-arm-kernel, kvmarm, Mark Brown

As we do for pseudo NMIs, add code to our DAIF management which keeps
superpriority interrupts unmasked when we have asynchronous exceptions
enabled. Since superpriority interrupts are not masked through DAIF as
pseudo NMIs are, we also need to modify the assembler macros for
managing DAIF to ensure that the masking is done in the assembly code.
At present users of the assembly macros always mask pseudo NMIs.

There is a difference in the actual handling between pseudo NMIs and
superpriority interrupts in the assembly save_and_disable_irq and
restore_irq macros: these cover both interrupts and FIQs using DAIF
without regard for the use of pseudo NMIs, so they also mask those, but
they are not updated here to mask superpriority interrupts. Given the
names it is not clear that the behaviour with pseudo NMIs is
particularly intentional, and in any case these macros are only used in
the implementation of alternatives for software PAN while hardware PAN
has been mandatory since v8.1, so it is not anticipated that practical
systems with support for FEAT_NMI will ever execute the affected code.

This should be a conservative set of masked regions; we may be able to
relax this in future, but it should represent a good starting point.

Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/include/asm/assembler.h | 11 +++++++++++
 arch/arm64/include/asm/daifflags.h | 18 ++++++++++++++++++
 2 files changed, 29 insertions(+)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 88d9779a83c0..e85a7e9af9ae 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -52,19 +52,30 @@ alternative_else_nop_endif
 
 	.macro save_and_disable_daif, flags
 	mrs	\flags, daif
+	disable_allint
 	msr	daifset, #0xf
 	.endm
 
 	.macro disable_daif
+	disable_allint
 	msr	daifset, #0xf
 	.endm
 
 	.macro enable_daif
 	msr	daifclr, #0xf
+	enable_allint
 	.endm
 
 	.macro	restore_daif, flags:req
 	msr	daif, \flags
+#ifdef CONFIG_ARM64_NMI
+alternative_if ARM64_HAS_NMI
+	/* If async exceptions are unmasked we can take NMIs */
+	tbnz	\flags, #8, 2004f
+	msr_s	SYS_ALLINT_CLR, xzr
+2004:
+alternative_else_nop_endif
+#endif
 	.endm
 
 	/* IRQ/FIQ are the lowest priority flags, unconditionally unmask the rest. */
diff --git a/arch/arm64/include/asm/daifflags.h b/arch/arm64/include/asm/daifflags.h
index b3bed2004342..fda73976068f 100644
--- a/arch/arm64/include/asm/daifflags.h
+++ b/arch/arm64/include/asm/daifflags.h
@@ -10,6 +10,7 @@
 #include <asm/arch_gicv3.h>
 #include <asm/barrier.h>
 #include <asm/cpufeature.h>
+#include <asm/nmi.h>
 #include <asm/ptrace.h>
 
 #define DAIF_PROCCTX		0
@@ -35,6 +36,9 @@ static inline void local_daif_mask(void)
 	if (system_uses_irq_prio_masking())
 		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
 
+	if (system_uses_nmi())
+		_allint_set();
+
 	trace_hardirqs_off();
 }
 
@@ -50,6 +54,12 @@ static inline unsigned long local_daif_save_flags(void)
 			flags |= PSR_I_BIT | PSR_F_BIT;
 	}
 
+	if (system_uses_nmi()) {
+	/* If IRQs are masked with ALLINT, reflect it in the flags */
+		if (read_sysreg_s(SYS_ALLINT) & ALLINT_ALLINT)
+			flags |= PSR_I_BIT | PSR_F_BIT;
+	}
+
 	return flags;
 }
 
@@ -114,6 +124,10 @@ static inline void local_daif_restore(unsigned long flags)
 		gic_write_pmr(pmr);
 	}
 
+	/* If we can take asynchronous errors we can take NMIs */
+	if (system_uses_nmi() && !(flags & PSR_A_BIT))
+		_allint_clear();
+
 	write_sysreg(flags, daif);
 
 	if (irq_disabled)
@@ -131,6 +145,10 @@ static inline void local_daif_inherit(struct pt_regs *regs)
 	if (interrupts_enabled(regs))
 		trace_hardirqs_on();
 
+	/* If we can take asynchronous errors we can take NMIs */
+	if (system_uses_nmi() && !(flags & PSR_A_BIT))
+		_allint_clear();
+
 	if (system_uses_irq_prio_masking())
 		gic_write_pmr(regs->pmr_save);
 
-- 
2.30.2


* [PATCH v2 11/14] arm64/irq: Document handling of FEAT_NMI in irqflags.h
  2022-11-12 15:16 ` Mark Brown
@ 2022-11-12 15:17   ` Mark Brown
  -1 siblings, 0 replies; 96+ messages in thread
From: Mark Brown @ 2022-11-12 15:17 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Marc Zyngier
  Cc: Lorenzo Pieralisi, Mark Rutland, Sami Mujawar, Thomas Gleixner,
	linux-arm-kernel, kvmarm, Mark Brown

We have documentation at the top of irqflags.h which explains the DAIF
masking. Since the additional masking with NMIs is related, and ALLINT
also covers the I and F bits in DAIF, extend the comment to note what's
going on with NMIs, though none of the code in irqflags.h is updated to
handle NMIs.
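
The extended comment notes that anything which needs NMIs masked must do
so explicitly; as a minimal sketch of that (illustrative only, assuming
the system_uses_nmi() and _allint_*() helpers introduced earlier in the
series):

	#include <asm/cpufeature.h>
	#include <asm/nmi.h>

	static void example_nmi_unsafe_region(void)
	{
		if (system_uses_nmi())
			_allint_set();		/* mask superpriority interrupts */

		/* ... work that must not be interrupted by an NMI ... */

		if (system_uses_nmi())
			_allint_clear();	/* unmask them again */
	}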

Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/include/asm/irqflags.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/arm64/include/asm/irqflags.h b/arch/arm64/include/asm/irqflags.h
index b57b9b1e4344..e3f68db456e3 100644
--- a/arch/arm64/include/asm/irqflags.h
+++ b/arch/arm64/include/asm/irqflags.h
@@ -19,6 +19,16 @@
  * always masked and unmasked together, and have no side effects for other
  * flags. Keeping to this order makes it easier for entry.S to know which
  * exceptions should be unmasked.
+ *
+ * With the addition of the FEAT_NMI extension we gain an additional
+ * class of superpriority IRQ/FIQ which is separately masked with a
+ * choice of modes controlled by SCTLR_ELn.{SPINTMASK,NMI}.  Linux
+ * sets SPINTMASK to 0 and NMI to 1 which results in ALLINT.ALLINT
+ * masking both superpriority interrupts and IRQ/FIQ regardless of the
+ * I and F settings. Since these superpriority interrupts are being
+ * used as NMIs we do not include them in the interrupt masking here,
+ * anything that requires that NMIs be masked needs to explicitly do
+ * so.
  */
 
 /*
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH v2 12/14] arm64/nmi: Add handling of superpriority interrupts as NMIs
  2022-11-12 15:16 ` Mark Brown
@ 2022-11-12 15:17   ` Mark Brown
  -1 siblings, 0 replies; 96+ messages in thread
From: Mark Brown @ 2022-11-12 15:17 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Marc Zyngier
  Cc: Lorenzo Pieralisi, Mark Rutland, Sami Mujawar, Thomas Gleixner,
	linux-arm-kernel, kvmarm, Mark Brown

Our goal with superpriority interrupts is to use them as NMIs, taking
advantage of the much smaller regions where they are masked to allow
prompt handling of the most time-critical interrupts.

When an interrupt is configured with superpriority we will enter EL1 as
we would for any interrupt; the presence of a superpriority interrupt is
indicated with a status bit in ISR_EL1. We use this to check for the
presence of a superpriority interrupt before we unmask anything in
elX_interrupt(), handling it without unmasking any interrupts. If no
superpriority interrupt is present then we handle normal interrupts as
usual; superpriority interrupts will be unmasked while doing so as a
result of setting DAIF_PROCCTX.

Both IRQs and FIQs may be configured with superpriority so we handle
both, passing an additional root handler into the elX_interrupt()
function along with the mask for the bit in ISR_EL1 which indicates the
presence of the relevant kind of superpriority interrupt. These root
handlers can be configured by the interrupt controller similarly to the
root handlers for normal interrupts using the newly added
set_handle_nmi_irq() and set_handle_nmi_fiq() functions.
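
As a usage sketch (hypothetical driver code, not part of the patch), an
irqchip registers its root superpriority handler much as it does the
normal root handlers:

	#include <asm/irq.h>

	/* Hypothetical handler, for illustration only */
	static void example_handle_nmi(struct pt_regs *regs)
	{
		/* acknowledge and dispatch the superpriority interrupt */
	}

	static int __init example_irqchip_init(void)
	{
		/* Fails with -EBUSY if a root NMI handler is already set */
		return set_handle_nmi_irq(example_handle_nmi);
	}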

Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/include/asm/irq.h     |  2 ++
 arch/arm64/kernel/entry-common.c | 55 +++++++++++++++++++++++++++-----
 arch/arm64/kernel/irq.c          | 32 +++++++++++++++++++
 3 files changed, 81 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/irq.h b/arch/arm64/include/asm/irq.h
index fac08e18bcd5..2ab05d899bf6 100644
--- a/arch/arm64/include/asm/irq.h
+++ b/arch/arm64/include/asm/irq.h
@@ -8,6 +8,8 @@
 
 struct pt_regs;
 
+int set_handle_nmi_irq(void (*handle_irq)(struct pt_regs *));
+int set_handle_nmi_fiq(void (*handle_fiq)(struct pt_regs *));
 int set_handle_irq(void (*handle_irq)(struct pt_regs *));
 #define set_handle_irq	set_handle_irq
 int set_handle_fiq(void (*handle_fiq)(struct pt_regs *));
diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 9173fad279af..eb6fc718737e 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -278,6 +278,8 @@ static void do_interrupt_handler(struct pt_regs *regs,
 	set_irq_regs(old_regs);
 }
 
+extern void (*handle_arch_nmi_irq)(struct pt_regs *);
+extern void (*handle_arch_nmi_fiq)(struct pt_regs *);
 extern void (*handle_arch_irq)(struct pt_regs *);
 extern void (*handle_arch_fiq)(struct pt_regs *);
 
@@ -453,6 +455,14 @@ asmlinkage void noinstr el1h_64_sync_handler(struct pt_regs *regs)
 	}
 }
 
+static __always_inline void __el1_nmi(struct pt_regs *regs,
+				      void (*handler)(struct pt_regs *))
+{
+	arm64_enter_nmi(regs);
+	do_interrupt_handler(regs, handler);
+	arm64_exit_nmi(regs);
+}
+
 static __always_inline void __el1_pnmi(struct pt_regs *regs,
 				       void (*handler)(struct pt_regs *))
 {
@@ -474,9 +484,19 @@ static __always_inline void __el1_irq(struct pt_regs *regs,
 
 	exit_to_kernel_mode(regs);
 }
-static void noinstr el1_interrupt(struct pt_regs *regs,
-				  void (*handler)(struct pt_regs *))
+
+static void noinstr el1_interrupt(struct pt_regs *regs, u64 nmi_flag,
+				  void (*handler)(struct pt_regs *),
+				  void (*nmi_handler)(struct pt_regs *))
 {
+	if (system_uses_nmi()) {
+		/* Is there an NMI to handle? */
+		if (read_sysreg(isr_el1) & nmi_flag) {
+			__el1_nmi(regs, nmi_handler);
+			return;
+		}
+	}
+
 	write_sysreg(DAIF_PROCCTX_NOIRQ, daif);
 
 	if (IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) && !interrupts_enabled(regs))
@@ -487,12 +507,12 @@ static void noinstr el1_interrupt(struct pt_regs *regs,
 
 asmlinkage void noinstr el1h_64_irq_handler(struct pt_regs *regs)
 {
-	el1_interrupt(regs, handle_arch_irq);
+	el1_interrupt(regs, ISR_EL1_IS, handle_arch_irq, handle_arch_nmi_irq);
 }
 
 asmlinkage void noinstr el1h_64_fiq_handler(struct pt_regs *regs)
 {
-	el1_interrupt(regs, handle_arch_fiq);
+	el1_interrupt(regs, ISR_EL1_FS, handle_arch_fiq, handle_arch_nmi_fiq);
 }
 
 asmlinkage void noinstr el1h_64_error_handler(struct pt_regs *regs)
@@ -701,11 +721,30 @@ asmlinkage void noinstr el0t_64_sync_handler(struct pt_regs *regs)
 	}
 }
 
-static void noinstr el0_interrupt(struct pt_regs *regs,
-				  void (*handler)(struct pt_regs *))
+static void noinstr el0_interrupt(struct pt_regs *regs, u64 nmi_flag,
+				  void (*handler)(struct pt_regs *),
+				  void (*nmi_handler)(struct pt_regs *))
 {
 	enter_from_user_mode(regs);
 
+	if (system_uses_nmi()) {
+		/* Is there an NMI to handle? */
+		if (read_sysreg(isr_el1) & nmi_flag) {
+			/*
+			 * Any system with FEAT_NMI should not be
+			 * affected by Spectre v2 so we don't mitigate
+			 * here.
+			 */
+
+			arm64_enter_nmi(regs);
+			do_interrupt_handler(regs, nmi_handler);
+			arm64_exit_nmi(regs);
+
+			exit_to_user_mode(regs);
+			return;
+		}
+	}
+
 	write_sysreg(DAIF_PROCCTX_NOIRQ, daif);
 
 	if (regs->pc & BIT(55))
@@ -720,7 +759,7 @@ static void noinstr el0_interrupt(struct pt_regs *regs,
 
 static void noinstr __el0_irq_handler_common(struct pt_regs *regs)
 {
-	el0_interrupt(regs, handle_arch_irq);
+	el0_interrupt(regs, ISR_EL1_IS, handle_arch_irq, handle_arch_nmi_irq);
 }
 
 asmlinkage void noinstr el0t_64_irq_handler(struct pt_regs *regs)
@@ -730,7 +769,7 @@ asmlinkage void noinstr el0t_64_irq_handler(struct pt_regs *regs)
 
 static void noinstr __el0_fiq_handler_common(struct pt_regs *regs)
 {
-	el0_interrupt(regs, handle_arch_fiq);
+	el0_interrupt(regs, ISR_EL1_FS, handle_arch_fiq, handle_arch_nmi_fiq);
 }
 
 asmlinkage void noinstr el0t_64_fiq_handler(struct pt_regs *regs)
diff --git a/arch/arm64/kernel/irq.c b/arch/arm64/kernel/irq.c
index 38dbd3828f13..77a1ea90b244 100644
--- a/arch/arm64/kernel/irq.c
+++ b/arch/arm64/kernel/irq.c
@@ -85,6 +85,16 @@ void do_softirq_own_stack(void)
 }
 #endif
 
+static void default_handle_nmi_irq(struct pt_regs *regs)
+{
+	panic("Superpriority IRQ taken without a root NMI IRQ handler\n");
+}
+
+static void default_handle_nmi_fiq(struct pt_regs *regs)
+{
+	panic("Superpriority FIQ taken without a root NMI FIQ handler\n");
+}
+
 static void default_handle_irq(struct pt_regs *regs)
 {
 	panic("IRQ taken without a root IRQ handler\n");
@@ -95,9 +105,31 @@ static void default_handle_fiq(struct pt_regs *regs)
 	panic("FIQ taken without a root FIQ handler\n");
 }
 
+void (*handle_arch_nmi_irq)(struct pt_regs *) __ro_after_init = default_handle_nmi_irq;
+void (*handle_arch_nmi_fiq)(struct pt_regs *) __ro_after_init = default_handle_nmi_fiq;
 void (*handle_arch_irq)(struct pt_regs *) __ro_after_init = default_handle_irq;
 void (*handle_arch_fiq)(struct pt_regs *) __ro_after_init = default_handle_fiq;
 
+int __init set_handle_nmi_irq(void (*handle_nmi_irq)(struct pt_regs *))
+{
+	if (handle_arch_nmi_irq != default_handle_nmi_irq)
+		return -EBUSY;
+
+	handle_arch_nmi_irq = handle_nmi_irq;
+	pr_info("Root superpriority IRQ handler: %ps\n", handle_nmi_irq);
+	return 0;
+}
+
+int __init set_handle_nmi_fiq(void (*handle_nmi_fiq)(struct pt_regs *))
+{
+	if (handle_arch_nmi_fiq != default_handle_nmi_fiq)
+		return -EBUSY;
+
+	handle_arch_nmi_fiq = handle_nmi_fiq;
+	pr_info("Root superpriority FIQ handler: %ps\n", handle_nmi_fiq);
+	return 0;
+}
+
 int __init set_handle_irq(void (*handle_irq)(struct pt_regs *))
 {
 	if (handle_arch_irq != default_handle_irq)
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH v2 13/14] arm64/nmi: Add Kconfig for NMI
  2022-11-12 15:16 ` Mark Brown
@ 2022-11-12 15:17   ` Mark Brown
  -1 siblings, 0 replies; 96+ messages in thread
From: Mark Brown @ 2022-11-12 15:17 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Marc Zyngier
  Cc: Lorenzo Pieralisi, Mark Rutland, Sami Mujawar, Thomas Gleixner,
	linux-arm-kernel, kvmarm, Mark Brown

Since NMI handling is in some fairly hot paths, we provide a Kconfig option
which allows support to be compiled out when not needed.
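
A sketch of the assumed shape of the runtime check (the real helper is
added elsewhere in the series; this is only a guess at its form): with
CONFIG_ARM64_NMI=n the capability test can resolve to a constant false,
letting the compiler drop the NMI paths entirely.

	/* Assumed shape, for illustration only */
	static inline bool system_uses_nmi(void)
	{
		return IS_ENABLED(CONFIG_ARM64_NMI) &&
		       cpus_have_final_cap(ARM64_HAS_NMI);
	}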

Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/Kconfig | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 505c8a1ccbe0..dd987a9a4f81 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -2001,6 +2001,23 @@ config ARM64_EPAN
 	  if the cpu does not implement the feature.
 endmenu # "ARMv8.7 architectural features"
 
+menu "ARMv8.8 architectural features"
+
+config ARM64_NMI
+	bool "Enable support for Non-maskable Interrupts (NMI)"
+	default y
+	help
+	  Non-maskable interrupts are an architecture and GIC feature
+	  which allows some interrupts to be configured with
+	  superpriority, allowing them to be handled before other
+	  interrupts and masked for shorter periods of time.
+
+	  The feature is detected at runtime, and will remain disabled
+	  if the cpu does not implement the feature. It will also be
+	  disabled if pseudo NMIs are enabled at runtime.
+
+endmenu # "ARMv8.8 architectural features"
+
 config ARM64_SVE
 	bool "ARM Scalable Vector Extension support"
 	default y
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH v2 14/14] irqchip/gic-v3: Implement FEAT_GICv3_NMI support
  2022-11-12 15:16 ` Mark Brown
@ 2022-11-12 15:17   ` Mark Brown
  -1 siblings, 0 replies; 96+ messages in thread
From: Mark Brown @ 2022-11-12 15:17 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Marc Zyngier
  Cc: Lorenzo Pieralisi, Mark Rutland, Sami Mujawar, Thomas Gleixner,
	linux-arm-kernel, kvmarm, Mark Brown

From: Lorenzo Pieralisi <lpieralisi@kernel.org>

The FEAT_GICv3_NMI GIC feature coupled with the CPU FEAT_NMI enables
handling NMI interrupts in HW on aarch64, by adding a superpriority
interrupt to the existing GIC priority scheme.

Implement GIC driver support for the FEAT_GICv3_NMI feature.

Rename gic_supports_nmi() helper function to gic_supports_pseudo_nmis()
to make the pseudo NMIs code path clearer and more explicit.

Check, through the ARM64 capability infrastructure, whether support
for FEAT_NMI was detected on the core and the system has not overridden
the detection and forced pseudo-NMI enablement.

If FEAT_NMI is detected, it was not overridden (a check embedded in the
system_uses_nmi() call) and the GIC supports the FEAT_GICv3_NMI feature,
install an NMI handler and initialize the NMI-related GIC HW registers.
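
From a driver's point of view the request path is expected to stay the
same: NMIs are still requested through the generic API, and the GIC
decides whether to back them with GICD_INMIR (FEAT_GICv3_NMI) or with
priorities (pseudo NMIs). A hedged sketch with hypothetical names:

	#include <linux/interrupt.h>

	static irqreturn_t example_nmi_handler(int irq, void *dev_id)
	{
		/* NMI context: NMI-safe operations only */
		return IRQ_HANDLED;
	}

	static int example_setup(unsigned int irq, void *dev)
	{
		/*
		 * Ends up in gic_irq_nmi_setup(), which with
		 * FEAT_GICv3_NMI programs GICD_INMIR rather than
		 * raising the interrupt's priority.
		 */
		return request_nmi(irq, example_nmi_handler, 0,
				   "example-nmi", dev);
	}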

Signed-off-by: Lorenzo Pieralisi <lpieralisi@kernel.org>
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 drivers/irqchip/irq-gic-v3.c       | 143 ++++++++++++++++++++++++-----
 include/linux/irqchip/arm-gic-v3.h |   4 +
 2 files changed, 125 insertions(+), 22 deletions(-)

diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
index 34d58567b78d..dc45e1093e7b 100644
--- a/drivers/irqchip/irq-gic-v3.c
+++ b/drivers/irqchip/irq-gic-v3.c
@@ -54,6 +54,7 @@ struct gic_chip_data {
 	u32			nr_redist_regions;
 	u64			flags;
 	bool			has_rss;
+	bool			has_nmi;
 	unsigned int		ppi_nr;
 	struct partition_desc	**ppi_descs;
 };
@@ -145,6 +146,20 @@ enum gic_intid_range {
 	__INVALID_RANGE__
 };
 
+#ifdef CONFIG_ARM64
+#include <asm/cpufeature.h>
+
+static inline bool has_v3_3_nmi(void)
+{
+	return gic_data.has_nmi && system_uses_nmi();
+}
+#else
+static inline bool has_v3_3_nmi(void)
+{
+	return false;
+}
+#endif
+
 static enum gic_intid_range __get_intid_range(irq_hw_number_t hwirq)
 {
 	switch (hwirq) {
@@ -350,6 +365,42 @@ static int gic_peek_irq(struct irq_data *d, u32 offset)
 	return !!(readl_relaxed(base + offset + (index / 32) * 4) & mask);
 }
 
+static DEFINE_RAW_SPINLOCK(irq_controller_lock);
+
+static void gic_irq_configure_nmi(struct irq_data *d, bool enable)
+{
+	void __iomem *base, *addr;
+	u32 offset, index, mask, val;
+
+	offset = convert_offset_index(d, GICD_INMIR, &index);
+	mask = 1 << (index % 32);
+
+	if (gic_irq_in_rdist(d))
+		base = gic_data_rdist_sgi_base();
+	else
+		base = gic_data.dist_base;
+
+	addr = base + offset + (index / 32) * 4;
+
+	raw_spin_lock(&irq_controller_lock);
+
+	val = readl_relaxed(addr);
+	val = enable ? (val | mask) : (val & ~mask);
+	writel_relaxed(val, addr);
+
+	raw_spin_unlock(&irq_controller_lock);
+}
+
+static void gic_irq_enable_nmi(struct irq_data *d)
+{
+	gic_irq_configure_nmi(d, true);
+}
+
+static void gic_irq_disable_nmi(struct irq_data *d)
+{
+	gic_irq_configure_nmi(d, false);
+}
+
 static void gic_poke_irq(struct irq_data *d, u32 offset)
 {
 	void __iomem *base;
@@ -395,7 +446,7 @@ static void gic_unmask_irq(struct irq_data *d)
 	gic_poke_irq(d, GICD_ISENABLER);
 }
 
-static inline bool gic_supports_nmi(void)
+static inline bool gic_supports_pseudo_nmis(void)
 {
 	return IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) &&
 	       static_branch_likely(&supports_pseudo_nmis);
@@ -491,7 +542,7 @@ static int gic_irq_nmi_setup(struct irq_data *d)
 {
 	struct irq_desc *desc = irq_to_desc(d->irq);
 
-	if (!gic_supports_nmi())
+	if (!gic_supports_pseudo_nmis() && !has_v3_3_nmi())
 		return -EINVAL;
 
 	if (gic_peek_irq(d, GICD_ISENABLER)) {
@@ -519,7 +570,10 @@ static int gic_irq_nmi_setup(struct irq_data *d)
 		desc->handle_irq = handle_fasteoi_nmi;
 	}
 
-	gic_irq_set_prio(d, GICD_INT_NMI_PRI);
+	if (has_v3_3_nmi())
+		gic_irq_enable_nmi(d);
+	else
+		gic_irq_set_prio(d, GICD_INT_NMI_PRI);
 
 	return 0;
 }
@@ -528,7 +582,7 @@ static void gic_irq_nmi_teardown(struct irq_data *d)
 {
 	struct irq_desc *desc = irq_to_desc(d->irq);
 
-	if (WARN_ON(!gic_supports_nmi()))
+	if (WARN_ON(!gic_supports_pseudo_nmis() && !has_v3_3_nmi()))
 		return;
 
 	if (gic_peek_irq(d, GICD_ISENABLER)) {
@@ -554,7 +608,10 @@ static void gic_irq_nmi_teardown(struct irq_data *d)
 		desc->handle_irq = handle_fasteoi_irq;
 	}
 
-	gic_irq_set_prio(d, GICD_INT_DEF_PRI);
+	if (has_v3_3_nmi())
+		gic_irq_disable_nmi(d);
+	else
+		gic_irq_set_prio(d, GICD_INT_DEF_PRI);
 }
 
 static void gic_eoi_irq(struct irq_data *d)
@@ -674,7 +731,7 @@ static inline void gic_complete_ack(u32 irqnr)
 
 static bool gic_rpr_is_nmi_prio(void)
 {
-	if (!gic_supports_nmi())
+	if (!gic_supports_pseudo_nmis())
 		return false;
 
 	return unlikely(gic_read_rpr() == GICD_INT_RPR_PRI(GICD_INT_NMI_PRI));
@@ -706,7 +763,8 @@ static void __gic_handle_nmi(u32 irqnr, struct pt_regs *regs)
 	gic_complete_ack(irqnr);
 
 	if (generic_handle_domain_nmi(gic_data.domain, irqnr)) {
-		WARN_ONCE(true, "Unexpected pseudo-NMI (irqnr %u)\n", irqnr);
+		WARN_ONCE(true, "Unexpected %sNMI (irqnr %u)\n",
+			  gic_supports_pseudo_nmis() ? "pseudo-" : "", irqnr);
 		gic_deactivate_unhandled(irqnr);
 	}
 }
@@ -782,9 +840,37 @@ static void __gic_handle_irq_from_irqsoff(struct pt_regs *regs)
 	__gic_handle_nmi(irqnr, regs);
 }
 
+#ifdef CONFIG_ARM64
+static inline u64 gic_read_nmiar(void)
+{
+	u64 irqstat;
+
+	irqstat = read_sysreg_s(SYS_ICC_NMIAR1_EL1);
+
+	dsb(sy);
+
+	return irqstat;
+}
+
+static asmlinkage void __exception_irq_entry gic_handle_nmi_irq(struct pt_regs *regs)
+{
+	u32 irqnr = gic_read_nmiar();
+
+	__gic_handle_nmi(irqnr, regs);
+}
+
+static inline void gic_setup_nmi_handler(void)
+{
+	if (has_v3_3_nmi())
+		set_handle_nmi_irq(gic_handle_nmi_irq);
+}
+#else
+static inline void gic_setup_nmi_handler(void) { }
+#endif
+
 static asmlinkage void __exception_irq_entry gic_handle_irq(struct pt_regs *regs)
 {
-	if (unlikely(gic_supports_nmi() && !interrupts_enabled(regs)))
+	if (unlikely(gic_supports_pseudo_nmis() && !interrupts_enabled(regs)))
 		__gic_handle_irq_from_irqsoff(regs);
 	else
 		__gic_handle_irq_from_irqson(regs);
@@ -1072,7 +1158,7 @@ static void gic_cpu_sys_reg_init(void)
 	/* Set priority mask register */
 	if (!gic_prio_masking_enabled()) {
 		write_gicreg(DEFAULT_PMR_VALUE, ICC_PMR_EL1);
-	} else if (gic_supports_nmi()) {
+	} else if (gic_supports_pseudo_nmis()) {
 		/*
 		 * Mismatch configuration with boot CPU, the system is likely
 		 * to die as interrupt masking will not work properly on all
@@ -1753,20 +1839,8 @@ static const struct gic_quirk gic_quirks[] = {
 	}
 };
 
-static void gic_enable_nmi_support(void)
+static void gic_enable_pseudo_nmis(void)
 {
-	int i;
-
-	if (!gic_prio_masking_enabled())
-		return;
-
-	ppi_nmi_refs = kcalloc(gic_data.ppi_nr, sizeof(*ppi_nmi_refs), GFP_KERNEL);
-	if (!ppi_nmi_refs)
-		return;
-
-	for (i = 0; i < gic_data.ppi_nr; i++)
-		refcount_set(&ppi_nmi_refs[i], 0);
-
 	/*
 	 * Linux itself doesn't use 1:N distribution, so has no need to
 	 * set PMHE. The only reason to have it set is if EL3 requires it
@@ -1809,6 +1883,28 @@ static void gic_enable_nmi_support(void)
 		static_branch_enable(&gic_nonsecure_priorities);
 
 	static_branch_enable(&supports_pseudo_nmis);
+}
+
+static void gic_enable_nmi_support(void)
+{
+	int i;
+
+	if (!gic_prio_masking_enabled() && !has_v3_3_nmi())
+		return;
+
+	ppi_nmi_refs = kcalloc(gic_data.ppi_nr, sizeof(*ppi_nmi_refs), GFP_KERNEL);
+	if (!ppi_nmi_refs)
+		return;
+
+	for (i = 0; i < gic_data.ppi_nr; i++)
+		refcount_set(&ppi_nmi_refs[i], 0);
+
+	/*
+	 * Initialize pseudo-NMIs only if the GIC driver cannot use the
+	 * core (FEAT_NMI) and GIC (FEAT_GICv3_NMI) NMI support in HW
+	 */
+	if (!has_v3_3_nmi())
+		gic_enable_pseudo_nmis();
 
 	if (static_branch_likely(&supports_deactivate_key))
 		gic_eoimode1_chip.flags |= IRQCHIP_SUPPORTS_NMI;
@@ -1872,6 +1968,7 @@ static int __init gic_init_bases(void __iomem *dist_base,
 	irq_domain_update_bus_token(gic_data.domain, DOMAIN_BUS_WIRED);
 
 	gic_data.has_rss = !!(typer & GICD_TYPER_RSS);
+	gic_data.has_nmi = !!(typer & GICD_TYPER_NMI);
 
 	if (typer & GICD_TYPER_MBIS) {
 		err = mbi_init(handle, gic_data.domain);
@@ -1881,6 +1978,8 @@ static int __init gic_init_bases(void __iomem *dist_base,
 
 	set_handle_irq(gic_handle_irq);
 
+	gic_setup_nmi_handler();
+
 	gic_update_rdist_properties();
 
 	gic_dist_init();
diff --git a/include/linux/irqchip/arm-gic-v3.h b/include/linux/irqchip/arm-gic-v3.h
index 728691365464..3306456c135f 100644
--- a/include/linux/irqchip/arm-gic-v3.h
+++ b/include/linux/irqchip/arm-gic-v3.h
@@ -30,6 +30,7 @@
 #define GICD_ICFGR			0x0C00
 #define GICD_IGRPMODR			0x0D00
 #define GICD_NSACR			0x0E00
+#define GICD_INMIR			0x0F80
 #define GICD_IGROUPRnE			0x1000
 #define GICD_ISENABLERnE		0x1200
 #define GICD_ICENABLERnE		0x1400
@@ -39,6 +40,7 @@
 #define GICD_ICACTIVERnE		0x1C00
 #define GICD_IPRIORITYRnE		0x2000
 #define GICD_ICFGRnE			0x3000
+#define GICD_INMIRnE			0x3B00
 #define GICD_IROUTER			0x6000
 #define GICD_IROUTERnE			0x8000
 #define GICD_IDREGS			0xFFD0
@@ -83,6 +85,7 @@
 #define GICD_TYPER_LPIS			(1U << 17)
 #define GICD_TYPER_MBIS			(1U << 16)
 #define GICD_TYPER_ESPI			(1U << 8)
+#define GICD_TYPER_NMI			(1U << 9)
 
 #define GICD_TYPER_ID_BITS(typer)	((((typer) >> 19) & 0x1f) + 1)
 #define GICD_TYPER_NUM_LPIS(typer)	((((typer) >> 11) & 0x1f) + 1)
@@ -238,6 +241,7 @@
 #define GICR_ICFGR0			GICD_ICFGR
 #define GICR_IGRPMODR0			GICD_IGRPMODR
 #define GICR_NSACR			GICD_NSACR
+#define GICR_INMIR0			GICD_INMIR
 
 #define GICR_TYPER_PLPIS		(1U << 0)
 #define GICR_TYPER_VLPIS		(1U << 1)
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 00/14] arm64/nmi: Support for FEAT_NMI
  2022-11-12 15:16 ` Mark Brown
@ 2022-12-02 18:42   ` Marc Zyngier
  -1 siblings, 0 replies; 96+ messages in thread
From: Marc Zyngier @ 2022-12-02 18:42 UTC (permalink / raw)
  To: Mark Brown
  Cc: Catalin Marinas, Will Deacon, Lorenzo Pieralisi, Mark Rutland,
	Sami Mujawar, Thomas Gleixner, linux-arm-kernel, kvmarm

On Sat, 12 Nov 2022 15:16:54 +0000,
Mark Brown <broonie@kernel.org> wrote:
> 
> This series enables the architecture and GIC support for the arm64
> FEAT_NMI and FEAT_GICv3_NMI extensions in host kernels. These introduce
> support for a new category of interrupts in the architecture code which
> we can use to provide NMI like functionality, though the interrupts are
> in fact maskable as the name would not imply. The GIC support was done
> by Loreozo Pieralisi.

FWIW, I have stashed a rework of my 2 year old vgic NMI series at
[1]. It hasn't broken non-NMI behaviour in my early testing, but my
FVP is out of commission for the time being.

What could possibly go wrong?

	M.

[1] https://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git/log/?h=arm64/nmi

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 00/14] arm64/nmi: Support for FEAT_NMI
  2022-12-02 18:42   ` Marc Zyngier
@ 2022-12-03  8:25     ` Lorenzo Pieralisi
  -1 siblings, 0 replies; 96+ messages in thread
From: Lorenzo Pieralisi @ 2022-12-03  8:25 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Mark Brown, Catalin Marinas, Will Deacon, Mark Rutland,
	Sami Mujawar, Thomas Gleixner, linux-arm-kernel, kvmarm

On Fri, Dec 02, 2022 at 06:42:20PM +0000, Marc Zyngier wrote:
> On Sat, 12 Nov 2022 15:16:54 +0000,
> Mark Brown <broonie@kernel.org> wrote:
> > 
> > This series enables the architecture and GIC support for the arm64
> > FEAT_NMI and FEAT_GICv3_NMI extensions in host kernels. These introduce
> > support for a new category of interrupts in the architecture code which
> > we can use to provide NMI like functionality, though the interrupts are
> > in fact maskable as the name would not imply. The GIC support was done
> > by Loreozo Pieralisi.
> 
> FWIW, I have stashed a rework of my 2 year old vgic NMI series at
> [1]. It hasn't broken non-NMI behaviour in my early testing, but my
> FVP is out of commission for the time being.

Thanks Marc, I will test the changes and update the ones I made
already accordingly. I should be able to do that next week; I have
been sick but things are on the mend.

Lorenzo

> 
> What could possibly go wrong?
> 
> 	M.
> 
> [1] https://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git/log/?h=arm64/nmi
> 
> -- 
> Without deviation from the norm, progress is not possible.
> 

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 00/14] arm64/nmi: Support for FEAT_NMI
  2022-12-03  8:25     ` Lorenzo Pieralisi
@ 2022-12-03  9:45       ` Marc Zyngier
  -1 siblings, 0 replies; 96+ messages in thread
From: Marc Zyngier @ 2022-12-03  9:45 UTC (permalink / raw)
  To: Lorenzo Pieralisi
  Cc: Mark Brown, Catalin Marinas, Will Deacon, Mark Rutland,
	Sami Mujawar, Thomas Gleixner, linux-arm-kernel, kvmarm

On Sat, 03 Dec 2022 08:25:10 +0000,
Lorenzo Pieralisi <lpieralisi@kernel.org> wrote:

Hey Lorenzo,

> 
> On Fri, Dec 02, 2022 at 06:42:20PM +0000, Marc Zyngier wrote:
> > On Sat, 12 Nov 2022 15:16:54 +0000,
> > Mark Brown <broonie@kernel.org> wrote:
> > > 
> > > This series enables the architecture and GIC support for the arm64
> > > FEAT_NMI and FEAT_GICv3_NMI extensions in host kernels. These introduce
> > > support for a new category of interrupts in the architecture code which
> > > we can use to provide NMI like functionality, though the interrupts are
> > > in fact maskable as the name would not imply. The GIC support was done
> > > by Loreozo Pieralisi.
> > 
> > FWIW, I have stashed a rework of my 2 year old vgic NMI series at
> > [1]. It hasn't broken non-NMI behaviour in my early testing, but my
> > FVP is out of commission for the time being.
> 
> Thanks Marc, I will test the changes and update the ones I made
> already accordingly, I should be able to do that next week I have
> been sick but things are on the mend.

I hope you get better soon!

Let's come up with the best of both series (I'm pretty sure mine isn't
perfect...).

I also need to reply to Mark's patches, as there are a number of things
I ended up changing in my patches that should be part of the base NMI
series (bogus trapping and the restrictive cap being the two main ones).

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 04/14] arm64/sysreg: Add definitions for immediate versions of MSR ALLINT
  2022-11-12 15:16   ` Mark Brown
@ 2022-12-05 16:38     ` Marc Zyngier
  -1 siblings, 0 replies; 96+ messages in thread
From: Marc Zyngier @ 2022-12-05 16:38 UTC (permalink / raw)
  To: Mark Brown
  Cc: Catalin Marinas, Will Deacon, Lorenzo Pieralisi, Mark Rutland,
	Sami Mujawar, Thomas Gleixner, linux-arm-kernel, kvmarm

On Sat, 12 Nov 2022 15:16:58 +0000,
Mark Brown <broonie@kernel.org> wrote:
> 
> Encodings are provided for ALLINT which allow setting of ALLINT.ALLINT
> using an immediate rather than requiring that a register be loaded with
> the value to write. Since these don't currently fit within the scheme we
> have for sysreg generation add manual encodings like we currently do for
> other similar registers such as SVCR.
> 
> Since it is required that these immediate versions be encoded with xzr
> as the source register provide asm wrapper which ensure this is the
> case.
> 
> Signed-off-by: Mark Brown <broonie@kernel.org>
> ---
>  arch/arm64/include/asm/daifflags.h |  1 +
>  arch/arm64/include/asm/nmi.h       | 18 ++++++++++++++++++
>  arch/arm64/include/asm/sysreg.h    |  2 ++
>  3 files changed, 21 insertions(+)
>  create mode 100644 arch/arm64/include/asm/nmi.h
> 
> diff --git a/arch/arm64/include/asm/daifflags.h b/arch/arm64/include/asm/daifflags.h
> index 55f57dfa8e2f..b3bed2004342 100644
> --- a/arch/arm64/include/asm/daifflags.h
> +++ b/arch/arm64/include/asm/daifflags.h
> @@ -141,4 +141,5 @@ static inline void local_daif_inherit(struct pt_regs *regs)
>  	 */
>  	write_sysreg(flags, daif);
>  }
> +
>  #endif

Spurious change?

> diff --git a/arch/arm64/include/asm/nmi.h b/arch/arm64/include/asm/nmi.h
> new file mode 100644
> index 000000000000..067e2554e144
> --- /dev/null
> +++ b/arch/arm64/include/asm/nmi.h
> @@ -0,0 +1,18 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/*
> + * Copyright (C) 2022 ARM Ltd.
> + */
> +#ifndef __ASM_NMI_H
> +#define __ASM_NMI_H
> +
> +static __always_inline void _allint_clear(void)
> +{
> +	asm volatile(__msr_s(SYS_ALLINT_CLR, "xzr"));
> +}
> +
> +static __always_inline void _allint_set(void)
> +{
> +	asm volatile(__msr_s(SYS_ALLINT_SET, "xzr"));
> +}
> +
> +#endif

If this *really* must be a separate include file, it should at least
directly include its dependencies. My gut feeling is that it would be
better placed in daifflags.h.

> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index 7d301700d1a9..0c07b740c750 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -126,6 +126,8 @@
>   * System registers, organised loosely by encoding but grouped together
>   * where the architected name contains an index. e.g. ID_MMFR<n>_EL1.
>   */
> +#define SYS_ALLINT_CLR			sys_reg(0, 1, 4, 0, 0)
> +#define SYS_ALLINT_SET			sys_reg(0, 1, 4, 1, 0)

This only covers the immediate versions of ALLINT, and misses the
definition for the register version, aka sys_reg(3, 0, 4, 3, 0).
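
Together with the register form, the block would then read (the first
define being purely illustrative, using the encoding above):

	#define SYS_ALLINT			sys_reg(3, 0, 4, 3, 0)
	#define SYS_ALLINT_CLR			sys_reg(0, 1, 4, 0, 0)
	#define SYS_ALLINT_SET			sys_reg(0, 1, 4, 1, 0)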

	M.

-- 
Without deviation from the norm, progress is not possible.

* Re: [PATCH v2 03/14] arm64/sysreg: Add definition of ISR_EL1
  2022-11-12 15:16   ` Mark Brown
@ 2022-12-05 16:45     ` Marc Zyngier
  -1 siblings, 0 replies; 96+ messages in thread
From: Marc Zyngier @ 2022-12-05 16:45 UTC (permalink / raw)
  To: Mark Brown
  Cc: Catalin Marinas, Will Deacon, Lorenzo Pieralisi, Mark Rutland,
	Sami Mujawar, Thomas Gleixner, linux-arm-kernel, kvmarm

On Sat, 12 Nov 2022 15:16:57 +0000,
Mark Brown <broonie@kernel.org> wrote:
> 
> Add a definition of ISR_EL1 as per DDI0487I.a. This register was not
> previously defined in sysreg.h; no functional changes.
> 
> Signed-off-by: Mark Brown <broonie@kernel.org>
> ---
>  arch/arm64/tools/sysreg | 10 ++++++++++
>  1 file changed, 10 insertions(+)
> 
> diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
> index 5d0d2498c635..3660e680b7f5 100644
> --- a/arch/arm64/tools/sysreg
> +++ b/arch/arm64/tools/sysreg
> @@ -1079,6 +1079,16 @@ Res0	15:8
>  Field	7:0	LR
>  EndSysreg
>  
> +Sysreg	ISR_EL1	3	0	12	1	0
> +Res0	63:11
> +Field	10	IS
> +Field	9	FS
> +Field	8	A

You might as well make use of this field in arch/arm64/kvm/hyp/entry.S

> +Field	7	I
> +Field	6	F
> +Res0	5:0
> +EndSysreg
> +
>  Sysreg	ICC_NMIAR1_EL1	3	0	12	9	5
>  Res0	63:24
>  Field	23:0	INTID

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

* Re: [PATCH v2 04/14] arm64/sysreg: Add definitions for immediate versions of MSR ALLINT
  2022-12-05 16:38     ` Marc Zyngier
@ 2022-12-05 17:11       ` Mark Brown
  -1 siblings, 0 replies; 96+ messages in thread
From: Mark Brown @ 2022-12-05 17:11 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Catalin Marinas, Will Deacon, Lorenzo Pieralisi, Mark Rutland,
	Sami Mujawar, Thomas Gleixner, linux-arm-kernel, kvmarm


On Mon, Dec 05, 2022 at 04:38:53PM +0000, Marc Zyngier wrote:
> Mark Brown <broonie@kernel.org> wrote:

> >  }
> > +
> >  #endif

> Spurious change?

Yes.

> > +++ b/arch/arm64/include/asm/nmi.h
> > @@ -0,0 +1,18 @@
> > +/* SPDX-License-Identifier: GPL-2.0-only */

> > +static __always_inline void _allint_set(void)
> > +{
> > +	asm volatile(__msr_s(SYS_ALLINT_SET, "xzr"));
> > +}

> If this *really* must be a separate include file, it should at least
> directly include its dependencies. My gut feeling is that it would be
> better placed in daifflags.h.

Yeah, I was swithering on that.  Some versions of the code have had more
in here, at which point having the separate header made more sense.  I
think part of the problem here is that we should do some combination of
renaming daifflags.h or layering a more abstracted API on top of it;
putting things that are not DAIF into daifflags.h doesn't feel great.

> > @@ -126,6 +126,8 @@
> >   * System registers, organised loosely by encoding but grouped together
> >   * where the architected name contains an index. e.g. ID_MMFR<n>_EL1.
> >   */
> > +#define SYS_ALLINT_CLR			sys_reg(0, 1, 4, 0, 0)
> > +#define SYS_ALLINT_SET			sys_reg(0, 1, 4, 1, 0)

> This only covers the immediate versions of ALLINT, and misses the
> definition for the register version, aka sys_reg(3, 0, 4, 3, 0).

That is already present upstream; we only need to add the immediate
versions, which the generated header stuff doesn't have any model for
yet.

* Re: [PATCH v2 05/14] arm64/asm: Introduce assembly macros for managing ALLINT
  2022-11-12 15:16   ` Mark Brown
@ 2022-12-05 17:29     ` Marc Zyngier
  -1 siblings, 0 replies; 96+ messages in thread
From: Marc Zyngier @ 2022-12-05 17:29 UTC (permalink / raw)
  To: Mark Brown
  Cc: Catalin Marinas, Will Deacon, Lorenzo Pieralisi, Mark Rutland,
	Sami Mujawar, Thomas Gleixner, linux-arm-kernel, kvmarm

On Sat, 12 Nov 2022 15:16:59 +0000,
Mark Brown <broonie@kernel.org> wrote:
> 
> In order to allow assembly code to ensure that not even superpriority
> interrupts can preempt it provide macros for enabling and disabling
                        ^
                        \ Insert comma here

> ALLINT.ALLINT.  This is not integrated into the existing DAIF macros
> since we do not always wish to manage ALLINT along with DAIF and the
                                                              ^
                                            Insert comma here /

> use of DAIF in the naming of the existing macros might lead to surprises
> if ALLINT is also managed.

	M.

-- 
Without deviation from the norm, progress is not possible.

* Re: [PATCH v2 06/14] arm64/hyp-stub: Enable access to ALLINT
  2022-11-12 15:17   ` Mark Brown
@ 2022-12-05 17:50     ` Marc Zyngier
  -1 siblings, 0 replies; 96+ messages in thread
From: Marc Zyngier @ 2022-12-05 17:50 UTC (permalink / raw)
  To: Mark Brown
  Cc: Catalin Marinas, Will Deacon, Lorenzo Pieralisi, Mark Rutland,
	Sami Mujawar, Thomas Gleixner, linux-arm-kernel, kvmarm

On Sat, 12 Nov 2022 15:17:00 +0000,
Mark Brown <broonie@kernel.org> wrote:
> 
> In order to use NMIs we need to ensure that traps are disabled for it, so
> update HCRX_EL2 to ensure that TALLINT is not set when we detect support
> for NMIs.
> 
> Signed-off-by: Mark Brown <broonie@kernel.org>
> ---
>  arch/arm64/kernel/hyp-stub.S | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
> 
> diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S
> index 2ee18c860f2a..4e0b06467973 100644
> --- a/arch/arm64/kernel/hyp-stub.S
> +++ b/arch/arm64/kernel/hyp-stub.S
> @@ -151,6 +151,18 @@ SYM_CODE_START_LOCAL(__finalise_el2)
>  
>  .Lskip_sme:
>  
> +	// NMIs
> +	__check_override id_aa64pfr1 ID_AA64PFR1_EL1_NMI_SHIFT 4 .Linit_nmi .Lskip_nmi

Err... what makes sure that x1 contains id_aa64pfr1_el1, as per the
big fat warning at the beginning of the file?? If you have SME, x1
will contain some other junk... Really, this must be written as:

	check_override id_aa64pfr1 ID_AA64PFR1_EL1_NMI_SHIFT .Linit_nmi .Lskip_nmi

> +.Linit_nmi:
> +	mrs	x1, id_aa64mmfr1_el1		// HCRX_EL2 present?
> +	ubfx	x1, x1, #ID_AA64MMFR1_EL1_HCX_SHIFT, #4
> +	cbz	x1, .Lskip_nmi
> +
> +	mrs_s	x1, SYS_HCRX_EL2
> +	and	x1, x1, #~HCRX_EL2_TALLINT_MASK	// Don't trap ALLINT

A nicer way of writing this is:

	bic	x1, x1, #HCRX_EL2_TALLINT_MASK

	M.

-- 
Without deviation from the norm, progress is not possible.

* Re: [PATCH v2 08/14] arm64/cpufeature: Detect PE support for FEAT_NMI
  2022-11-12 15:17   ` Mark Brown
@ 2022-12-05 18:03     ` Marc Zyngier
  -1 siblings, 0 replies; 96+ messages in thread
From: Marc Zyngier @ 2022-12-05 18:03 UTC (permalink / raw)
  To: Mark Brown
  Cc: Catalin Marinas, Will Deacon, Lorenzo Pieralisi, Mark Rutland,
	Sami Mujawar, Thomas Gleixner, linux-arm-kernel, kvmarm

On Sat, 12 Nov 2022 15:17:02 +0000,
Mark Brown <broonie@kernel.org> wrote:
> 
> Use of FEAT_NMI requires that all the PEs in the system and the GIC have NMI
> support. This patch implements the PE part of that detection.
> 
> In order to avoid problematic interactions between real and pseudo NMIs
> we disable the architected feature if the user has enabled pseudo NMIs
> on the command line. If this is done on a system where support for the
> architected feature is detected then a warning is printed during boot in
> order to help users spot what is likely to be a misconfiguration.
> 
> Signed-off-by: Mark Brown <broonie@kernel.org>
> ---
>  arch/arm64/include/asm/cpufeature.h |  6 ++++
>  arch/arm64/kernel/cpufeature.c      | 55 ++++++++++++++++++++++++++++-
>  arch/arm64/tools/cpucaps            |  1 +
>  3 files changed, 61 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index f73f11b55042..85eeb331a0ef 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -809,6 +809,12 @@ static __always_inline bool system_uses_irq_prio_masking(void)
>  	       cpus_have_const_cap(ARM64_HAS_IRQ_PRIO_MASKING);
>  }
>  
> +static __always_inline bool system_uses_nmi(void)
> +{
> +	return IS_ENABLED(CONFIG_ARM64_NMI) &&
> +		cpus_have_const_cap(ARM64_HAS_NMI);
> +}
> +
>  static inline bool system_supports_mte(void)
>  {
>  	return IS_ENABLED(CONFIG_ARM64_MTE) &&
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 6062454a9067..18ab50b76f50 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -84,6 +84,7 @@
>  #include <asm/kvm_host.h>
>  #include <asm/mmu_context.h>
>  #include <asm/mte.h>
> +#include <asm/nmi.h>
>  #include <asm/processor.h>
>  #include <asm/smp.h>
>  #include <asm/sysreg.h>
> @@ -243,6 +244,7 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
>  };
>  
>  static const struct arm64_ftr_bits ftr_id_aa64pfr1[] = {
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_NMI_SHIFT, 4, 0),
>  	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SME),
>  		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_SME_SHIFT, 4, 0),
>  	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_MPAM_frac_SHIFT, 4, 0),
> @@ -2008,9 +2010,11 @@ static void cpu_enable_e0pd(struct arm64_cpu_capabilities const *cap)
>  }
>  #endif /* CONFIG_ARM64_E0PD */
>  
> -#ifdef CONFIG_ARM64_PSEUDO_NMI
> +#if IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) || IS_ENABLED(CONFIG_ARM64_NMI)
>  static bool enable_pseudo_nmi;
> +#endif
>  
> +#ifdef CONFIG_ARM64_PSEUDO_NMI
>  static int __init early_enable_pseudo_nmi(char *p)
>  {
>  	return strtobool(p, &enable_pseudo_nmi);
> @@ -2024,6 +2028,41 @@ static bool can_use_gic_priorities(const struct arm64_cpu_capabilities *entry,
>  }
>  #endif
>  
> +#ifdef CONFIG_ARM64_NMI
> +static bool has_nmi(const struct arm64_cpu_capabilities *entry, int scope)
> +{
> +	if (!has_cpuid_feature(entry, scope))
> +		return false;
> +
> +	/*
> +	 * Having both real and pseudo NMIs enabled simultaneously is
> +	 * likely to cause confusion.  Since pseudo NMIs must be
> +	 * enabled with an explicit command line option, if the user
> +	 * has set that option on a system with real NMIs for some
> +	 * reason assume they know what they're doing.
> +	 */
> +	if (IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) && enable_pseudo_nmi) {
> +		pr_info("Pseudo NMI enabled, not using architected NMI\n");
> +		return false;
> +	}
> +
> +	return true;
> +}
> +
> +static void nmi_enable(const struct arm64_cpu_capabilities *__unused)
> +{
> +	/*
> +	 * Enable use of NMIs controlled by ALLINT, SPINTMASK should
> +	 * be clear by default but make it explicit that we are using
> +	 * this mode.  Ensure that ALLINT is clear first in order to
> +	 * avoid leaving things masked.
> +	 */
> +	_allint_clear();
> +	sysreg_clear_set(sctlr_el1, SCTLR_EL1_SPINTMASK, SCTLR_EL1_NMI);
> +	isb();
> +}
> +#endif
> +
>  #ifdef CONFIG_ARM64_BTI
>  static void bti_enable(const struct arm64_cpu_capabilities *__unused)
>  {
> @@ -2640,6 +2679,20 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
>  		.matches = has_cpuid_feature,
>  		.cpu_enable = cpu_trap_el0_impdef,
>  	},
> +#ifdef CONFIG_ARM64_NMI
> +	{
> +		.desc = "Non-maskable Interrupts",
> +		.capability = ARM64_HAS_NMI,
> +		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,

PSEUDO_NMI uses ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE. What is the
rationale for using a different policy here?

> +		.sys_reg = SYS_ID_AA64PFR1_EL1,
> +		.sign = FTR_UNSIGNED,
> +		.field_pos = ID_AA64PFR1_EL1_NMI_SHIFT,
> +		.field_width = 4,
> +		.min_field_value = ID_AA64PFR1_EL1_NMI_IMP,
> +		.matches = has_nmi,
> +		.cpu_enable = nmi_enable,
> +	},
> +#endif
>  	{},
>  };

The whole thing is way too restrictive: KVM definitely needs to know
that the feature exists, even if there is no use for it in the host
kernel. There is no reason why guests shouldn't be able to use this
even if the host doesn't care about it.

Which means you need two properties: one that advertises the
availability of the feature, and one that makes use of it in the
kernel.
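
Something along these lines, perhaps (untested, and the USES_NMI
capability name is only illustrative):

	static __always_inline bool system_has_nmi(void)
	{
		return IS_ENABLED(CONFIG_ARM64_NMI) &&
		       cpus_have_const_cap(ARM64_HAS_NMI);
	}

	static __always_inline bool system_uses_nmi(void)
	{
		return system_has_nmi() &&
		       cpus_have_const_cap(ARM64_USES_NMI);
	}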

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

* Re: [PATCH v2 09/14] KVM: arm64: Hide FEAT_NMI from guests
  2022-11-12 15:17   ` Mark Brown
@ 2022-12-05 18:06     ` Marc Zyngier
  -1 siblings, 0 replies; 96+ messages in thread
From: Marc Zyngier @ 2022-12-05 18:06 UTC (permalink / raw)
  To: Mark Brown
  Cc: Catalin Marinas, Will Deacon, Lorenzo Pieralisi, Mark Rutland,
	Sami Mujawar, Thomas Gleixner, linux-arm-kernel, kvmarm

On Sat, 12 Nov 2022 15:17:03 +0000,
Mark Brown <broonie@kernel.org> wrote:
> 
> FEAT_NMI is not yet useful to guests pending implementation of vGIC
> support. Mask out the feature from the ID register and prevent guests
> creating state in ALLINT.ALLINT by activating the trap on write provided
> in HCRX_EL2.TALLINT when they are running. There is no trap available
> for reads from ALLINT.
> 
> We do not need to check for FEAT_HCRX since it is mandatory since v8.7
> and FEAT_NMI is a v8.8 feature.

And yet you check for it in hyp-stub.S after having checked for
FEAT_NMI. What gives?

> 
> Signed-off-by: Mark Brown <broonie@kernel.org>
> ---
>  arch/arm64/kvm/hyp/include/hyp/switch.h | 6 ++++++
>  arch/arm64/kvm/sys_regs.c               | 1 +
>  2 files changed, 7 insertions(+)
> 
> diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> index 6cbbb6c02f66..89e78c4e5cce 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> @@ -85,6 +85,9 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
>  		write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0);
>  	}
>  
> +	if (cpus_have_final_cap(ARM64_HAS_NMI))
> +		sysreg_clear_set_s(SYS_HCRX_EL2, 0, HCRX_EL2_TALLINT);
> +

Crucially, this is missing a handler for the trap, resulting in a
large splat once a guest accesses ALLINT.
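
Untested, but one way to avoid it would be a sys_regs entry rejecting
the access (assuming a SYS_ALLINT define for the register-form
encoding):

	/* arch/arm64/kvm/sys_regs.c */
	{ SYS_DESC(SYS_ALLINT), undef_access },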

	M.

-- 
Without deviation from the norm, progress is not possible.

* Re: [PATCH v2 05/14] arm64/asm: Introduce assembly macros for managing ALLINT
  2022-12-05 17:29     ` Marc Zyngier
@ 2022-12-05 18:24       ` Mark Brown
  -1 siblings, 0 replies; 96+ messages in thread
From: Mark Brown @ 2022-12-05 18:24 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Catalin Marinas, Will Deacon, Lorenzo Pieralisi, Mark Rutland,
	Sami Mujawar, Thomas Gleixner, linux-arm-kernel, kvmarm


On Mon, Dec 05, 2022 at 05:29:54PM +0000, Marc Zyngier wrote:
> On Sat, 12 Nov 2022 15:16:59 +0000,
> Mark Brown <broonie@kernel.org> wrote:

> > In order to allow assembly code to ensure that not even superpriority
> > interrupts can preempt it provide macros for enabling and disabling
>                         ^
>                         \ Insert comma here

That would give "...not even superpriority interrupts can preempt, it"
which doesn't make sense to me?

* Re: [PATCH v2 10/14] arm64/nmi: Manage masking for superpriority interrupts along with DAIF
  2022-11-12 15:17   ` Mark Brown
@ 2022-12-05 18:47     ` Marc Zyngier
  -1 siblings, 0 replies; 96+ messages in thread
From: Marc Zyngier @ 2022-12-05 18:47 UTC (permalink / raw)
  To: Mark Brown
  Cc: Catalin Marinas, Will Deacon, Lorenzo Pieralisi, Mark Rutland,
	Sami Mujawar, Thomas Gleixner, linux-arm-kernel, kvmarm

On Sat, 12 Nov 2022 15:17:04 +0000,
Mark Brown <broonie@kernel.org> wrote:
> 
> As we do for pseudo NMIs add code to our DAIF management which keeps
                          ,
> superpriority interrupts unmasked when we have asynchronous exceptions
> enabled. Since superpriority interrupts are not masked through DAIF like
> pseudo NMIs are we also need to modify the assembler macros for managing
                 ,

> DAIF to ensure that the masking is done in the assembly code. At present
> users of the assembly macros always mask pseudo NMIs.

In patch #5, you say:

"This is not integrated into the existing DAIF macros since we do not
always wish to manage ALLINT along with DAIF and the use of DAIF in
the naming of the existing macros might lead to surprises if ALLINT is
also managed."

It isn't integrated, and yet it is.

> 
> There is a difference to the actual handling between pseudo NMIs
> and superpriority interrupts in the assembly save_and_disable_irq and
> restore_irq macros, these cover both interrupts and FIQs using DAIF

s/,/;/

> without regard for the use of pseudo NMIs so also mask those but are not
                                                              ,

> updated here to mask superpriority interrupts. Given the names it is not
> clear that the behaviour with pseudo NMIs is particularly intentional,

Pseudo-NMIs are still compatible with the standard DAIF behaviour,
where setting PSTATE.I is strictly equivalent to setting PSTATE.ALLINT
when you have architected NMIs.

So I don't really understand your concern here.

> and in any case these macros are only used in the implementation of
> alternatives for software PAN while hardware PAN has been mandatory
> since v8.1 so it is not anticipated that practical systems with support
            ,

> for FEAT_NMI will ever execute the affected code.
> 
> This should be a conservative set of masked regions, we may be able to
> relax this in future, but this should represent a good starting point.
> 
> Signed-off-by: Mark Brown <broonie@kernel.org>
> ---
>  arch/arm64/include/asm/assembler.h | 11 +++++++++++
>  arch/arm64/include/asm/daifflags.h | 18 ++++++++++++++++++
>  2 files changed, 29 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
> index 88d9779a83c0..e85a7e9af9ae 100644
> --- a/arch/arm64/include/asm/assembler.h
> +++ b/arch/arm64/include/asm/assembler.h
> @@ -52,19 +52,30 @@ alternative_else_nop_endif
>  
>  	.macro save_and_disable_daif, flags
>  	mrs	\flags, daif
> +        disable_allint
>  	msr	daifset, #0xf
>  	.endm
>  
>  	.macro disable_daif
> +        disable_allint
>  	msr	daifset, #0xf
>  	.endm
>  
>  	.macro enable_daif
>  	msr	daifclr, #0xf
> +	enable_allint
>  	.endm
>  
>  	.macro	restore_daif, flags:req
>  	msr	daif, \flags
> +#ifdef CONFIG_ARM64_NMI
> +alternative_if ARM64_HAS_NMI
> +	/* If async exceptions are unmasked we can take NMIs */
> +	tbnz	\flags, #8, 2004f
> +	msr_s	SYS_ALLINT_CLR, xzr
> +2004:

Please use the usual blah\@ hack for these macros, as you have no
control over the context in which they will expand.

> +alternative_else_nop_endif
> +#endif
>  	.endm
>  
>  	/* IRQ/FIQ are the lowest priority flags, unconditionally unmask the rest. */
> diff --git a/arch/arm64/include/asm/daifflags.h b/arch/arm64/include/asm/daifflags.h
> index b3bed2004342..fda73976068f 100644
> --- a/arch/arm64/include/asm/daifflags.h
> +++ b/arch/arm64/include/asm/daifflags.h
> @@ -10,6 +10,7 @@
>  #include <asm/arch_gicv3.h>
>  #include <asm/barrier.h>
>  #include <asm/cpufeature.h>
> +#include <asm/nmi.h>
>  #include <asm/ptrace.h>
>  
>  #define DAIF_PROCCTX		0
> @@ -35,6 +36,9 @@ static inline void local_daif_mask(void)
>  	if (system_uses_irq_prio_masking())
>  		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
>  
> +	if (system_uses_nmi())
> +		_allint_set();
> +
>  	trace_hardirqs_off();
>  }
>  
> @@ -50,6 +54,12 @@ static inline unsigned long local_daif_save_flags(void)
>  			flags |= PSR_I_BIT | PSR_F_BIT;
>  	}
>  
> +	if (system_uses_nmi()) {
> > +	/* If IRQs are masked with ALLINT, reflect it in the flags */
> +		if (read_sysreg_s(SYS_ALLINT) & ALLINT_ALLINT)
> +			flags |= PSR_I_BIT | PSR_F_BIT;
> +	}
> +
>  	return flags;
>  }
>  
> @@ -114,6 +124,10 @@ static inline void local_daif_restore(unsigned long flags)
>  		gic_write_pmr(pmr);
>  	}
>  
> +	/* If we can take asynchronous errors we can take NMIs */
> +	if (system_uses_nmi() && !(flags & PSR_A_BIT))
> +		_allint_clear();
> +
>  	write_sysreg(flags, daif);

This sequence feels odd. With pseudo-NMI, we only allow the NMI to
fire *after* the write to DAIF. With architected NMI, a NMI can fire
*before*. I think that for the time being, we should follow a similar
ordering.
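
i.e. (untested):

	write_sysreg(flags, daif);

	/* Only now can an NMI fire, matching the pseudo-NMI behaviour */
	if (system_uses_nmi() && !(flags & PSR_A_BIT))
		_allint_clear();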

>  
>  	if (irq_disabled)
> @@ -131,6 +145,10 @@ static inline void local_daif_inherit(struct pt_regs *regs)
>  	if (interrupts_enabled(regs))
>  		trace_hardirqs_on();
>  
> +	/* If we can take asynchronous errors we can take NMIs */
> +	if (system_uses_nmi() && !(flags & PSR_A_BIT))
> +		_allint_clear();
> +

Same remark about the ordering. Also, we don't check for PSTATE.A in
the pseudo-NMI case. Why is this any different?

>  	if (system_uses_irq_prio_masking())
>  		gic_write_pmr(regs->pmr_save);
>  

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

* Re: [PATCH v2 09/14] KVM: arm64: Hide FEAT_NMI from guests
  2022-12-05 18:06     ` Marc Zyngier
@ 2022-12-05 19:03       ` Mark Brown
  -1 siblings, 0 replies; 96+ messages in thread
From: Mark Brown @ 2022-12-05 19:03 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Catalin Marinas, Will Deacon, Lorenzo Pieralisi, Mark Rutland,
	Sami Mujawar, Thomas Gleixner, linux-arm-kernel, kvmarm


On Mon, Dec 05, 2022 at 06:06:24PM +0000, Marc Zyngier wrote:
> Mark Brown <broonie@kernel.org> wrote:

> > FEAT_NMI is not yet useful to guests pending implementation of vGIC
> > support. Mask out the feature from the ID register and prevent guests
> > creating state in ALLINT.ALLINT by activating the trap on write provided
> > in HCRX_EL2.TALLINT when they are running. There is no trap available
> > for reads from ALLINT.

> > We do not need to check for FEAT_HCRX since it is mandatory since v8.7
> > and FEAT_NMI is a v8.8 feature.

> And yet you check for it in hyp-stub.S after having checked for
> FEAT_NMI. What gives?

Being aware that you have a strong preference for not having safety
checks for mandatory features, I didn't add any here but noted it so
people could see why they were omitted.  The checks in hyp-stub.S were
probably written before I'd checked the dependency situation out.

I can remove those checks if preferred, but TBH given that the failure
mode in hyp-stub.S is typically going to be to die with no output if
something goes wrong, it does feel like it's worth the extra couple of
instructions to double-check things just in case, especially with the
virtual platforms being so easy to misconfigure.

* Re: [PATCH v2 08/14] arm64/cpufeature: Detect PE support for FEAT_NMI
  2022-12-05 18:03     ` Marc Zyngier
@ 2022-12-05 19:32       ` Mark Brown
  -1 siblings, 0 replies; 96+ messages in thread
From: Mark Brown @ 2022-12-05 19:32 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Catalin Marinas, Will Deacon, Lorenzo Pieralisi, Mark Rutland,
	Sami Mujawar, Thomas Gleixner, linux-arm-kernel, kvmarm


On Mon, Dec 05, 2022 at 06:03:07PM +0000, Marc Zyngier wrote:
> Mark Brown <broonie@kernel.org> wrote:

> > +#ifdef CONFIG_ARM64_NMI
> > +	{
> > +		.desc = "Non-maskable Interrupts",
> > +		.capability = ARM64_HAS_NMI,
> > +		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,

> PSEUDO_NMI uses ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE. What is the
> rationale for using a different policy here?

I couldn't identify any issues that the kernel would have if the feature
was present in the hardware but unused, so I didn't see the need to be
additionally restrictive.  TBH I'm not 100% clear why the _STRICT is
there for pseudo NMIs; it seemed a bit out of scope for this series to
try to clean that up though.

> The whole thing is way too restrictive: KVM definitely needs to know
> that the feature exists, even if there is no use for it in the host
> kernel. There is no reason why guests shouldn't be able to use this
> even if the host doesn't care about it.

> Which means you need two properties: one that advertises the
> availability of the feature, and one that makes use of it in the
> kernel.

To be clear, I think what you're looking for here is a capability that
omits the cross-check with pseudo NMIs, rather than something that's
strictly checking the hardware (so ID register overrides will still
apply)?  I've done that locally; my tree currently has capabilities
HAS_NMI and USES_NMI.
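
Roughly this, with the pseudo NMI cross-check moved out of has_nmi()
(untested):

	static bool has_nmi(const struct arm64_cpu_capabilities *entry, int scope)
	{
		return has_cpuid_feature(entry, scope);
	}

	static bool uses_nmi(const struct arm64_cpu_capabilities *entry, int scope)
	{
		if (!has_nmi(entry, scope))
			return false;

		/* An explicit pseudo NMI request on the command line wins */
		if (IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) && enable_pseudo_nmi) {
			pr_info("Pseudo NMI enabled, not using architected NMI\n");
			return false;
		}

		return true;
	}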

* Re: [PATCH v2 10/14] arm64/nmi: Manage masking for superpriority interrupts along with DAIF
  2022-12-05 18:47     ` Marc Zyngier
@ 2022-12-05 20:52       ` Mark Brown
  -1 siblings, 0 replies; 96+ messages in thread
From: Mark Brown @ 2022-12-05 20:52 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Catalin Marinas, Will Deacon, Lorenzo Pieralisi, Mark Rutland,
	Sami Mujawar, Thomas Gleixner, linux-arm-kernel, kvmarm


On Mon, Dec 05, 2022 at 06:47:45PM +0000, Marc Zyngier wrote:
> On Sat, 12 Nov 2022 15:17:04 +0000,
> Mark Brown <broonie@kernel.org> wrote:

> > As we do for pseudo NMIs add code to our DAIF management which keeps
>                           ,
> > superpriority interrupts unmasked when we have asynchronous exceptions
> > enabled. Since superpriority interrupts are not masked through DAIF like
> > pseduo NMIs are we also need to modify the assembler macros for managing
>                  ,
> 
> > DAIF to ensure that the masking is done in the assembly code. At present
> > users of the assembly macros always mask pseudo NMIs.

> In patch #5, you say:

> "This is not integrated into the existing DAIF macros since we do not
> always wish to manage ALLINT along with DAIF and the use of DAIF in
> the naming of the existing macros might lead to surprises if ALLINT is
> also managed."

> It isn't integrated, and yet it is.

Ah, yes - the note on patch 5 is a bit bitrotted now.  I'll update that.


> > There is a difference to the actual handling between pseudo NMIs
> > and superpriority interrupts in the assembly save_and_disable_irq and
> > restore_irq macros, these cover both interrupts and FIQs using DAIF
> > without regard for the use of pseudo NMIs so also mask those but are not
> > updated here to mask superpriority interrupts. Given the names it is not
> > clear that the behaviour with pseudo NMIs is particularly intentional,

> Pseudo-NMIs are still compatible with the standard DAIF behaviour,
> where setting PSTATE.I is strictly equivalent to setting PSTATE.ALLINT
> when you have architected NMIs.

> So I don't really understand your concern here.

The existing code is fine; the concern was that, unlike in the C code,
there's no matching management of PMR here where we're adding management
of ALLINT, which might raise alarm bells for the reader.  I'll reword a
bit.

> > @@ -131,6 +145,10 @@ static inline void local_daif_inherit(struct pt_regs *regs)
> >  	if (interrupts_enabled(regs))
> >  		trace_hardirqs_on();
> >  
> > +	/* If we can take asynchronous errors we can take NMIs */
> > +	if (system_uses_nmi() && !(flags & PSR_A_BIT))
> > +		_allint_clear();
> > +

> Same remark about the ordering. Also, we don't check for PSTATE.A in
> the pseudo-NMI case. Why is this any different?

For NMIs we're making it track PSTATE.A, so I wrote things that way to
make it clear that this is what the end result should be.  I've already
got a change locally which makes this even more explicit by having both
the set and clear cases rather than only the clear case.  You're right
though that we should achieve the same effect by restoring what was
saved in regs->pstate, which is the equivalent of what pseudo NMIs are
doing, so I'll change to do that.
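
Concretely, I'm thinking of something along these lines (untested
sketch; PSR_ALLINT_BIT and _allint_set() are assumed counterparts to
the helpers added earlier in the series, not final names):

	static inline void local_daif_inherit(struct pt_regs *regs)
	{
		unsigned long flags = regs->pstate & DAIF_MASK;

		if (interrupts_enabled(regs))
			trace_hardirqs_on();

		if (system_uses_irq_prio_masking())
			gic_write_pmr(regs->pmr_save);

		/* Restore ALLINT from the value saved at exception entry */
		if (system_uses_nmi()) {
			if (regs->pstate & PSR_ALLINT_BIT)
				_allint_set();
			else
				_allint_clear();
		}

		write_sysreg(flags, daif);
	}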

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 12/14] arm64/nmi: Add handling of superpriority interrupts as NMIs
  2022-11-12 15:17   ` Mark Brown
@ 2022-12-07 11:03     ` Marc Zyngier
  -1 siblings, 0 replies; 96+ messages in thread
From: Marc Zyngier @ 2022-12-07 11:03 UTC (permalink / raw)
  To: Mark Brown
  Cc: Catalin Marinas, Will Deacon, Lorenzo Pieralisi, Mark Rutland,
	Sami Mujawar, Thomas Gleixner, linux-arm-kernel, kvmarm

On Sat, 12 Nov 2022 15:17:06 +0000,
Mark Brown <broonie@kernel.org> wrote:
> 
> Our goal with superpriority interrupts is to use them as NMIs, taking
> advantage of the much smaller regions where they are masked to allow
> prompt handling of the most time critical interrupts.
> 
> When an interrupt is configured with superpriority we will enter EL1 as
> normal for any interrupt; the presence of a superpriority interrupt is
> indicated with a status bit in ISR_EL1. We use this to check for the
> presence of a superpriority interrupt before we unmask anything in
> elX_interrupt(), reporting it without unmasking any interrupts. If no
> superpriority interrupt is present then we handle normal interrupts as
> normal; superpriority interrupts will be unmasked while doing so as a
> result of setting DAIF_PROCCTX.
> 
> Both IRQs and FIQs may be configured with superpriority so we handle
> both, passing an additional root handler into the elX_interrupt()
> function along with the mask for the bit in ISR_EL1 which indicates the
> presence of the relevant kind of superpriority interrupt. These root
> handlers can be configured by the interrupt controller similarly to the
> root handlers for normal interrupts using the newly added
> set_handle_nmi_irq() and set_handle_nmi_fiq() functions.
> 
> Signed-off-by: Mark Brown <broonie@kernel.org>
> ---
>  arch/arm64/include/asm/irq.h     |  2 ++
>  arch/arm64/kernel/entry-common.c | 55 +++++++++++++++++++++++++++-----
>  arch/arm64/kernel/irq.c          | 32 +++++++++++++++++++
>  3 files changed, 81 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/irq.h b/arch/arm64/include/asm/irq.h
> index fac08e18bcd5..2ab05d899bf6 100644
> --- a/arch/arm64/include/asm/irq.h
> +++ b/arch/arm64/include/asm/irq.h
> @@ -8,6 +8,8 @@
>  
>  struct pt_regs;
>  
> +int set_handle_nmi_irq(void (*handle_irq)(struct pt_regs *));
> +int set_handle_nmi_fiq(void (*handle_fiq)(struct pt_regs *));

I'm not overly keen on adding hooks that are not used, and I can't
really foresee a use case for a FIQ NMI at the moment (there is no
plan to use Group-0 interrupts in VMs when the GIC is enabled, and the
only interrupt controller we have that uses FIQ doesn't even have
priorities, let alone NMIs).

>  int set_handle_irq(void (*handle_irq)(struct pt_regs *));
>  #define set_handle_irq	set_handle_irq
>  int set_handle_fiq(void (*handle_fiq)(struct pt_regs *));
> diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
> index 9173fad279af..eb6fc718737e 100644
> --- a/arch/arm64/kernel/entry-common.c
> +++ b/arch/arm64/kernel/entry-common.c
> @@ -278,6 +278,8 @@ static void do_interrupt_handler(struct pt_regs *regs,
>  	set_irq_regs(old_regs);
>  }
>  
> +extern void (*handle_arch_nmi_irq)(struct pt_regs *);
> +extern void (*handle_arch_nmi_fiq)(struct pt_regs *);
>  extern void (*handle_arch_irq)(struct pt_regs *);
>  extern void (*handle_arch_fiq)(struct pt_regs *);
>  
> @@ -453,6 +455,14 @@ asmlinkage void noinstr el1h_64_sync_handler(struct pt_regs *regs)
>  	}
>  }
>  
> +static __always_inline void __el1_nmi(struct pt_regs *regs,
> +				      void (*handler)(struct pt_regs *))
> +{
> +	arm64_enter_nmi(regs);
> +	do_interrupt_handler(regs, handler);
> +	arm64_exit_nmi(regs);
> +}
> +
>  static __always_inline void __el1_pnmi(struct pt_regs *regs,
>  				       void (*handler)(struct pt_regs *))
>  {
> @@ -474,9 +484,19 @@ static __always_inline void __el1_irq(struct pt_regs *regs,
>  
>  	exit_to_kernel_mode(regs);
>  }
> -static void noinstr el1_interrupt(struct pt_regs *regs,
> -				  void (*handler)(struct pt_regs *))
> +
> +static void noinstr el1_interrupt(struct pt_regs *regs, u64 nmi_flag,
> +				  void (*handler)(struct pt_regs *),
> +				  void (*nmi_handler)(struct pt_regs *))
>  {
> +	if (system_uses_nmi()) {
> +		/* Is there a NMI to handle? */
> +		if (read_sysreg(isr_el1) & nmi_flag) {

Better written as:

	if (system_uses_nmi() && (read_sysreg(isr_el1) & nmi_flag)) {

> +			__el1_nmi(regs, nmi_handler);
> +			return;
> +		}
> +	}
> +
>  	write_sysreg(DAIF_PROCCTX_NOIRQ, daif);
>  
>  	if (IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) && !interrupts_enabled(regs))
> @@ -487,12 +507,12 @@ static void noinstr el1_interrupt(struct pt_regs *regs,
>  
>  asmlinkage void noinstr el1h_64_irq_handler(struct pt_regs *regs)
>  {
> -	el1_interrupt(regs, handle_arch_irq);
> +	el1_interrupt(regs, ISR_EL1_IS, handle_arch_irq, handle_arch_nmi_irq);
>  }
>  
>  asmlinkage void noinstr el1h_64_fiq_handler(struct pt_regs *regs)
>  {
> -	el1_interrupt(regs, handle_arch_fiq);
> +	el1_interrupt(regs, ISR_EL1_FS, handle_arch_fiq, handle_arch_nmi_fiq);
>  }
>  
>  asmlinkage void noinstr el1h_64_error_handler(struct pt_regs *regs)
> @@ -701,11 +721,30 @@ asmlinkage void noinstr el0t_64_sync_handler(struct pt_regs *regs)
>  	}
>  }
>  
> -static void noinstr el0_interrupt(struct pt_regs *regs,
> -				  void (*handler)(struct pt_regs *))
> +static void noinstr el0_interrupt(struct pt_regs *regs, u64 nmi_flag,
> +				  void (*handler)(struct pt_regs *),
> +				  void (*nmi_handler)(struct pt_regs *))
>  {
>  	enter_from_user_mode(regs);
>  
> +	if (system_uses_nmi()) {
> +		/* Is there a NMI to handle? */
> +		if (read_sysreg(isr_el1) & nmi_flag) {

Same thing.

> +			/*
> +			 * Any system with FEAT_NMI should not be
> +			 * affected by Spectre v2 so we don't mitigate
> +			 * here.
> +			 */

Why? I don't see a good reason not to mitigate it, especially when the
mitigation is guarded by cpus_have_const_cap(ARM64_SPECTRE_V2). Maybe
you can explain what the rationale is for this.

> +
> +			arm64_enter_nmi(regs);
> +			do_interrupt_handler(regs, nmi_handler);
> +			arm64_exit_nmi(regs);
> +
> +			exit_to_user_mode(regs);
> +			return;
> +		}
> +	}
> +
>  	write_sysreg(DAIF_PROCCTX_NOIRQ, daif);
>  
>  	if (regs->pc & BIT(55))
> @@ -720,7 +759,7 @@ static void noinstr el0_interrupt(struct pt_regs *regs,
>  
>  static void noinstr __el0_irq_handler_common(struct pt_regs *regs)
>  {
> -	el0_interrupt(regs, handle_arch_irq);
> +	el0_interrupt(regs, ISR_EL1_IS, handle_arch_irq, handle_arch_nmi_irq);
>  }
>  
>  asmlinkage void noinstr el0t_64_irq_handler(struct pt_regs *regs)
> @@ -730,7 +769,7 @@ asmlinkage void noinstr el0t_64_irq_handler(struct pt_regs *regs)
>  
>  static void noinstr __el0_fiq_handler_common(struct pt_regs *regs)
>  {
> -	el0_interrupt(regs, handle_arch_fiq);
> +	el0_interrupt(regs, ISR_EL1_FS, handle_arch_fiq, handle_arch_nmi_fiq);
>  }
>  
>  asmlinkage void noinstr el0t_64_fiq_handler(struct pt_regs *regs)
> diff --git a/arch/arm64/kernel/irq.c b/arch/arm64/kernel/irq.c
> index 38dbd3828f13..77a1ea90b244 100644
> --- a/arch/arm64/kernel/irq.c
> +++ b/arch/arm64/kernel/irq.c
> @@ -85,6 +85,16 @@ void do_softirq_own_stack(void)
>  }
>  #endif
>  
> +static void default_handle_nmi_irq(struct pt_regs *regs)
> +{
> +	panic("Superpriority IRQ taken without a root NMI IRQ handler\n");
> +}
> +
> +static void default_handle_nmi_fiq(struct pt_regs *regs)
> +{
> +	panic("Superpriority FIQ taken without a root NMI FIQ handler\n");
> +}
> +
>  static void default_handle_irq(struct pt_regs *regs)
>  {
>  	panic("IRQ taken without a root IRQ handler\n");
> @@ -95,9 +105,31 @@ static void default_handle_fiq(struct pt_regs *regs)
>  	panic("FIQ taken without a root FIQ handler\n");
>  }
>  
> +void (*handle_arch_nmi_irq)(struct pt_regs *) __ro_after_init = default_handle_nmi_irq;
> +void (*handle_arch_nmi_fiq)(struct pt_regs *) __ro_after_init = default_handle_nmi_fiq;
>  void (*handle_arch_irq)(struct pt_regs *) __ro_after_init = default_handle_irq;
>  void (*handle_arch_fiq)(struct pt_regs *) __ro_after_init = default_handle_fiq;
>  
> +int __init set_handle_nmi_irq(void (*handle_nmi_irq)(struct pt_regs *))
> +{
> +	if (handle_arch_nmi_irq != default_handle_nmi_irq)
> +		return -EBUSY;
> +
> +	handle_arch_nmi_irq = handle_nmi_irq;
> +	pr_info("Root superpriority IRQ handler: %ps\n", handle_nmi_irq);
> +	return 0;
> +}
> +
> +int __init set_handle_nmi_fiq(void (*handle_nmi_fiq)(struct pt_regs *))
> +{
> +	if (handle_arch_nmi_fiq != default_handle_nmi_fiq)
> +		return -EBUSY;
> +
> +	handle_arch_nmi_fiq = handle_nmi_fiq;
> +	pr_info("Root superpriority FIQ handler: %ps\n", handle_nmi_fiq);
> +	return 0;
> +}
> +
>  int __init set_handle_irq(void (*handle_irq)(struct pt_regs *))
>  {
>  	if (handle_arch_irq != default_handle_irq)

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 12/14] arm64/nmi: Add handling of superpriority interrupts as NMIs
  2022-12-07 11:03     ` Marc Zyngier
@ 2022-12-07 13:24       ` Mark Brown
  -1 siblings, 0 replies; 96+ messages in thread
From: Mark Brown @ 2022-12-07 13:24 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Catalin Marinas, Will Deacon, Lorenzo Pieralisi, Mark Rutland,
	Sami Mujawar, Thomas Gleixner, linux-arm-kernel, kvmarm


On Wed, Dec 07, 2022 at 11:03:26AM +0000, Marc Zyngier wrote:
> Mark Brown <broonie@kernel.org> wrote:

> > +int set_handle_nmi_irq(void (*handle_irq)(struct pt_regs *));
> > +int set_handle_nmi_fiq(void (*handle_fiq)(struct pt_regs *));

> I'm not overly keen on adding hooks that are not used, and I can't
> really foresee a use case for a FIQ NMI at the moment (there is no
> plan to use Group-0 interrupts in VMs when the GIC is enabled, and the
> only interrupt controller we have that uses FIQ doesn't even have
> priorities, let alone NMIs).

Sure, I don't care either way - I wasn't sure if people would prefer
symmetry/completeness or minimal usage, so I took a guess.  I did
consider that the FIQ user might decide to implement NMIs on the basis
that they're easier to use than priorities, but it's five minutes' work
to add the API back when needed if that does happen.

> > +			/*
> > +			 * Any system with FEAT_NMI should not be
> > +			 * affected by Spectre v2 so we don't mitigate
> > +			 * here.
> > +			 */

> Why? I don't see a good reason not to mitigate it, specially when the
> mitigation is guarded by cpus_have_const_cap(ARM64_SPECTRE_V2). Maybe
> you can explain what the rationale is for this.

Any CPU new enough to have FEAT_NMI is architecturally required to also
have FEAT_CSV2, since FEAT_CSV2 has been mandatory since v8.5 and
FEAT_NMI is a v8.8 feature.  FEAT_CSV2 means the hardware doesn't need
the mitigation, and we check for it in
spectre_v2_get_cpu_hw_mitigation_state().  I was trying to thread the
needle between adding the mitigation for symmetry and defensive
programming on the one hand, and people seeing that the test would
always be false and concluding it should be removed on the other,
especially in a hot path like this.

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 14/14] irqchip/gic-v3: Implement FEAT_GICv3_NMI support
  2022-11-12 15:17   ` Mark Brown
@ 2022-12-07 15:20     ` Marc Zyngier
  -1 siblings, 0 replies; 96+ messages in thread
From: Marc Zyngier @ 2022-12-07 15:20 UTC (permalink / raw)
  To: Mark Brown
  Cc: Catalin Marinas, Will Deacon, Lorenzo Pieralisi, Mark Rutland,
	Sami Mujawar, Thomas Gleixner, linux-arm-kernel, kvmarm

On Sat, 12 Nov 2022 15:17:08 +0000,
Mark Brown <broonie@kernel.org> wrote:
> 
> From: Lorenzo Pieralisi <lpieralisi@kernel.org>
> 
> The FEAT_GICv3_NMI GIC feature coupled with the CPU FEAT_NMI enables
> handling NMI interrupts in HW on aarch64, by adding a superpriority
> interrupt to the existing GIC priority scheme.
> 
> Implement GIC driver support for the FEAT_GICv3_NMI feature.
> 
> Rename the gic_supports_nmi() helper function to gic_supports_pseudo_nmis()
> to make the pseudo NMI code path clearer and more explicit.

Please make this particular change a separate patch. It will make it a
lot clearer what the added logic is. And maybe drop the final 's' in
gic_supports_pseudo_nmis.

> 
> Check, through the ARM64 capability infrastructure, if support
> for FEAT_NMI was detected on the core and the system has not overridden
> the detection and forced pseudo-NMIs enablement.
> 
> If FEAT_NMI is detected, it was not overridden (check embedded in the
> system_uses_nmi() call) and the GIC supports the FEAT_GICv3_NMI feature,
> install an NMI handler and initialize NMI-related HW GIC registers.
> 
> Signed-off-by: Lorenzo Pieralisi <lpieralisi@kernel.org>
> Signed-off-by: Mark Brown <broonie@kernel.org>
> ---
>  drivers/irqchip/irq-gic-v3.c       | 143 ++++++++++++++++++++++++-----
>  include/linux/irqchip/arm-gic-v3.h |   4 +
>  2 files changed, 125 insertions(+), 22 deletions(-)
> 
> diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
> index 34d58567b78d..dc45e1093e7b 100644
> --- a/drivers/irqchip/irq-gic-v3.c
> +++ b/drivers/irqchip/irq-gic-v3.c
> @@ -54,6 +54,7 @@ struct gic_chip_data {
>  	u32			nr_redist_regions;
>  	u64			flags;
>  	bool			has_rss;
> +	bool			has_nmi;
>  	unsigned int		ppi_nr;
>  	struct partition_desc	**ppi_descs;
>  };
> @@ -145,6 +146,20 @@ enum gic_intid_range {
>  	__INVALID_RANGE__
>  };
>  
> +#ifdef CONFIG_ARM64
> +#include <asm/cpufeature.h>
> +
> +static inline bool has_v3_3_nmi(void)

For consistency, something along the lines of 'gic_supports_v3_3_nmi'
would be better. And drop the inline, which the compiler should be able
to figure out on its own.

Also consider placing all the arm64-special stuff under the same
#define (we already have one for some ugly Cavium crap).

> +{
> +	return gic_data.has_nmi && system_uses_nmi();
> +}
> +#else
> +static inline bool has_v3_3_nmi(void)
> +{
> +	return false;
> +}
> +#endif
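
i.e. something like this for both arms (untested):

	#ifdef CONFIG_ARM64
	#include <asm/cpufeature.h>

	static bool gic_supports_v3_3_nmi(void)
	{
		return gic_data.has_nmi && system_uses_nmi();
	}
	#else
	static bool gic_supports_v3_3_nmi(void)
	{
		return false;
	}
	#endif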
> +
>  static enum gic_intid_range __get_intid_range(irq_hw_number_t hwirq)
>  {
>  	switch (hwirq) {
> @@ -350,6 +365,42 @@ static int gic_peek_irq(struct irq_data *d, u32 offset)
>  	return !!(readl_relaxed(base + offset + (index / 32) * 4) & mask);
>  }
>  
> +static DEFINE_RAW_SPINLOCK(irq_controller_lock);

Move this up together with the rest of the static data. And maybe call
it gic_nmi_lock so that we know what it protects.

> +
> +static void gic_irq_configure_nmi(struct irq_data *d, bool enable)
> +{
> +	void __iomem *base, *addr;
> +	u32 offset, index, mask, val;
> +
> +	offset = convert_offset_index(d, GICD_INMIR, &index);
> +	mask = 1 << (index % 32);
> +
> +	if (gic_irq_in_rdist(d))
> +		base = gic_data_rdist_sgi_base();
> +	else
> +		base = gic_data.dist_base;
> +
> +	addr = base + offset + (index / 32) * 4;
> +
> +	raw_spin_lock(&irq_controller_lock);
> +
> +	val = readl_relaxed(addr);
> +	val = enable ? (val | mask) : (val & ~mask);

If you make val an unsigned long, you can write this as:

	__assign_bit(index % 32, &val, enable);

and then you can drop the mask.

> +	writel_relaxed(val, addr);
> +
> +	raw_spin_unlock(&irq_controller_lock);
> +}
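
With that, the whole helper becomes something like (untested):

	static void gic_irq_configure_nmi(struct irq_data *d, bool enable)
	{
		void __iomem *base, *addr;
		u32 offset, index;
		unsigned long val;

		offset = convert_offset_index(d, GICD_INMIR, &index);

		if (gic_irq_in_rdist(d))
			base = gic_data_rdist_sgi_base();
		else
			base = gic_data.dist_base;

		addr = base + offset + (index / 32) * 4;

		raw_spin_lock(&irq_controller_lock);

		/* RMW of the 32-bit INMIR word covering this interrupt */
		val = readl_relaxed(addr);
		__assign_bit(index % 32, &val, enable);
		writel_relaxed(val, addr);

		raw_spin_unlock(&irq_controller_lock);
	}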
> +
> +static void gic_irq_enable_nmi(struct irq_data *d)
> +{
> +	gic_irq_configure_nmi(d, true);
> +}
> +
> +static void gic_irq_disable_nmi(struct irq_data *d)
> +{
> +	gic_irq_configure_nmi(d, false);
> +}
> +
>  static void gic_poke_irq(struct irq_data *d, u32 offset)
>  {
>  	void __iomem *base;
> @@ -395,7 +446,7 @@ static void gic_unmask_irq(struct irq_data *d)
>  	gic_poke_irq(d, GICD_ISENABLER);
>  }
>  
> -static inline bool gic_supports_nmi(void)
> +static inline bool gic_supports_pseudo_nmis(void)
>  {
>  	return IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) &&
>  	       static_branch_likely(&supports_pseudo_nmis);
> @@ -491,7 +542,7 @@ static int gic_irq_nmi_setup(struct irq_data *d)
>  {
>  	struct irq_desc *desc = irq_to_desc(d->irq);
>  
> -	if (!gic_supports_nmi())
> +	if (!gic_supports_pseudo_nmis() && !has_v3_3_nmi())
>  		return -EINVAL;
>  
>  	if (gic_peek_irq(d, GICD_ISENABLER)) {
> @@ -519,7 +570,10 @@ static int gic_irq_nmi_setup(struct irq_data *d)
>  		desc->handle_irq = handle_fasteoi_nmi;
>  	}
>  
> -	gic_irq_set_prio(d, GICD_INT_NMI_PRI);
> +	if (has_v3_3_nmi())
> +		gic_irq_enable_nmi(d);
> +	else
> +		gic_irq_set_prio(d, GICD_INT_NMI_PRI);
>  
>  	return 0;
>  }
> @@ -528,7 +582,7 @@ static void gic_irq_nmi_teardown(struct irq_data *d)
>  {
>  	struct irq_desc *desc = irq_to_desc(d->irq);
>  
> -	if (WARN_ON(!gic_supports_nmi()))
> +	if (WARN_ON(!gic_supports_pseudo_nmis() && !has_v3_3_nmi()))
>  		return;
>  
>  	if (gic_peek_irq(d, GICD_ISENABLER)) {
> @@ -554,7 +608,10 @@ static void gic_irq_nmi_teardown(struct irq_data *d)
>  		desc->handle_irq = handle_fasteoi_irq;
>  	}
>  
> -	gic_irq_set_prio(d, GICD_INT_DEF_PRI);
> +	if (has_v3_3_nmi())
> +		gic_irq_disable_nmi(d);
> +	else
> +		gic_irq_set_prio(d, GICD_INT_DEF_PRI);
>  }
>  
>  static void gic_eoi_irq(struct irq_data *d)
> @@ -674,7 +731,7 @@ static inline void gic_complete_ack(u32 irqnr)
>  
>  static bool gic_rpr_is_nmi_prio(void)
>  {
> -	if (!gic_supports_nmi())
> +	if (!gic_supports_pseudo_nmis())
>  		return false;
>  
>  	return unlikely(gic_read_rpr() == GICD_INT_RPR_PRI(GICD_INT_NMI_PRI));
> @@ -706,7 +763,8 @@ static void __gic_handle_nmi(u32 irqnr, struct pt_regs *regs)
>  	gic_complete_ack(irqnr);
>  
>  	if (generic_handle_domain_nmi(gic_data.domain, irqnr)) {
> -		WARN_ONCE(true, "Unexpected pseudo-NMI (irqnr %u)\n", irqnr);
> +		WARN_ONCE(true, "Unexpected %sNMI (irqnr %u)\n",
> +			  gic_supports_pseudo_nmis() ? "pseudo-" : "", irqnr);
>  		gic_deactivate_unhandled(irqnr);
>  	}
>  }
> @@ -782,9 +840,37 @@ static void __gic_handle_irq_from_irqsoff(struct pt_regs *regs)
>  	__gic_handle_nmi(irqnr, regs);
>  }
>  
> +#ifdef CONFIG_ARM64
> +static inline u64 gic_read_nmiar(void)
> +{
> +	u64 irqstat;
> +
> +	irqstat = read_sysreg_s(SYS_ICC_NMIAR1_EL1);
> +
> +	dsb(sy);
> +
> +	return irqstat;
> +}
> +
> +static asmlinkage void __exception_irq_entry gic_handle_nmi_irq(struct pt_regs *regs)

I think this asmlinkage has been cargo-culted for a long time, and
isn't relevant anymore, as we don't get here directly from some
assembler code.

> +{
> +	u32 irqnr = gic_read_nmiar();

The only reason we indirect reads of IAR is for the sake of
AArch32. Since we don't support NMIs on that architecture, and this
code is entirely behind a #ifdef, just inline the read of
NMIAR1_EL1 here.

> +
> +	__gic_handle_nmi(irqnr, regs);
> +}
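
i.e. something like this, also dropping the asmlinkage as noted above
(untested):

	static void __exception_irq_entry gic_handle_nmi_irq(struct pt_regs *regs)
	{
		u64 irqstat = read_sysreg_s(SYS_ICC_NMIAR1_EL1);

		dsb(sy);

		__gic_handle_nmi((u32)irqstat, regs);
	}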
> +
> +static inline void gic_setup_nmi_handler(void)
> +{
> +	if (has_v3_3_nmi())
> +		set_handle_nmi_irq(gic_handle_nmi_irq);
> +}
> +#else
> +static inline void gic_setup_nmi_handler(void) { }
> +#endif
> +
>  static asmlinkage void __exception_irq_entry gic_handle_irq(struct pt_regs *regs)
>  {
> -	if (unlikely(gic_supports_nmi() && !interrupts_enabled(regs)))
> +	if (unlikely(gic_supports_pseudo_nmis() && !interrupts_enabled(regs)))
>  		__gic_handle_irq_from_irqsoff(regs);
>  	else
>  		__gic_handle_irq_from_irqson(regs);
> @@ -1072,7 +1158,7 @@ static void gic_cpu_sys_reg_init(void)
>  	/* Set priority mask register */
>  	if (!gic_prio_masking_enabled()) {
>  		write_gicreg(DEFAULT_PMR_VALUE, ICC_PMR_EL1);
> -	} else if (gic_supports_nmi()) {
> +	} else if (gic_supports_pseudo_nmis()) {
>  		/*
>  		 * Mismatch configuration with boot CPU, the system is likely
>  		 * to die as interrupt masking will not work properly on all
> @@ -1753,20 +1839,8 @@ static const struct gic_quirk gic_quirks[] = {
>  	}
>  };
>  
> -static void gic_enable_nmi_support(void)
> +static void gic_enable_pseudo_nmis(void)
>  {
> -	int i;
> -
> -	if (!gic_prio_masking_enabled())
> -		return;
> -
> -	ppi_nmi_refs = kcalloc(gic_data.ppi_nr, sizeof(*ppi_nmi_refs), GFP_KERNEL);
> -	if (!ppi_nmi_refs)
> -		return;
> -
> -	for (i = 0; i < gic_data.ppi_nr; i++)
> -		refcount_set(&ppi_nmi_refs[i], 0);
> -
>  	/*
>  	 * Linux itself doesn't use 1:N distribution, so has no need to
>  	 * set PMHE. The only reason to have it set is if EL3 requires it
> @@ -1809,6 +1883,28 @@ static void gic_enable_nmi_support(void)
>  		static_branch_enable(&gic_nonsecure_priorities);
>  
>  	static_branch_enable(&supports_pseudo_nmis);
> +}
> +
> +static void gic_enable_nmi_support(void)
> +{
> +	int i;
> +
> +	if (!gic_prio_masking_enabled() && !has_v3_3_nmi())
> +		return;
> +
> +	ppi_nmi_refs = kcalloc(gic_data.ppi_nr, sizeof(*ppi_nmi_refs), GFP_KERNEL);
> +	if (!ppi_nmi_refs)
> +		return;
> +
> +	for (i = 0; i < gic_data.ppi_nr; i++)
> +		refcount_set(&ppi_nmi_refs[i], 0);
> +
> +	/*
> +	 * Initialize pseudo-NMIs only if GIC driver cannot take advantage
> +	 * of core (FEAT_NMI) and GIC (FEAT_GICv3_NMI) in HW
> +	 */
> +	if (!has_v3_3_nmi())
> +		gic_enable_pseudo_nmis();
>  
>  	if (static_branch_likely(&supports_deactivate_key))
>  		gic_eoimode1_chip.flags |= IRQCHIP_SUPPORTS_NMI;
> @@ -1872,6 +1968,7 @@ static int __init gic_init_bases(void __iomem *dist_base,
>  	irq_domain_update_bus_token(gic_data.domain, DOMAIN_BUS_WIRED);
>  
>  	gic_data.has_rss = !!(typer & GICD_TYPER_RSS);
> +	gic_data.has_nmi = !!(typer & GICD_TYPER_NMI);
>  
>  	if (typer & GICD_TYPER_MBIS) {
>  		err = mbi_init(handle, gic_data.domain);
> @@ -1881,6 +1978,8 @@ static int __init gic_init_bases(void __iomem *dist_base,
>  
>  	set_handle_irq(gic_handle_irq);
>  
> +	gic_setup_nmi_handler();
> +
>  	gic_update_rdist_properties();
>  
>  	gic_dist_init();
> diff --git a/include/linux/irqchip/arm-gic-v3.h b/include/linux/irqchip/arm-gic-v3.h
> index 728691365464..3306456c135f 100644
> --- a/include/linux/irqchip/arm-gic-v3.h
> +++ b/include/linux/irqchip/arm-gic-v3.h
> @@ -30,6 +30,7 @@
>  #define GICD_ICFGR			0x0C00
>  #define GICD_IGRPMODR			0x0D00
>  #define GICD_NSACR			0x0E00
> +#define GICD_INMIR			0x0F80
>  #define GICD_IGROUPRnE			0x1000
>  #define GICD_ISENABLERnE		0x1200
>  #define GICD_ICENABLERnE		0x1400
> @@ -39,6 +40,7 @@
>  #define GICD_ICACTIVERnE		0x1C00
>  #define GICD_IPRIORITYRnE		0x2000
>  #define GICD_ICFGRnE			0x3000
> +#define GICD_INMIRnE			0x3B00
>  #define GICD_IROUTER			0x6000
>  #define GICD_IROUTERnE			0x8000
>  #define GICD_IDREGS			0xFFD0
> @@ -83,6 +85,7 @@
>  #define GICD_TYPER_LPIS			(1U << 17)
>  #define GICD_TYPER_MBIS			(1U << 16)
>  #define GICD_TYPER_ESPI			(1U << 8)
> +#define GICD_TYPER_NMI			(1U << 9)
>  
>  #define GICD_TYPER_ID_BITS(typer)	((((typer) >> 19) & 0x1f) + 1)
>  #define GICD_TYPER_NUM_LPIS(typer)	((((typer) >> 11) & 0x1f) + 1)
> @@ -238,6 +241,7 @@
>  #define GICR_ICFGR0			GICD_ICFGR
>  #define GICR_IGRPMODR0			GICD_IGRPMODR
>  #define GICR_NSACR			GICD_NSACR
> +#define GICR_INMIR0			GICD_INMIR
>  
>  #define GICR_TYPER_PLPIS		(1U << 0)
>  #define GICR_TYPER_VLPIS		(1U << 1)

Otherwise looks reasonable.

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 12/14] arm64/nmi: Add handling of superpriority interrupts as NMIs
  2022-12-07 13:24       ` Mark Brown
@ 2022-12-07 18:57         ` Marc Zyngier
  -1 siblings, 0 replies; 96+ messages in thread
From: Marc Zyngier @ 2022-12-07 18:57 UTC (permalink / raw)
  To: Mark Brown
  Cc: Catalin Marinas, Will Deacon, Lorenzo Pieralisi, Mark Rutland,
	Sami Mujawar, Thomas Gleixner, linux-arm-kernel, kvmarm

On Wed, 07 Dec 2022 13:24:19 +0000,
Mark Brown <broonie@kernel.org> wrote:
> 
> On Wed, Dec 07, 2022 at 11:03:26AM +0000, Marc Zyngier wrote:
> > Mark Brown <broonie@kernel.org> wrote:
> 
> > > +int set_handle_nmi_irq(void (*handle_irq)(struct pt_regs *));
> > > +int set_handle_nmi_fiq(void (*handle_fiq)(struct pt_regs *));
> 
> > I'm not overly keen on adding hooks that are not used, and I can't
> > really foresee a use case for a FIQ NMI at the moment (there is no
> > plan to use Group-0 interrupts in VMs when the GIC is enabled, and the
> > only interrupt controller we have that uses FIQ doesn't even have
> > priorities, let alone NMIs).
> 
> Sure, I don't care either way - I wasn't sure if people would prefer
> symmetry/completeness or minimal usage so took a guess.  I did consider
> that the FIQ user might decide to implement NMIs on the basis that
> they're easier to use than priorities but it's five minutes work to add
> the API back when needed if that does happen.

The FIQ user doesn't even have such a concept in the interrupt
controller, nor does it have the corresponding CPU feature. Let's keep
it minimal for now. As you said, bringing it back is pretty easy.

> 
> > > +			/*
> > > +			 * Any system with FEAT_NMI should not be
> > > +			 * affected by Spectre v2 so we don't mitigate
> > > +			 * here.
> > > +			 */
> 
> > Why? I don't see a good reason not to mitigate it, specially when the
> > mitigation is guarded by cpus_have_const_cap(ARM64_SPECTRE_V2). Maybe
> > you can explain what the rationale is for this.
> 
> Any CPU new enough to have FEAT_NMI is architecturally required to also
> have FEAT_CSV2 since that's mandatory since v8.5 and FEAT_NMI is a v8.8
> feature.  FEAT_CSV2 means the hardware doesn't need the mitigation, and
> we check for it in spectre_v2_get_cpu_hw_mitigation_state().  I was
> trying to thread the needle between doing it for a combination of
> symmetry and defensive programming and people seeing that the test would
> always be false and should therefore be removed, especially in a hot
> path like this. 

"Hypothetically", CPUs that advertise CSV2 could subsequently be found
to actually require extra handling, and I really wouldn't take such a
bet.

The reasoning by which CPU designers follow the ARM feature dependency
rules doesn't hold any water either, and hasn't for years (ARM itself
has been backporting features into CPUs that have a much older base
architecture). You don't have to look very far to find implementations
that cherry-pick whatever they want. The sad reality is that nobody
gives a damn about this rule, and implementers ultimately pick whatever
they see fit.

And given that this is only one static branch away, that the runtime
cost is likely to be a big fat zero on non-affected platforms, and that
the event is vanishingly rare anyway, I'd rather we stay consistent
across the whole interrupt path and keep the mitigation code in.
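
Concretely, the NMI path would just mirror what the regular el0
interrupt path already does (untested sketch):

	if (system_uses_nmi() && (read_sysreg(isr_el1) & nmi_flag)) {
		/* Same Spectre v2 mitigation as the normal IRQ path */
		if (regs->pc & BIT(55))
			arm64_apply_bp_hardening();

		arm64_enter_nmi(regs);
		do_interrupt_handler(regs, nmi_handler);
		arm64_exit_nmi(regs);

		exit_to_user_mode(regs);
		return;
	}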

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 12/14] arm64/nmi: Add handling of superpriority interrupts as NMIs
@ 2022-12-07 18:57         ` Marc Zyngier
  0 siblings, 0 replies; 96+ messages in thread
From: Marc Zyngier @ 2022-12-07 18:57 UTC (permalink / raw)
  To: Mark Brown
  Cc: Catalin Marinas, Will Deacon, Lorenzo Pieralisi, Mark Rutland,
	Sami Mujawar, Thomas Gleixner, linux-arm-kernel, kvmarm

On Wed, 07 Dec 2022 13:24:19 +0000,
Mark Brown <broonie@kernel.org> wrote:
> 
> On Wed, Dec 07, 2022 at 11:03:26AM +0000, Marc Zyngier wrote:
> > Mark Brown <broonie@kernel.org> wrote:
> 
> > > +int set_handle_nmi_irq(void (*handle_irq)(struct pt_regs *));
> > > +int set_handle_nmi_fiq(void (*handle_fiq)(struct pt_regs *));
> 
> > I'm not overly keen on adding hooks that are not used, and I can't
> > really foresee a use case for a FIQ NMI at the moment (there is no
> > plan to use Group-0 interrupts in VMs when the GIC is enabled, and the
> > only interrupt controller we have that uses FIQ doesn't even have
> > priorities, let alone NMIs).
> 
> Sure, I don't care either way - I wasn't sure if people would prefer
> symmetry/completeness or minimal usage so took a guess.  I did consider
> that the FIQ user might decide to implement NMIs on the basis that
> they're easier to use than priorities but it's five minutes work to add
> the API back when needed if that does happen.

The FIQ user doesn't even have such a concept in the interrupt
controller, nor does it have the corresponding CPU feature. Let's keep
it minimal for now. As you said, bringing it back is pretty easy.

> 
> > > +			/*
> > > +			 * Any system with FEAT_NMI should not be
> > > +			 * affected by Spectre v2 so we don't mitigate
> > > +			 * here.
> > > +			 */
> 
> > Why? I don't see a good reason not to mitigate it, especially when the
> > mitigation is guarded by cpus_have_const_cap(ARM64_SPECTRE_V2). Maybe
> > you can explain what the rationale is for this.
> 
> Any CPU new enough to have FEAT_NMI is architecturally required to also
> have FEAT_CSV2 since that's mandatory since v8.5 and FEAT_NMI is a v8.8
> feature.  FEAT_CSV2 means the hardware doesn't need the mitigation, and
> we check for it in spectre_v2_get_cpu_hw_mitigation_state().  I was
> trying to thread the needle between adding it for a combination of
> symmetry and defensive programming, and people seeing that the test would
> always be false and arguing that it should therefore be removed,
> especially in a hot path like this.

"Hypothetically", CPUs that advertise CSV2 could subsequently be found
to actually require extra handling, and I really wouldn't take such a
bet.

The assumption that CPU designers follow the ARM feature dependency
rules doesn't hold any water either, and hasn't for years (ARM itself
has been backporting features into CPUs that have a much older base
architecture). You don't have to look very far to find implementations
that cherry-pick whatever they want. The sad reality is that nobody
gives a damn about this rule, and implementers ultimately pick whatever
they see fit.

And given that this is only one static branch away, that the runtime
cost is likely to be a big fat zero for non-affected platforms, for an
event that is vanishingly rare anyway, I'd rather we stay consistent
in the whole interrupt path and keep the mitigation code in.
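
For reference, the mitigation in question already costs nothing on
unaffected systems: arm64_apply_bp_hardening() returns immediately
unless the ARM64_SPECTRE_V2 capability is set. A minimal sketch of
keeping it in an NMI-from-EL0 path (el0_nmi() is a hypothetical name,
modelled on the kernel's existing el0_interrupt(); the BIT(55) test is
the same suspicious-PC check used there):

	static void noinstr el0_nmi(struct pt_regs *regs,
				    void (*handler)(struct pt_regs *))
	{
		enter_from_user_mode(regs);

		/*
		 * Effectively free on unaffected parts: the helper
		 * hides behind cpus_have_const_cap(ARM64_SPECTRE_V2).
		 */
		if (regs->pc & BIT(55))
			arm64_apply_bp_hardening();

		arm64_enter_nmi(regs);
		handler(regs);
		arm64_exit_nmi(regs);

		exit_to_user_mode(regs);
	}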

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 09/14] KVM: arm64: Hide FEAT_NMI from guests
@ 2022-12-07 19:03         ` Marc Zyngier
  0 siblings, 0 replies; 96+ messages in thread
From: Marc Zyngier @ 2022-12-07 19:03 UTC (permalink / raw)
  To: Mark Brown
  Cc: Catalin Marinas, Will Deacon, Lorenzo Pieralisi, Mark Rutland,
	Sami Mujawar, Thomas Gleixner, linux-arm-kernel, kvmarm

On Mon, 05 Dec 2022 19:03:50 +0000,
Mark Brown <broonie@kernel.org> wrote:
> 
> On Mon, Dec 05, 2022 at 06:06:24PM +0000, Marc Zyngier wrote:
> > Mark Brown <broonie@kernel.org> wrote:
> 
> > > FEAT_NMI is not yet useful to guests pending implementation of vGIC
> > > support. Mask out the feature from the ID register and prevent guests
> > > creating state in ALLINT.ALLINT by activating the trap on write provided
> > > in HCRX_EL2.TALLINT when they are running. There is no trap available
> > > for reads from ALLINT.
> 
> > > We do not need to check for FEAT_HCRX since it is mandatory since v8.7
> > > and FEAT_NMI is a v8.8 feature.
> 
> > And yet you check for it in hyp-stub.S after having checked for
> > FEAT_NMI. What gives?
> 
> Being aware that you have a strong preference for not having safety
> checks for mandatory features I didn't add any here but noted it so
> people could see why they were omitted.  The checks in hyp-stub.S were
> probably written before I'd checked the dependency situation out.
> 
> I can remove those checks if preferred but TBH, given that the failure
> mode in hyp-stub.S is typically going to be dying with no output if
> something goes wrong, it does feel like it's worth the extra couple of
> instructions to double check things just in case, especially with the
> virtual platforms being so easy to misconfigure.

I'm not hell bent on it, and if we can spot the issue early, that's
fine by me. But let's then disable the feature if the implementation
lacks some essential dependencies.

A simple check on ID_AA64MMFR1_EL1.HCX when detecting FEAT_NMI should
do the trick.
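
Something along these lines in the match function, perhaps (a sketch
only: has_nmi() and the generated field name are assumptions based on
this thread, not code from the series):

	static bool has_nmi(const struct arm64_cpu_capabilities *entry,
			    int scope)
	{
		u64 mmfr1;

		if (!has_cpuid_feature(entry, scope))
			return false;

		/*
		 * FEAT_NMI without FEAT_HCRX violates the architected
		 * dependency and leaves us unable to trap guest ALLINT
		 * writes, so treat the feature as absent.
		 */
		mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
		return cpuid_feature_extract_unsigned_field(mmfr1,
					ID_AA64MMFR1_EL1_HCX_SHIFT);
	}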

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 08/14] arm64/cpufeature: Detect PE support for FEAT_NMI
@ 2022-12-07 19:06         ` Marc Zyngier
  0 siblings, 0 replies; 96+ messages in thread
From: Marc Zyngier @ 2022-12-07 19:06 UTC (permalink / raw)
  To: Mark Brown, Suzuki K Poulose
  Cc: Catalin Marinas, Will Deacon, Lorenzo Pieralisi, Mark Rutland,
	Sami Mujawar, Thomas Gleixner, linux-arm-kernel, kvmarm

On Mon, 05 Dec 2022 19:32:01 +0000,
Mark Brown <broonie@kernel.org> wrote:
> 
> On Mon, Dec 05, 2022 at 06:03:07PM +0000, Marc Zyngier wrote:
> > Mark Brown <broonie@kernel.org> wrote:
> 
> > > +#ifdef CONFIG_ARM64_NMI
> > > +	{
> > > +		.desc = "Non-maskable Interrupts",
> > > +		.capability = ARM64_HAS_NMI,
> > > +		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
> 
> > PSEUDO_NMI uses ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE. What is the
> > rationale for using a different policy here?
> 
> I couldn't identify any issues that the kernel would have if the feature
> was present in the hardware but unused, so I didn't see the need to be
> additionally restrictive.  TBH I'm not 100% clear why the _STRICT is
> there for pseudo NMIs; it seemed a bit out of scope for this series to
> try to clean that up though.

Suzuki is your man for this, adding him to the party.

> 
> > The whole thing is way too restrictive: KVM definitely needs to know
> > that the feature exists, even if there is no use for it in the host
> > kernel. There is no reason why guests shouldn't be able to use this
> > even if the host doesn't care about it.
> 
> > Which means you need two properties: one that advertises the
> > availability of the feature, and one that makes use of it in the
> > kernel.
> 
> To be clear I think what you're looking for here is a capability that
> omits the cross-check with pseudo NMIs rather than something that's
> strictly checking the hardware (so ID register overrides will still
> apply)?  I've done that locally; my tree currently has capabilities
> HAS_NMI and USES_NMI.

Something like that, yes. And HAS_NMI should be unconditionally
enabled (command-line overrides notwithstanding).
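
Roughly this shape in the capability table, then (a sketch:
ARM64_USES_NMI, uses_nmi(), nmi_enable() and the generated field names
are assumptions drawn from the discussion, not the series itself):

	{
		/* Raw hardware support, unconditionally detected */
		.desc = "Non-maskable Interrupts present",
		.capability = ARM64_HAS_NMI,
		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
		.sys_reg = SYS_ID_AA64PFR1_EL1,
		.sign = FTR_UNSIGNED,
		.field_pos = ID_AA64PFR1_EL1_NMI_SHIFT,
		.field_width = 4,
		.min_field_value = ID_AA64PFR1_EL1_NMI_IMP,
		.matches = has_cpuid_feature,
	},
	{
		/* Host usage, cross-checked against pseudo-NMI */
		.desc = "Non-maskable Interrupts enabled",
		.capability = ARM64_USES_NMI,
		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
		.matches = uses_nmi,
		.cpu_enable = nmi_enable,
	},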

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 05/14] arm64/asm: Introduce assembly macros for managing ALLINT
@ 2022-12-07 19:14         ` Marc Zyngier
  0 siblings, 0 replies; 96+ messages in thread
From: Marc Zyngier @ 2022-12-07 19:14 UTC (permalink / raw)
  To: Mark Brown
  Cc: Catalin Marinas, Will Deacon, Lorenzo Pieralisi, Mark Rutland,
	Sami Mujawar, Thomas Gleixner, linux-arm-kernel, kvmarm

On Mon, 05 Dec 2022 18:24:09 +0000,
Mark Brown <broonie@kernel.org> wrote:
> 
> On Mon, Dec 05, 2022 at 05:29:54PM +0000, Marc Zyngier wrote:
> > On Sat, 12 Nov 2022 15:16:59 +0000,
> > Mark Brown <broonie@kernel.org> wrote:
> 
> > > In order to allow assembly code to ensure that not even superpriority
> > > interrupts can preempt it provide macros for enabling and disabling
> >                         ^
> >                         \ Insert comma here
> 
That would give "...not even superpriority interrupts can preempt, it"
> which doesn't make sense to me?

Well, clearly the ^ is misaligned, and should probably(?) be after the
'it'. Try reading the sentence out loud, only taking a breath when you
encounter a punctuation sign. That should give you an idea of what is
missing...

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 12/14] arm64/nmi: Add handling of superpriority interrupts as NMIs
@ 2022-12-07 19:15           ` Mark Brown
  0 siblings, 0 replies; 96+ messages in thread
From: Mark Brown @ 2022-12-07 19:15 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Catalin Marinas, Will Deacon, Lorenzo Pieralisi, Mark Rutland,
	Sami Mujawar, Thomas Gleixner, linux-arm-kernel, kvmarm


On Wed, Dec 07, 2022 at 06:57:32PM +0000, Marc Zyngier wrote:
> Mark Brown <broonie@kernel.org> wrote:

> > Any CPU new enough to have FEAT_NMI is architecturally required to also
> > have FEAT_CSV2 since that's mandatory since v8.5 and FEAT_NMI is a v8.8
> > feature.  FEAT_CSV2 means the hardware doesn't need the mitigation, and

> "Hypothetically", CPUs that advertise CSV2 could subsequently be found
> to actually require extra handling, and I really wouldn't take such a
> bet.

> The assumption that CPU designers follow the ARM feature dependency
> rules doesn't hold any water either, and hasn't for years (ARM itself
> has been backporting features into CPUs that have a much older base
> architecture). You don't have to look very far to find implementations
> that cherry-pick whatever they want. The sad reality is that nobody
> gives a damn about this rule, and implementers ultimately pick whatever
> they see fit.

My guess would be that the Spectre stuff is generally considered
sufficiently important that it'd also get mitigated but as you say you
never know.

> And given that this is only one static branch away, that the runtime
> cost is likely to be a big fat zero for non-affected platforms, for an
> event that is vanishingly rare anyway, I'd rather we stay consistent
> in the whole interrupt path and keep the mitigation code in.

Yeah, that's certainly a valid argument and I do tend to agree that it's
better defensive programming - like I said, I was trying to thread a
needle between the two anticipated review reactions.  I'll hold off for
now in case anyone else has strong opinions in the other direction,
though.

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 04/14] arm64/sysreg: Add definitions for immediate versions of MSR ALLINT
@ 2022-12-07 19:18         ` Marc Zyngier
  0 siblings, 0 replies; 96+ messages in thread
From: Marc Zyngier @ 2022-12-07 19:18 UTC (permalink / raw)
  To: Mark Brown
  Cc: Catalin Marinas, Will Deacon, Lorenzo Pieralisi, Mark Rutland,
	Sami Mujawar, Thomas Gleixner, linux-arm-kernel, kvmarm

On Mon, 05 Dec 2022 17:11:38 +0000,
Mark Brown <broonie@kernel.org> wrote:
> 
> On Mon, Dec 05, 2022 at 04:38:53PM +0000, Marc Zyngier wrote:
> > Mark Brown <broonie@kernel.org> wrote:
> 
> > >  }
> > > +
> > >  #endif
> 
> > Spurious change?
> 
> Yes.
> 
> > > +++ b/arch/arm64/include/asm/nmi.h
> > > @@ -0,0 +1,18 @@
> > > +/* SPDX-License-Identifier: GPL-2.0-only */
> 
> > > +static __always_inline void _allint_set(void)
> > > +{
> > > +	asm volatile(__msr_s(SYS_ALLINT_SET, "xzr"));
> > > +}
> 
> > If this *really* must be a separate include file, it should at least
> > directly include its dependencies. My gut feeling is that it would be
> > better placed in daifflags.h.
> 
> Yeah, I was swithering on that.  Some versions of the code have had more
> in here at which point having the separate header made more sense.  I
> think part of the problem here is that we should do some combination of
> renaming daifflags.h or layering a more abstracted API on top of it;
> putting things that are not DAIF into daifflags.h doesn't feel great.
> 
> > > @@ -126,6 +126,8 @@
> > >   * System registers, organised loosely by encoding but grouped together
> > >   * where the architected name contains an index. e.g. ID_MMFR<n>_EL1.
> > >   */
> > > +#define SYS_ALLINT_CLR			sys_reg(0, 1, 4, 0, 0)
> > > +#define SYS_ALLINT_SET			sys_reg(0, 1, 4, 1, 0)
> 
> > This only covers the immediate versions of ALLINT, and misses the
> > definition for the register version, aka sys_reg(3, 0, 4, 3, 0).
> 
> That is already present upstream, we only need to add the immediate
> versions which the generated header stuff doesn't have any model for
> yet.

Ah, missed that one, thanks.

Out of curiosity, what is missing in the generator to deal with this
stuff?

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 09/14] KVM: arm64: Hide FEAT_NMI from guests
@ 2022-12-07 19:33           ` Mark Brown
  0 siblings, 0 replies; 96+ messages in thread
From: Mark Brown @ 2022-12-07 19:33 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Catalin Marinas, Will Deacon, Lorenzo Pieralisi, Mark Rutland,
	Sami Mujawar, Thomas Gleixner, linux-arm-kernel, kvmarm


On Wed, Dec 07, 2022 at 07:03:28PM +0000, Marc Zyngier wrote:
> Mark Brown <broonie@kernel.org> wrote:
> > On Mon, Dec 05, 2022 at 06:06:24PM +0000, Marc Zyngier wrote:
> > > Mark Brown <broonie@kernel.org> wrote:

> > > > We do not need to check for FEAT_HCRX since it is mandatory since v8.7
> > > > and FEAT_NMI is a v8.8 feature.

> > > And yet you check for it in hyp-stub.S after having checked for
> > > FEAT_NMI. What gives?

> > Being aware that you have a strong preference for not having safety
> > checks for mandatory features I didn't add any here but noted it so
> > people could see why they were omitted.  The checks in hyp-stub.S were
> > probably written before I'd checked the dependency situation out.

> > I can remove those checks if preferred but TBH, given that the failure
> > mode in hyp-stub.S is typically going to be dying with no output if
> > something goes wrong, it does feel like it's worth the extra couple of
> > instructions to double check things just in case, especially with the
> > virtual platforms being so easy to misconfigure.

> I'm not hell bent on it, and if we can spot the issue early, that's
> fine by me. But let's then disable the feature if the implementation
> lacks some essential dependencies.

> A simple check on ID_AA64MMFR1_EL1.HCX when detecting FEAT_NMI should
> do the trick.

Hrm, we should really only check when EL2 is implemented, since a
system without EL2 is a valid configuration in which HCX is moot.
There's also a case for only checking when we entered the kernel at
EL2: otherwise any traps to EL2 are not really our problem, and I can
see why EL2 might not let EL1 know about the feature even though it
really should.
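
A sketch of that narrower check (the helper name is invented;
is_hyp_mode_available() is the existing test for whether we entered
the kernel at EL2):

	static bool nmi_hcx_ok(void)
	{
		u64 mmfr1;

		/* Booted at EL1: traps to EL2 are not our problem */
		if (!is_hyp_mode_available())
			return true;

		mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
		return cpuid_feature_extract_unsigned_field(mmfr1,
					ID_AA64MMFR1_EL1_HCX_SHIFT);
	}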

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 04/14] arm64/sysreg: Add definitions for immediate versions of MSR ALLINT
@ 2022-12-07 19:42           ` Mark Brown
  0 siblings, 0 replies; 96+ messages in thread
From: Mark Brown @ 2022-12-07 19:42 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Catalin Marinas, Will Deacon, Lorenzo Pieralisi, Mark Rutland,
	Sami Mujawar, Thomas Gleixner, linux-arm-kernel, kvmarm


On Wed, Dec 07, 2022 at 07:18:35PM +0000, Marc Zyngier wrote:
> Mark Brown <broonie@kernel.org> wrote:

> > > > +#define SYS_ALLINT_CLR			sys_reg(0, 1, 4, 0, 0)
> > > > +#define SYS_ALLINT_SET			sys_reg(0, 1, 4, 1, 0)

> > > This only covers the immediate versions of ALLINT, and misses the
> > > definition for the register version, aka sys_reg(3, 0, 4, 3, 0).

> > That is already present upstream, we only need to add the immediate
> > versions which the generated header stuff doesn't have any model for
> > yet.

> Ah, missed that one, thanks.

> Out of curiosity, what is missing in the generator to deal with this
> stuff?

We'll need to teach it about registers that don't have any bitfields
defined; at the minute it requires that all the bits in the register are
specified, but these don't have anything to specify.  Instead the value
written is part of the register encoding, and they can only be used in
an MSR with, IIRC, only xzr valid as the source register.
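
For illustration, covering these would take something like a new
fieldless block type in arch/arm64/tools/sysreg (the syntax below is
invented purely to show the gap; only the encodings come from the
patch):

	# Hypothetical block type for immediate-only registers
	# that carry no bitfields:
	SysregImmediate	ALLINT_CLR	0	1	4	0	0
	EndSysregImmediate
	SysregImmediate	ALLINT_SET	0	1	4	1	0
	EndSysregImmediate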

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 10/14] arm64/nmi: Manage masking for superpriority interrupts along with DAIF
@ 2022-12-08 17:19     ` Lorenzo Pieralisi
  0 siblings, 0 replies; 96+ messages in thread
From: Lorenzo Pieralisi @ 2022-12-08 17:19 UTC (permalink / raw)
  To: Mark Brown
  Cc: Catalin Marinas, Will Deacon, Marc Zyngier, Mark Rutland,
	Sami Mujawar, Thomas Gleixner, linux-arm-kernel, kvmarm

On Sat, Nov 12, 2022 at 03:17:04PM +0000, Mark Brown wrote:
> As we do for pseudo NMIs add code to our DAIF management which keeps
> superpriority interrupts unmasked when we have asynchronous exceptions
> enabled. Since superpriority interrupts are not masked through DAIF like
> pseudo NMIs are we also need to modify the assembler macros for managing
> DAIF to ensure that the masking is done in the assembly code. At present
> users of the assembly macros always mask pseudo NMIs.
> 
> There is a difference in the actual handling between pseudo NMIs
> and superpriority interrupts in the assembly save_and_disable_irq and
> restore_irq macros: these cover both interrupts and FIQs using DAIF
> without regard for the use of pseudo NMIs, so they also mask those, but
> are not updated here to mask superpriority interrupts. Given the names it is not
> clear that the behaviour with pseudo NMIs is particularly intentional,
> and in any case these macros are only used in the implementation of
> alternatives for software PAN while hardware PAN has been mandatory
> since v8.1 so it is not anticipated that practical systems with support
> for FEAT_NMI will ever execute the affected code.
> 
> This should be a conservative set of masked regions, we may be able to
> relax this in future, but this should represent a good starting point.

I think I found a nasty spot. We are currently not handling ALLINT in
arch_local_irq_enable/disable(). The issue I am facing is that we might
end up preempting in IRQ context with ALLINT set in the exception path
- arm64_preempt_schedule_irq() - which means we are running with all
IRQs masked (that's normal; what's not normal is that local_irq_enable()
does not clear ALLINT, see below).

When we schedule (preempt_schedule_irq()) we do require a
local_irq_enable() to enable IRQs; ALLINT is still set, so
local_irq_enable() does not do what is expected and we end up calling
__schedule() with IRQs disabled, which does not seem right.
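
For reference, this is the loop in kernel/sched/core.c that goes
wrong, trimmed down (the ALLINT comments are annotations for this
discussion, not part of the source):

	asmlinkage __visible void __sched preempt_schedule_irq(void)
	{
		...
		do {
			preempt_disable();
			/* Clears PSTATE.I/F but leaves ALLINT set */
			local_irq_enable();
			/* So this runs with everything still masked */
			__schedule(SM_PREEMPT);
			local_irq_disable();
			sched_preempt_enable_no_resched();
		} while (need_resched());
		...
	}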

Now we need to debate what the fix for this can be but nonetheless
it is something to be addressed.

Clearing and setting ALLINT in arch_local_irq_enable()/disable()
seems to solve the issue (now I moved on to debugging something
else, will post the outcome here because this fix does not seem
to fix the issue completely or I am hitting another bug).

Lorenzo

> Signed-off-by: Mark Brown <broonie@kernel.org>
> ---
>  arch/arm64/include/asm/assembler.h | 11 +++++++++++
>  arch/arm64/include/asm/daifflags.h | 18 ++++++++++++++++++
>  2 files changed, 29 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
> index 88d9779a83c0..e85a7e9af9ae 100644
> --- a/arch/arm64/include/asm/assembler.h
> +++ b/arch/arm64/include/asm/assembler.h
> @@ -52,19 +52,30 @@ alternative_else_nop_endif
>  
>  	.macro save_and_disable_daif, flags
>  	mrs	\flags, daif
> +        disable_allint
>  	msr	daifset, #0xf
>  	.endm
>  
>  	.macro disable_daif
> +        disable_allint
>  	msr	daifset, #0xf
>  	.endm
>  
>  	.macro enable_daif
>  	msr	daifclr, #0xf
> +	enable_allint
>  	.endm
>  
>  	.macro	restore_daif, flags:req
>  	msr	daif, \flags
> +#ifdef CONFIG_ARM64_NMI
> +alternative_if ARM64_HAS_NMI
> +	/* If async exceptions are unmasked we can take NMIs */
> +	tbnz	\flags, #8, 2004f
> +	msr_s	SYS_ALLINT_CLR, xzr
> +2004:
> +alternative_else_nop_endif
> +#endif
>  	.endm
>  
>  	/* IRQ/FIQ are the lowest priority flags, unconditionally unmask the rest. */
> diff --git a/arch/arm64/include/asm/daifflags.h b/arch/arm64/include/asm/daifflags.h
> index b3bed2004342..fda73976068f 100644
> --- a/arch/arm64/include/asm/daifflags.h
> +++ b/arch/arm64/include/asm/daifflags.h
> @@ -10,6 +10,7 @@
>  #include <asm/arch_gicv3.h>
>  #include <asm/barrier.h>
>  #include <asm/cpufeature.h>
> +#include <asm/nmi.h>
>  #include <asm/ptrace.h>
>  
>  #define DAIF_PROCCTX		0
> @@ -35,6 +36,9 @@ static inline void local_daif_mask(void)
>  	if (system_uses_irq_prio_masking())
>  		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
>  
> +	if (system_uses_nmi())
> +		_allint_set();
> +
>  	trace_hardirqs_off();
>  }
>  
> @@ -50,6 +54,12 @@ static inline unsigned long local_daif_save_flags(void)
>  			flags |= PSR_I_BIT | PSR_F_BIT;
>  	}
>  
> +	if (system_uses_nmi()) {
> +		/* If IRQs are masked with ALLINT, reflect in in the flags */
> +		if (read_sysreg_s(SYS_ALLINT) & ALLINT_ALLINT)
> +			flags |= PSR_I_BIT | PSR_F_BIT;
> +	}
> +
>  	return flags;
>  }
>  
> @@ -114,6 +124,10 @@ static inline void local_daif_restore(unsigned long flags)
>  		gic_write_pmr(pmr);
>  	}
>  
> +	/* If we can take asynchronous errors we can take NMIs */
> +	if (system_uses_nmi() && !(flags & PSR_A_BIT))
> +		_allint_clear();
> +
>  	write_sysreg(flags, daif);
>  
>  	if (irq_disabled)
> @@ -131,6 +145,10 @@ static inline void local_daif_inherit(struct pt_regs *regs)
>  	if (interrupts_enabled(regs))
>  		trace_hardirqs_on();
>  
> +	/* If we can take asynchronous errors we can take NMIs */
> +	if (system_uses_nmi() && !(flags & PSR_A_BIT))
> +		_allint_clear();
> +
>  	if (system_uses_irq_prio_masking())
>  		gic_write_pmr(regs->pmr_save);
>  
> -- 
> 2.30.2
> 

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 10/14] arm64/nmi: Manage masking for superpriority interrupts along with DAIF
@ 2022-12-12 14:03       ` Mark Brown
  0 siblings, 0 replies; 96+ messages in thread
From: Mark Brown @ 2022-12-12 14:03 UTC (permalink / raw)
  To: Lorenzo Pieralisi
  Cc: Catalin Marinas, Will Deacon, Marc Zyngier, Mark Rutland,
	Sami Mujawar, Thomas Gleixner, linux-arm-kernel, kvmarm


On Thu, Dec 08, 2022 at 06:19:02PM +0100, Lorenzo Pieralisi wrote:

> I think I found a nasty spot. We are currently not handling ALLINT in
> arch_local_irq_enable/disable(). The issue I am facing is that we might
> end up preempting in IRQ context with ALLINT set in the exception path
> - arm64_preempt_schedule_irq() - which means we are running with all
> IRQs masked (that's normal; what's not normal is that local_irq_enable()
> does not clear ALLINT, see below).

Right, and handling ALLINT in arch_local_irq_enable/disable() isn't
exactly ideal since it means that whenever we mask interrupts we also
mask NMIs, which somewhat reduces the value of having them.

> When we schedule (preempt_schedule_irq()) we do require a
> local_irq_enable() to enable IRQs; ALLINT is still set, so
> local_irq_enable() does not do what is expected and we end up calling
> __schedule() with IRQs disabled, which does not seem right.

> Now we need to debate what the fix for this can be but nonetheless
> it is something to be addressed.

A first pass suggests that we should be handling this like we do for
other preemptions and returning early from arm64_preempt_schedule_irq()
if ALLINT is masked.  If we are handling a regular IRQ then ALLINT will
be unmasked and we'll call into preempt_schedule_irq(); if we're
handling an NMI then ALLINT will still be masked so we don't attempt to
schedule.  I've pushed out a change which does this but not yet properly
tested it.
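
i.e. something like the below in entry-common.c (a sketch
reconstructed from the description above, not copied from the branch;
system_uses_nmi() and the ALLINT accessors are the ones from this
series):

	static void __sched arm64_preempt_schedule_irq(void)
	{
		/*
		 * ALLINT still being set means this is an NMI rather
		 * than a regular IRQ: don't try to schedule. The rest
		 * of the existing function body is elided.
		 */
		if (system_uses_nmi() &&
		    (read_sysreg_s(SYS_ALLINT) & ALLINT_ALLINT))
			return;

		...
	}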

> Clearing and setting ALLINT in arch_local_irq_enable()/disable()
> seems to solve the issue (now I moved on to debugging something
> else, will post the outcome here because this fix does not seem
> to fix the issue completely or I am hitting another bug).

Do you have any specifics on how you're seeing problems?  You did
mention boot stalls offline but I've not been able to reproduce this
locally in a way that I can identify (based on your mail I've now made
sure I've got preemption enabled).

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 10/14] arm64/nmi: Manage masking for superpriority interrupts along with DAIF
@ 2022-12-12 14:40     ` Mark Rutland
  0 siblings, 0 replies; 96+ messages in thread
From: Mark Rutland @ 2022-12-12 14:40 UTC (permalink / raw)
  To: Mark Brown
  Cc: Catalin Marinas, Will Deacon, Marc Zyngier, Lorenzo Pieralisi,
	Sami Mujawar, Thomas Gleixner, linux-arm-kernel, kvmarm

On Sat, Nov 12, 2022 at 03:17:04PM +0000, Mark Brown wrote:
> As we do for pseudo NMIs add code to our DAIF management which keeps
> superpriority interrupts unmasked when we have asynchronous exceptions
> enabled.

Please, no. NAK to pretending this is part of DAIF.

The existing hacks to bodge pseudo-NMI into the DAIF management code are
convoluted, difficult to maintain, and they have known cases where they
*cannot* do the right thing. Those existing hacks have proved to be more
trouble than they're worth, and continuing down that path makes things worse.

We must clean up the existing approach *before* we add the real NMI support.

As mentioned elsewhere, I think this means reworking the way we manage
exception masks, and at least:

(a) Adding entry-specific helpers to manipulate abstract exception masks
    covering DAIF + PMR + ALLINT. Those need unmask-at-entry and mask-at-exit
    behaviour, and today only need to manage DAIF + PMR.

    It should be possible to do this ahead of ALLINT / NMI support.

(b) Adding new "logical exception mask" helpers that treat DAIF + PMR + ALLINT
    as separate elements.

    This way we can always save+track all elements if we need to (e.g. for
    irqflag tracking), but we never have to fake up a DAIF element.
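
To make (b) concrete, think of something like the below, where every
name is invented purely for illustration:

	/* One logical mask state; no element is ever faked up */
	struct arch_irq_state {
		u64	daif;	/* PSTATE.DAIF */
		u64	pmr;	/* ICC_PMR_EL1, pseudo-NMI */
		u64	allint;	/* PSTATE.ALLINT, FEAT_NMI */
	};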

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 10/14] arm64/nmi: Manage masking for superpriority interrupts along with DAIF
@ 2022-12-13  8:37         ` Lorenzo Pieralisi
  0 siblings, 0 replies; 96+ messages in thread
From: Lorenzo Pieralisi @ 2022-12-13  8:37 UTC (permalink / raw)
  To: Mark Brown
  Cc: Catalin Marinas, Will Deacon, Marc Zyngier, Mark Rutland,
	Sami Mujawar, Thomas Gleixner, linux-arm-kernel, kvmarm

On Mon, Dec 12, 2022 at 02:03:33PM +0000, Mark Brown wrote:
> On Thu, Dec 08, 2022 at 06:19:02PM +0100, Lorenzo Pieralisi wrote:
> 
> > I think I found a nasty spot. We are currently not handling ALLINT in
> > arch_local_irq_enable/disable(). The issue I am facing is that we might
> > end up preempting in IRQ context with ALLINT set in the exception path
> > - arm64_preempt_schedule_irq() - which means we are running with all
> > IRQs masked (that's normal; what's not normal is that local_irq_enable()
> > does not clear ALLINT, see below).
> 
> Right, and handling ALLINT in arch_local_irq_enable/disable() isn't
> exactly ideal since it means that whenever we mask interrupts we also
> mask NMIs which somewhat reduces the value.

Understood but ALLINT should be cleared before scheduling on the
exception path that leads to preemption - where it is done to
be seen.

> > When we schedule (preempt_schedule_irq()) we do require a
> > local_irq_enable() to enable IRQs; ALLINT is still set, so
> > local_irq_enable() does not do what is expected so we are calling
> > __schedule() with IRQs disabled, which does not seem right.
> 
> > Now we need to debate what the fix for this can be but nonetheless
> > it is something to be addressed.
> 
> A first pass suggests that we should be handling this like we do for
> other preemptions and returning early from arm64_preempt_schedule_irq()
> if ALLINT is masked.  If we are handling a regular IRQ then ALLINT will
> be unmasked and we'll call into preempt_schedule_irq(), if we're
> handling a NMI then ALLINT will still be masked so we don't attempt to
> schedule.  I've pushed out a change which does this but not yet properly
> tested it.

Yes that's what should happen (actually if we are handling an NMI we
should not even get to the point where a decision about preemption is
made el1_interrupt() just returns).

> > Clearing and setting ALLINT in arch_local_irq_enable()/disable()
> > seems to solve the issue (now I moved on to debugging something
> > else, will post the outcome here because this fix does not seem
> > to fix the issue completely or I am hitting another bug).
> 
> Do you have any specifics on how you're seeing problems?  You did
> mention boot stalls offline but I've not been able to to reproduce this
> locally in a way that I can identify (based on your mail now I've made
> sure I've got preemption enabled).

defconfig, barebone rootfs, boot stalls (because we are scheduling with
IRQs off and there is nothing clearing ALLINT in the preemption path
so system hangs).

I don't know why you can't reproduce it don't know if it is the Kconfig
or file system configuration (or the FVP params - for this to show up
FEAT_NMI must obviously be enabled - I am testing the branch Marc posted
so that I can test the vGIC patches but this is definitely not a vGIC
bug).

Thanks,
Lorenzo

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH v2 10/14] arm64/nmi: Manage masking for superpriority interrupts along with DAIF
  2022-12-13  8:37         ` Lorenzo Pieralisi
@ 2022-12-13 13:15           ` Mark Brown
  -1 siblings, 0 replies; 96+ messages in thread
From: Mark Brown @ 2022-12-13 13:15 UTC (permalink / raw)
  To: Lorenzo Pieralisi
  Cc: Catalin Marinas, Will Deacon, Marc Zyngier, Mark Rutland,
	Sami Mujawar, Thomas Gleixner, linux-arm-kernel, kvmarm


On Tue, Dec 13, 2022 at 09:37:56AM +0100, Lorenzo Pieralisi wrote:
> On Mon, Dec 12, 2022 at 02:03:33PM +0000, Mark Brown wrote:

> > A first pass suggests that we should be handling this like we do for
> > other preemptions and returning early from arm64_preempt_schedule_irq()
> > if ALLINT is masked.  If we are handling a regular IRQ then ALLINT will
> > be unmasked and we'll call into preempt_schedule_irq(); if we're
> > handling an NMI then ALLINT will still be masked so we don't attempt to
> > schedule.  I've pushed out a change which does this but have not yet
> > properly tested it.

> Yes, that's what should happen (actually, if we are handling an NMI we
> should not even get to the point where a decision about preemption is
> made; el1_interrupt() just returns).

OK, great.  It would be good to understand where the preemption is
happening; I suspect you're hitting it from some place I'm not.  I did
verify that I'm seeing preemptions during boot, it just wasn't stalling
for me.

> > Do you have any specifics on how you're seeing problems?  You did
> > mention boot stalls offline but I've not been able to reproduce this
> > locally in a way that I can identify (based on your mail I've now made
> > sure I've got preemption enabled).

> defconfig, barebones rootfs, boot stalls (because we are scheduling with
> IRQs off and there is nothing clearing ALLINT in the preemption path,
> so the system hangs).

> I don't know why you can't reproduce it; I don't know if it is the
> Kconfig or the file system configuration (or the FVP params - for this
> to show up FEAT_NMI must obviously be enabled). I am testing the branch
> Marc posted so that I can test the vGIC patches, but this is definitely
> not a vGIC bug.

It might be Marc's changes, I guess; I didn't pull them in, but I don't
see anything there that should be doing anything without running a guest
either...

* Re: [PATCH v2 10/14] arm64/nmi: Manage masking for superpriority interrupts along with DAIF
  2022-12-12 14:40     ` Mark Rutland
@ 2022-12-15 13:21       ` Mark Brown
  -1 siblings, 0 replies; 96+ messages in thread
From: Mark Brown @ 2022-12-15 13:21 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Catalin Marinas, Will Deacon, Marc Zyngier, Lorenzo Pieralisi,
	Sami Mujawar, Thomas Gleixner, linux-arm-kernel, kvmarm


On Mon, Dec 12, 2022 at 02:40:08PM +0000, Mark Rutland wrote:

> Please, no. NAK to pretending this is part of DAIF.

> The existing hacks to bodge pseudo-NMI into the DAIF management code are
> convoluted, difficult to maintain, and they have known cases where they
> *cannot* do the right thing. Those existing hacks have proved to be more
> trouble than they're worth, and continuing down that path makes things worse.

As discussed elsewhere, I do agree that the current "DAIF is an
abstraction" approach isn't great and cleanup is needed, but I have a
hard time seeing these changes as making things appreciably worse than
they already are with pseudo NMIs.  In any case there was some demand
for the patches to be out there, and for collaboration on the GIC
parts, so I'll keep posting the patches in parallel with the
refactoring.

* Re: [PATCH v2 10/14] arm64/nmi: Manage masking for superpriority interrupts along with DAIF
  2022-12-13 13:15           ` Mark Brown
@ 2022-12-15 13:32             ` Marc Zyngier
  -1 siblings, 0 replies; 96+ messages in thread
From: Marc Zyngier @ 2022-12-15 13:32 UTC (permalink / raw)
  To: Mark Brown
  Cc: Lorenzo Pieralisi, Catalin Marinas, Will Deacon, Mark Rutland,
	Sami Mujawar, Thomas Gleixner, linux-arm-kernel, kvmarm

On 2022-12-13 13:15, Mark Brown wrote:
> On Tue, Dec 13, 2022 at 09:37:56AM +0100, Lorenzo Pieralisi wrote:
>> On Mon, Dec 12, 2022 at 02:03:33PM +0000, Mark Brown wrote:
> 
> It might be Marc's changes I guess, I didn't pull them in but I don't
> see anything there that should be doing anything without running a 
> guest
> either...

Quite. And even with that, the virtual state is, by definition,
isolated from the physical state.

         M.
-- 
Jazz is not dead. It just smells funny...

end of thread

Thread overview: 96+ messages
2022-11-12 15:16 [PATCH v2 00/14] arm64/nmi: Support for FEAT_NMI Mark Brown
2022-11-12 15:16 ` [PATCH v2 01/14] arm64/booting: Document boot requirements " Mark Brown
2022-11-12 15:16 ` [PATCH v2 02/14] arm64/sysreg: Add definition for ICC_NMIAR1_EL1 Mark Brown
2022-11-12 15:16 ` [PATCH v2 03/14] arm64/sysreg: Add definition of ISR_EL1 Mark Brown
2022-12-05 16:45   ` Marc Zyngier
2022-11-12 15:16 ` [PATCH v2 04/14] arm64/sysreg: Add definitions for immediate versions of MSR ALLINT Mark Brown
2022-12-05 16:38   ` Marc Zyngier
2022-12-05 17:11     ` Mark Brown
2022-12-07 19:18       ` Marc Zyngier
2022-12-07 19:42         ` Mark Brown
2022-11-12 15:16 ` [PATCH v2 05/14] arm64/asm: Introduce assembly macros for managing ALLINT Mark Brown
2022-12-05 17:29   ` Marc Zyngier
2022-12-05 18:24     ` Mark Brown
2022-12-07 19:14       ` Marc Zyngier
2022-11-12 15:17 ` [PATCH v2 06/14] arm64/hyp-stub: Enable access to ALLINT Mark Brown
2022-12-05 17:50   ` Marc Zyngier
2022-11-12 15:17 ` [PATCH v2 07/14] arm64/idreg: Add an override for FEAT_NMI Mark Brown
2022-11-12 15:17 ` [PATCH v2 08/14] arm64/cpufeature: Detect PE support " Mark Brown
2022-12-05 18:03   ` Marc Zyngier
2022-12-05 19:32     ` Mark Brown
2022-12-07 19:06       ` Marc Zyngier
2022-11-12 15:17 ` [PATCH v2 09/14] KVM: arm64: Hide FEAT_NMI from guests Mark Brown
2022-12-05 18:06   ` Marc Zyngier
2022-12-05 19:03     ` Mark Brown
2022-12-07 19:03       ` Marc Zyngier
2022-12-07 19:33         ` Mark Brown
2022-11-12 15:17 ` [PATCH v2 10/14] arm64/nmi: Manage masking for superpriority interrupts along with DAIF Mark Brown
2022-12-05 18:47   ` Marc Zyngier
2022-12-05 20:52     ` Mark Brown
2022-12-08 17:19   ` Lorenzo Pieralisi
2022-12-12 14:03     ` Mark Brown
2022-12-13  8:37       ` Lorenzo Pieralisi
2022-12-13 13:15         ` Mark Brown
2022-12-15 13:32           ` Marc Zyngier
2022-12-12 14:40   ` Mark Rutland
2022-12-15 13:21     ` Mark Brown
2022-11-12 15:17 ` [PATCH v2 11/14] arm64/irq: Document handling of FEAT_NMI in irqflags.h Mark Brown
2022-11-12 15:17 ` [PATCH v2 12/14] arm64/nmi: Add handling of superpriority interrupts as NMIs Mark Brown
2022-12-07 11:03   ` Marc Zyngier
2022-12-07 13:24     ` Mark Brown
2022-12-07 18:57       ` Marc Zyngier
2022-12-07 19:15         ` Mark Brown
2022-11-12 15:17 ` [PATCH v2 13/14] arm64/nmi: Add Kconfig for NMI Mark Brown
2022-11-12 15:17 ` [PATCH v2 14/14] irqchip/gic-v3: Implement FEAT_GICv3_NMI support Mark Brown
2022-12-07 15:20   ` Marc Zyngier
2022-12-02 18:42 ` [PATCH v2 00/14] arm64/nmi: Support for FEAT_NMI Marc Zyngier
2022-12-03  8:25   ` Lorenzo Pieralisi
2022-12-03  9:45     ` Marc Zyngier