* [PATCH 00/25] KVM/arm64: VM configuration enforcement
@ 2024-01-22 20:18 ` Marc Zyngier
  0 siblings, 0 replies; 114+ messages in thread
From: Marc Zyngier @ 2024-01-22 20:18 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown

Until recently, implementing a new feature in KVM was a matter of
looking at the CPU capability, and enabling the feature if it was
present. Very little effort was put into producing a configuration
that was independent of the host.

That was until we made the ID registers writable. Now, userspace can
configure a VM with an expected set of features, and rely on the
hypervisor to provide the corresponding architectural behaviour
(within the limits of what the architecture can actually enforce, more
on that later). Unfortunately, this last point has so far been left
mostly unaddressed.

Enforcing these constraints is important:

- VMs need to migrate between hosts that implement different feature
  sets. Allowing the guest to create state on one system, and then to
  crash on another, is not an acceptable outcome. Yes, the SW was
  buggy, but it is the migration that broke it, and the hypervisor is
  at fault. Not handling this correctly also has the potential to
  create covert channels, something that I'm sure cloud vendors are
  really eager to put in production.

- Making sure that an unadvertised feature UNDEFs is a way to limit
  unexpected behaviours in an unsuspecting hypervisor. Especially when
  the architecture is as hostile to SW as this one.

- Honouring the VM configuration allows the KVM support to be phased
  in, unless we want to mandate full NV support for everything right
  now.

This patch series aims to plug a number of the existing holes by
providing some basic mechanisms:

(1) add a way to easily parse the guest's ID registers and work out
    whether a particular level of support is advertised. This is
    provided as a small set of basic accessors that simply return a
    boolean indicating whether the supporting condition is satisfied.

(2) offer a way to enforce the effective values of the system
    registers backed by the VNCR page, even if the guest can write
    whatever it sees fit there. This is done by maintaining a set of
    RES0/RES1 masks computed from the guest's feature set, and these
    masks are applied on each access to the in-memory register.

(3) provide a way to force a system register to UNDEF if there is a
    FGT associated with it. These so-called 'Fine-Grained UNDEF' bits
    shadow the FGT and are used to route the resulting exception.

Under the hood, this results in a number of significant changes: the
NV trap handling xarray becomes the basic data structure for handling
traps, even in non-NV configurations, and there are a number of new
data structures to manage the effective state. Oh, and we get some
more debugfs crap (which I'm not necessarily keen on merging).

A lot of effort has been put into computing the configuration only
once and relying on shadow data to enforce the desired behaviour.

Note that not all the possible configurations are handled:

- the surface is pretty large, and I have only tried to come up with
  significant *examples* of how things could be done. There are tons of
  additional work to be done beyond these patches.

- the architecture doesn't always allow some features to be completely
  hidden (any register backed by VNCR is always 'visible' if present
  on the host).

I expect that /some/ of this work could make it into 6.9 and be used
by any new feature added to KVM from that point onwards.

This series is based on v6.8-rc1 plus my E2H0 series, and is a prefix
for the rest of the NV series.

Marc Zyngier (25):
  arm64: sysreg: Add missing ID_AA64ISAR[13]_EL1 fields and variants
  KVM: arm64: Add feature checking helpers
  KVM: arm64: nv: Add sanitising to VNCR-backed sysregs
  KVM: arm64: nv: Add sanitising to EL2 configuration registers
  KVM: arm64: nv: Add sanitising to VNCR-backed FGT sysregs
  KVM: arm64: nv: Add sanitising to VNCR-backed HCRX_EL2
  KVM: arm64: nv: Drop sanitised_sys_reg() helper
  KVM: arm64: Unify HDFG[WR]TR_GROUP FGT identifiers
  KVM: arm64: nv: Correctly handle negative polarity FGTs
  KVM: arm64: nv: Turn encoding ranges into discrete XArray stores
  KVM: arm64: Drop the requirement for XARRAY_MULTI
  KVM: arm64: nv: Move system instructions to their own sys_reg_desc
    array
  KVM: arm64: Always populate the trap configuration xarray
  KVM: arm64: Register AArch64 system register entries with the sysreg
    xarray
  KVM: arm64: Use the xarray as the primary sysreg/sysinsn walker
  KVM: arm64: Rename __check_nv_sr_forward() to triage_sysreg_trap()
  KVM: arm64: Add Fine-Grained UNDEF tracking information
  KVM: arm64: Propagate and handle Fine-Grained UNDEF bits
  KVM: arm64: Move existing feature disabling over to FGU infrastructure
  KVM: arm64: Streamline save/restore of HFG[RW]TR_EL2
  KVM: arm64: Make TLBI OS/Range UNDEF if not advertised to the guest
  KVM: arm64: Make PIR{,E0}_EL1 UNDEF if S1PIE is not advertised to the
    guest
  KVM: arm64: Make AMU sysreg UNDEF if FEAT_AMU is not advertised to the
    guest
  KVM: arm64: Make FEAT_MOPS UNDEF if not advertised to the guest
  KVM: arm64: Add debugfs file for guest's ID registers

 arch/arm64/include/asm/kvm_arm.h           |   4 +-
 arch/arm64/include/asm/kvm_host.h          | 110 +++++++++
 arch/arm64/include/asm/kvm_nested.h        |   1 -
 arch/arm64/kvm/Kconfig                     |   1 -
 arch/arm64/kvm/arm.c                       |   7 +
 arch/arm64/kvm/emulate-nested.c            | 209 ++++++++++++----
 arch/arm64/kvm/hyp/include/hyp/switch.h    | 130 +++++-----
 arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h |  15 +-
 arch/arm64/kvm/nested.c                    | 265 ++++++++++++++++++++-
 arch/arm64/kvm/sys_regs.c                  | 224 ++++++++++++++---
 arch/arm64/kvm/sys_regs.h                  |   2 +
 arch/arm64/tools/sysreg                    |   8 +-
 12 files changed, 824 insertions(+), 152 deletions(-)

-- 
2.39.2


^ permalink raw reply	[flat|nested] 114+ messages in thread

* [PATCH 01/25] arm64: sysreg: Add missing ID_AA64ISAR[13]_EL1 fields and variants
  2024-01-22 20:18 ` Marc Zyngier
@ 2024-01-22 20:18   ` Marc Zyngier
  -1 siblings, 0 replies; 114+ messages in thread
From: Marc Zyngier @ 2024-01-22 20:18 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown

Despite having the control bits for FEAT_SPECRES and FEAT_PACM,
the ID register fields are either incomplete or missing.

Fix it.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/tools/sysreg | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
index fa3fe0856880..53daaaef46cb 100644
--- a/arch/arm64/tools/sysreg
+++ b/arch/arm64/tools/sysreg
@@ -1366,6 +1366,7 @@ EndEnum
 UnsignedEnum	43:40	SPECRES
 	0b0000	NI
 	0b0001	IMP
+	0b0010	COSP_RCTX
 EndEnum
 UnsignedEnum	39:36	SB
 	0b0000	NI
@@ -1492,7 +1493,12 @@ EndEnum
 EndSysreg
 
 Sysreg	ID_AA64ISAR3_EL1	3	0	0	6	3
-Res0	63:12
+Res0	63:16
+UnsignedEnum	15:12	PACM
+	0b0000	NI
+	0b0001	TRIVIAL_IMP
+	0b0010	FULL_IMP
+EndEnum
 UnsignedEnum	11:8	TLBIW
 	0b0000	NI
 	0b0001	IMP
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 114+ messages in thread


* [PATCH 02/25] KVM: arm64: Add feature checking helpers
  2024-01-22 20:18 ` Marc Zyngier
@ 2024-01-22 20:18   ` Marc Zyngier
  -1 siblings, 0 replies; 114+ messages in thread
From: Marc Zyngier @ 2024-01-22 20:18 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown

In order to make it easier to check whether a particular feature
is exposed to a guest, add a new set of helpers, with kvm_has_feat()
being the most useful.

Follow-up work will make heavy use of these.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h | 53 +++++++++++++++++++++++++++++++
 1 file changed, 53 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 21c57b812569..c0cf9c5f5e8d 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1233,4 +1233,57 @@ static inline void kvm_hyp_reserve(void) { }
 void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu);
 bool kvm_arm_vcpu_stopped(struct kvm_vcpu *vcpu);
 
+#define __expand_field_sign_unsigned(id, fld, val)			\
+	((u64)(id##_##fld##_##val))
+
+#define __expand_field_sign_signed(id, fld, val)			\
+	({								\
+		s64 __val = id##_##fld##_##val;				\
+		__val <<= 64 - id##_##fld##_WIDTH;			\
+		__val >>= 64 - id##_##fld##_WIDTH;			\
+									\
+		__val;							\
+	})
+
+#define expand_field_sign(id, fld, val)					\
+	(id##_##fld##_SIGNED ?						\
+	 __expand_field_sign_signed(id, fld, val) :			\
+	 __expand_field_sign_unsigned(id, fld, val))
+
+#define get_idreg_field_unsigned(kvm, id, fld)				\
+	({								\
+		u64 __val = IDREG(kvm, SYS_##id);			\
+		__val &= id##_##fld##_MASK;				\
+		__val >>= id##_##fld##_SHIFT;				\
+									\
+		__val;							\
+	})
+
+#define get_idreg_field_signed(kvm, id, fld)				\
+	({								\
+		s64 __val = IDREG(kvm, SYS_##id);			\
+		__val <<= 64 - id##_##fld##_SHIFT - id##_##fld##_WIDTH;	\
+		__val >>= 64 - id##_##fld##_WIDTH;			\
+									\
+		__val;							\
+	})
+
+#define get_idreg_field_enum(kvm, id, fld)				\
+	get_idreg_field_unsigned(kvm, id, fld)
+
+#define get_idreg_field(kvm, id, fld)					\
+	(id##_##fld##_SIGNED ?						\
+	 get_idreg_field_signed(kvm, id, fld) :				\
+	 get_idreg_field_unsigned(kvm, id, fld))
+
+#define kvm_has_feat(kvm, id, fld, limit)				\
+	(get_idreg_field((kvm), id, fld) >= expand_field_sign(id, fld, limit))
+
+#define kvm_has_feat_enum(kvm, id, fld, limit)				\
+	(get_idreg_field_unsigned((kvm), id, fld) == id##_##fld##_##limit)
+
+#define kvm_has_feat_range(kvm, id, fld, min, max)			\
+	(get_idreg_field((kvm), id, fld) >= expand_field_sign(id, fld, min) && \
+	 get_idreg_field((kvm), id, fld) <= expand_field_sign(id, fld, max))
+
 #endif /* __ARM64_KVM_HOST_H__ */
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 114+ messages in thread


* [PATCH 03/25] KVM: arm64: nv: Add sanitising to VNCR-backed sysregs
  2024-01-22 20:18 ` Marc Zyngier
@ 2024-01-22 20:18   ` Marc Zyngier
  -1 siblings, 0 replies; 114+ messages in thread
From: Marc Zyngier @ 2024-01-22 20:18 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown

VNCR-backed "registers" are actually only memory. Which means that
there is zero control over what the guest can write, and that it
is the hypervisor's job to actually sanitise the content of the
backing store. Yeah, this is fun.

In order to preserve some form of sanity, add a repainting mechanism
that makes use of a per-VM set of RES0/RES1 masks, one pair per VNCR
register. These masks get applied on access to the backing store via
__vcpu_sys_reg(), ensuring that the state that is consumed by KVM is
correct.

So far, nothing populates these masks, but stay tuned.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h | 25 +++++++++++++++++++
 arch/arm64/kvm/arm.c              |  1 +
 arch/arm64/kvm/nested.c           | 41 ++++++++++++++++++++++++++++++-
 3 files changed, 66 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index c0cf9c5f5e8d..fe35c59214ad 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -238,6 +238,8 @@ static inline u16 kvm_mpidr_index(struct kvm_mpidr_data *data, u64 mpidr)
 	return index;
 }
 
+struct kvm_sysreg_masks;
+
 struct kvm_arch {
 	struct kvm_s2_mmu mmu;
 
@@ -312,6 +314,9 @@ struct kvm_arch {
 #define KVM_ARM_ID_REG_NUM	(IDREG_IDX(sys_reg(3, 0, 0, 7, 7)) + 1)
 	u64 id_regs[KVM_ARM_ID_REG_NUM];
 
+	/* Masks for VNCR-backed sysregs */
+	struct kvm_sysreg_masks	*sysreg_masks;
+
 	/*
 	 * For an untrusted host VM, 'pkvm.handle' is used to lookup
 	 * the associated pKVM instance in the hypervisor.
@@ -474,6 +479,13 @@ enum vcpu_sysreg {
 	NR_SYS_REGS	/* Nothing after this line! */
 };
 
+struct kvm_sysreg_masks {
+	struct {
+		u64	res0;
+		u64	res1;
+	} mask[NR_SYS_REGS - __VNCR_START__];
+};
+
 struct kvm_cpu_context {
 	struct user_pt_regs regs;	/* sp = sp_el0 */
 
@@ -868,7 +880,20 @@ static inline u64 *__ctxt_sys_reg(const struct kvm_cpu_context *ctxt, int r)
 
 #define ctxt_sys_reg(c,r)	(*__ctxt_sys_reg(c,r))
 
+#if defined (__KVM_NVHE_HYPERVISOR__)
 #define __vcpu_sys_reg(v,r)	(ctxt_sys_reg(&(v)->arch.ctxt, (r)))
+#else
+u64 kvm_vcpu_sanitise_vncr_reg(const struct kvm_vcpu *, enum vcpu_sysreg);
+#define __vcpu_sys_reg(v,r)						\
+	(*({								\
+		const struct kvm_cpu_context *ctxt = &(v)->arch.ctxt;	\
+		u64 *__r = __ctxt_sys_reg(ctxt, (r));			\
+		if (unlikely(cpus_have_final_cap(ARM64_HAS_NESTED_VIRT) && \
+			     r >= __VNCR_START__ && ctxt->vncr_array))	\
+			*__r = kvm_vcpu_sanitise_vncr_reg((v), (r));	\
+		__r;							\
+	}))
+#endif
 
 u64 vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg);
 void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg);
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index a25265aca432..c063e84fc72c 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -206,6 +206,7 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
 		pkvm_destroy_hyp_vm(kvm);
 
 	kfree(kvm->arch.mpidr_data);
+	kfree(kvm->arch.sysreg_masks);
 	kvm_destroy_vcpus(kvm);
 
 	kvm_unshare_hyp(kvm, kvm + 1);
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index d55e809e26cb..c976cd4b8379 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -163,15 +163,54 @@ static u64 limit_nv_id_reg(u32 id, u64 val)
 
 	return val;
 }
+
+u64 kvm_vcpu_sanitise_vncr_reg(const struct kvm_vcpu *vcpu, enum vcpu_sysreg sr)
+{
+	u64 v = ctxt_sys_reg(&vcpu->arch.ctxt, sr);
+	struct kvm_sysreg_masks *masks;
+
+	masks = vcpu->kvm->arch.sysreg_masks;
+
+	if (masks) {
+		sr -= __VNCR_START__;
+
+		v &= ~masks->mask[sr].res0;
+		v |= masks->mask[sr].res1;
+	}
+
+	return v;
+}
+
+static void __maybe_unused set_sysreg_masks(struct kvm *kvm, int sr, u64 res0, u64 res1)
+{
+	int i = sr - __VNCR_START__;
+
+	kvm->arch.sysreg_masks->mask[i].res0 = res0;
+	kvm->arch.sysreg_masks->mask[i].res1 = res1;
+}
+
 int kvm_init_nv_sysregs(struct kvm *kvm)
 {
+	int ret = 0;
+
 	mutex_lock(&kvm->arch.config_lock);
 
+	if (kvm->arch.sysreg_masks)
+		goto out;
+
+	kvm->arch.sysreg_masks = kzalloc(sizeof(*(kvm->arch.sysreg_masks)),
+					 GFP_KERNEL);
+	if (!kvm->arch.sysreg_masks) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
 	for (int i = 0; i < KVM_ARM_ID_REG_NUM; i++)
 		kvm->arch.id_regs[i] = limit_nv_id_reg(IDX_IDREG(i),
 						       kvm->arch.id_regs[i]);
 
+out:
 	mutex_unlock(&kvm->arch.config_lock);
 
-	return 0;
+	return ret;
 }
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 114+ messages in thread


* [PATCH 04/25] KVM: arm64: nv: Add sanitising to EL2 configuration registers
  2024-01-22 20:18 ` Marc Zyngier
@ 2024-01-22 20:18   ` Marc Zyngier
  -1 siblings, 0 replies; 114+ messages in thread
From: Marc Zyngier @ 2024-01-22 20:18 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown

We can now start making use of our sanitising masks by setting them
to values that depend on the guest's configuration.

First up are VTTBR_EL2, VTCR_EL2, VMPIDR_EL2 and HCR_EL2.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/nested.c | 56 ++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 55 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index c976cd4b8379..ee461e630527 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -181,7 +181,7 @@ u64 kvm_vcpu_sanitise_vncr_reg(const struct kvm_vcpu *vcpu, enum vcpu_sysreg sr)
 	return v;
 }
 
-static void __maybe_unused set_sysreg_masks(struct kvm *kvm, int sr, u64 res0, u64 res1)
+static void set_sysreg_masks(struct kvm *kvm, int sr, u64 res0, u64 res1)
 {
 	int i = sr - __VNCR_START__;
 
@@ -191,6 +191,7 @@ static void __maybe_unused set_sysreg_masks(struct kvm *kvm, int sr, u64 res0, u
 
 int kvm_init_nv_sysregs(struct kvm *kvm)
 {
+	u64 res0, res1;
 	int ret = 0;
 
 	mutex_lock(&kvm->arch.config_lock);
@@ -209,6 +210,59 @@ int kvm_init_nv_sysregs(struct kvm *kvm)
 		kvm->arch.id_regs[i] = limit_nv_id_reg(IDX_IDREG(i),
 						       kvm->arch.id_regs[i]);
 
+	/* VTTBR_EL2 */
+	res0 = res1 = 0;
+	if (!kvm_has_feat_enum(kvm, ID_AA64MMFR1_EL1, VMIDBits, 16))
+		res0 |= GENMASK(63, 56);
+	set_sysreg_masks(kvm, VTTBR_EL2, res0, res1);
+
+	/* VTCR_EL2 */
+	res0 = GENMASK(63, 32) | GENMASK(30, 20);
+	res1 = BIT(31);
+	set_sysreg_masks(kvm, VTCR_EL2, res0, res1);
+
+	/* VMPIDR_EL2 */
+	res0 = GENMASK(63, 40) | GENMASK(30, 24);
+	res1 = BIT(31);
+	set_sysreg_masks(kvm, VMPIDR_EL2, res0, res1);
+
+	/* HCR_EL2 */
+	res0 = BIT(48);
+	res1 = HCR_RW;
+	if (!kvm_has_feat(kvm, ID_AA64MMFR1_EL1, TWED, IMP))
+		res0 |= GENMASK(63, 59);
+	if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, MTE, MTE2))
+		res0 |= (HCR_TID5 | HCR_DCT | HCR_ATA);
+	if (!kvm_has_feat(kvm, ID_AA64MMFR2_EL1, EVT, TTLBxS))
+		res0 |= (HCR_TTLBIS | HCR_TTLBOS);
+	if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, CSV2, CSV2_2) &&
+	    !kvm_has_feat(kvm, ID_AA64PFR1_EL1, CSV2_frac, CSV2_1p2))
+		res0 |= HCR_ENSCXT;
+	if (!kvm_has_feat(kvm, ID_AA64MMFR2_EL1, EVT, IMP))
+		res0 |= (HCR_TID4 | HCR_TICAB | HCR_TOCU);
+	if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, AMU, V1P1))
+		res0 |= HCR_AMVOFFEN;
+	if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, RAS, V1P1))
+		res0 |= HCR_FIEN;
+	if (!kvm_has_feat(kvm, ID_AA64MMFR2_EL1, FWB, IMP))
+		res0 |= HCR_FWB;
+	if (!kvm_has_feat(kvm, ID_AA64MMFR2_EL1, NV, NV2))
+		res0 |= HCR_NV2;
+	if (!kvm_has_feat(kvm, ID_AA64MMFR2_EL1, NV, IMP))
+		res0 |= (HCR_AT | HCR_NV1 | HCR_NV);
+	if (!(__vcpu_has_feature(&kvm->arch, KVM_ARM_VCPU_PTRAUTH_ADDRESS) &&
+	      __vcpu_has_feature(&kvm->arch, KVM_ARM_VCPU_PTRAUTH_GENERIC)))
+		res0 |= (HCR_API | HCR_APK);
+	if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TME, IMP))
+		res0 |= BIT(39);
+	if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, RAS, IMP))
+		res0 |= (HCR_TERR | HCR_TEA);
+	if (!kvm_has_feat(kvm, ID_AA64MMFR1_EL1, LO, IMP))
+		res0 |= HCR_TLOR;
+	if (!kvm_has_feat(kvm, ID_AA64MMFR4_EL1, E2H0, IMP))
+		res1 |= HCR_E2H;
+	set_sysreg_masks(kvm, HCR_EL2, res0, res1);
+
 out:
 	mutex_unlock(&kvm->arch.config_lock);
 
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 114+ messages in thread
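
The sanitising mechanism these masks feed (kvm_vcpu_sanitise_vncr_reg(), shown in the patch 03 hunk) boils down to clearing RES0 bits and forcing RES1 bits on every read. A self-contained model of that core operation, with illustrative mask values rather than real HCR_EL2 layouts:

```c
#include <assert.h>
#include <stdint.h>

/* Per-register sanitising masks, mirroring struct kvm_sysreg_masks:
 * res0 = bits the VM configuration says must read as 0,
 * res1 = bits that must read as 1. */
struct reg_mask {
	uint64_t res0;
	uint64_t res1;
};

/* Equivalent of the v &= ~res0; v |= res1; sequence in
 * kvm_vcpu_sanitise_vncr_reg(). */
static uint64_t sanitise_reg(uint64_t v, const struct reg_mask *m)
{
	return (v & ~m->res0) | m->res1;
}
```

Whatever value the guest managed to write, the bits an unadvertised feature would control are stripped on the way out, which is what lets the rest of KVM consume the register without re-checking the feature set.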

* [PATCH 05/25] KVM: arm64: nv: Add sanitising to VNCR-backed FGT sysregs
  2024-01-22 20:18 ` Marc Zyngier
@ 2024-01-22 20:18   ` Marc Zyngier
  -1 siblings, 0 replies; 114+ messages in thread
From: Marc Zyngier @ 2024-01-22 20:18 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown

Fine Grained Traps are controlled by a whole bunch of features.
Each one of them must be checked and the corresponding masks
computed so that we don't let the guest apply traps it shouldn't
be using.

This takes care of HFGxTR_EL2, HDFGxTR_EL2, and HAFGRTR_EL2.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/nested.c | 128 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 128 insertions(+)

diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index ee461e630527..cdeef3259193 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -263,6 +263,134 @@ int kvm_init_nv_sysregs(struct kvm *kvm)
 		res1 |= HCR_E2H;
 	set_sysreg_masks(kvm, HCR_EL2, res0, res1);
 
+	/* HFG[RW]TR_EL2 */
+	res0 = res1 = 0;
+	if (!(__vcpu_has_feature(&kvm->arch, KVM_ARM_VCPU_PTRAUTH_ADDRESS) &&
+	      __vcpu_has_feature(&kvm->arch, KVM_ARM_VCPU_PTRAUTH_GENERIC)))
+		res0 |= (HFGxTR_EL2_APDAKey | HFGxTR_EL2_APDBKey |
+			 HFGxTR_EL2_APGAKey | HFGxTR_EL2_APIAKey |
+			 HFGxTR_EL2_APIBKey);
+	if (!kvm_has_feat(kvm, ID_AA64MMFR1_EL1, LO, IMP))
+		res0 |= (HFGxTR_EL2_LORC_EL1 | HFGxTR_EL2_LOREA_EL1 |
+			 HFGxTR_EL2_LORID_EL1 | HFGxTR_EL2_LORN_EL1 |
+			 HFGxTR_EL2_LORSA_EL1);
+	if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, CSV2, CSV2_2) &&
+	    !kvm_has_feat(kvm, ID_AA64PFR1_EL1, CSV2_frac, CSV2_1p2))
+		res0 |= (HFGxTR_EL2_SCXTNUM_EL1 | HFGxTR_EL2_SCXTNUM_EL0);
+	if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, GIC, IMP))
+		res0 |= HFGxTR_EL2_ICC_IGRPENn_EL1;
+	if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, RAS, IMP))
+		res0 |= (HFGxTR_EL2_ERRIDR_EL1 | HFGxTR_EL2_ERRSELR_EL1 |
+			 HFGxTR_EL2_ERXFR_EL1 | HFGxTR_EL2_ERXCTLR_EL1 |
+			 HFGxTR_EL2_ERXSTATUS_EL1 | HFGxTR_EL2_ERXMISCn_EL1 |
+			 HFGxTR_EL2_ERXPFGF_EL1 | HFGxTR_EL2_ERXPFGCTL_EL1 |
+			 HFGxTR_EL2_ERXPFGCDN_EL1 | HFGxTR_EL2_ERXADDR_EL1);
+	if (!kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64_ACCDATA))
+		res0 |= HFGxTR_EL2_nACCDATA_EL1;
+	if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, GCS, IMP))
+		res0 |= (HFGxTR_EL2_nGCS_EL0 | HFGxTR_EL2_nGCS_EL1);
+	if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, SME, IMP))
+		res0 |= (HFGxTR_EL2_nSMPRI_EL1 | HFGxTR_EL2_nTPIDR2_EL0);
+	if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, THE, IMP))
+		res0 |= HFGxTR_EL2_nRCWMASK_EL1;
+	if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, S1PIE, IMP))
+		res0 |= (HFGxTR_EL2_nPIRE0_EL1 | HFGxTR_EL2_nPIR_EL1);
+	if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, S1POE, IMP))
+		res0 |= (HFGxTR_EL2_nPOR_EL0 | HFGxTR_EL2_nPOR_EL1);
+	if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, S2POE, IMP))
+		res0 |= HFGxTR_EL2_nS2POR_EL1;
+	if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, AIE, IMP))
+		res0 |= (HFGxTR_EL2_nMAIR2_EL1 | HFGxTR_EL2_nAMAIR2_EL1);
+	set_sysreg_masks(kvm, HFGRTR_EL2, res0 | __HFGRTR_EL2_RES0, res1);
+	set_sysreg_masks(kvm, HFGWTR_EL2, res0 | __HFGWTR_EL2_RES0, res1);
+
+	/* HDFG[RW]TR_EL2 */
+	res0 = res1 = 0;
+	if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, DoubleLock, IMP))
+		res0 |= HDFGRTR_EL2_OSDLR_EL1;
+	if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, PMUVer, IMP))
+		res0 |= (HDFGRTR_EL2_PMEVCNTRn_EL0 | HDFGRTR_EL2_PMEVTYPERn_EL0 |
+			 HDFGRTR_EL2_PMCCFILTR_EL0 | HDFGRTR_EL2_PMCCNTR_EL0 |
+			 HDFGRTR_EL2_PMCNTEN | HDFGRTR_EL2_PMINTEN |
+			 HDFGRTR_EL2_PMOVS | HDFGRTR_EL2_PMSELR_EL0 |
+			 HDFGRTR_EL2_PMMIR_EL1 | HDFGRTR_EL2_PMUSERENR_EL0 |
+			 HDFGRTR_EL2_PMCEIDn_EL0);
+	if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, PMSVer, IMP))
+		res0 |= (HDFGRTR_EL2_PMBLIMITR_EL1 | HDFGRTR_EL2_PMBPTR_EL1 |
+			 HDFGRTR_EL2_PMBSR_EL1 | HDFGRTR_EL2_PMSCR_EL1 |
+			 HDFGRTR_EL2_PMSEVFR_EL1 | HDFGRTR_EL2_PMSFCR_EL1 |
+			 HDFGRTR_EL2_PMSICR_EL1 | HDFGRTR_EL2_PMSIDR_EL1 |
+			 HDFGRTR_EL2_PMSIRR_EL1 | HDFGRTR_EL2_PMSLATFR_EL1 |
+			 HDFGRTR_EL2_PMBIDR_EL1);
+	if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, TraceVer, IMP))
+		res0 |= (HDFGRTR_EL2_TRC | HDFGRTR_EL2_TRCAUTHSTATUS |
+			 HDFGRTR_EL2_TRCAUXCTLR | HDFGRTR_EL2_TRCCLAIM |
+			 HDFGRTR_EL2_TRCCNTVRn | HDFGRTR_EL2_TRCID |
+			 HDFGRTR_EL2_TRCIMSPECn | HDFGRTR_EL2_TRCOSLSR |
+			 HDFGRTR_EL2_TRCPRGCTLR | HDFGRTR_EL2_TRCSEQSTR |
+			 HDFGRTR_EL2_TRCSSCSRn | HDFGRTR_EL2_TRCSTATR |
+			 HDFGRTR_EL2_TRCVICTLR);
+	if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, TraceBuffer, IMP))
+		res0 |= (HDFGRTR_EL2_TRBBASER_EL1 | HDFGRTR_EL2_TRBIDR_EL1 |
+			 HDFGRTR_EL2_TRBLIMITR_EL1 | HDFGRTR_EL2_TRBMAR_EL1 |
+			 HDFGRTR_EL2_TRBPTR_EL1 | HDFGRTR_EL2_TRBSR_EL1 |
+			 HDFGRTR_EL2_TRBTRG_EL1);
+	if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, BRBE, IMP))
+		res0 |= (HDFGRTR_EL2_nBRBIDR | HDFGRTR_EL2_nBRBCTL |
+			 HDFGRTR_EL2_nBRBDATA);
+	if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, PMSVer, V1P2))
+		res0 |= HDFGRTR_EL2_nPMSNEVFR_EL1;
+	set_sysreg_masks(kvm, HDFGRTR_EL2, res0 | HDFGRTR_EL2_RES0, res1);
+
+	/* Reuse the bits from the read-side and add the write-specific stuff */
+	if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, PMUVer, IMP))
+		res0 |= (HDFGWTR_EL2_PMCR_EL0 | HDFGWTR_EL2_PMSWINC_EL0);
+	if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, TraceVer, IMP))
+		res0 |= HDFGWTR_EL2_TRCOSLAR;
+	if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, TraceFilt, IMP))
+		res0 |= HDFGWTR_EL2_TRFCR_EL1;
+	set_sysreg_masks(kvm, HDFGWTR_EL2, res0 | HDFGWTR_EL2_RES0, res1);
+
+	/* HFGITR_EL2 */
+	res0 = HFGITR_EL2_RES0;
+	res1 = HFGITR_EL2_RES1;
+	if (!kvm_has_feat(kvm, ID_AA64ISAR1_EL1, DPB, DPB2))
+		res0 |= HFGITR_EL2_DCCVADP;
+	if (!kvm_has_feat(kvm, ID_AA64MMFR1_EL1, PAN, PAN2))
+		res0 |= (HFGITR_EL2_ATS1E1RP | HFGITR_EL2_ATS1E1WP);
+	if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, OS))
+		res0 |= (HFGITR_EL2_TLBIRVAALE1OS | HFGITR_EL2_TLBIRVALE1OS |
+			 HFGITR_EL2_TLBIRVAAE1OS | HFGITR_EL2_TLBIRVAE1OS |
+			 HFGITR_EL2_TLBIVAALE1OS | HFGITR_EL2_TLBIVALE1OS |
+			 HFGITR_EL2_TLBIVAAE1OS | HFGITR_EL2_TLBIASIDE1OS |
+			 HFGITR_EL2_TLBIVAE1OS | HFGITR_EL2_TLBIVMALLE1OS);
+	if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, RANGE))
+		res0 |= (HFGITR_EL2_TLBIRVAALE1 | HFGITR_EL2_TLBIRVALE1 |
+			 HFGITR_EL2_TLBIRVAAE1 | HFGITR_EL2_TLBIRVAE1 |
+			 HFGITR_EL2_TLBIRVAALE1IS | HFGITR_EL2_TLBIRVALE1IS |
+			 HFGITR_EL2_TLBIRVAAE1IS | HFGITR_EL2_TLBIRVAE1IS |
+			 HFGITR_EL2_TLBIRVAALE1OS | HFGITR_EL2_TLBIRVALE1OS |
+			 HFGITR_EL2_TLBIRVAAE1OS | HFGITR_EL2_TLBIRVAE1OS);
+	if (!kvm_has_feat(kvm, ID_AA64ISAR1_EL1, SPECRES, IMP))
+		res0 |= (HFGITR_EL2_CFPRCTX | HFGITR_EL2_DVPRCTX |
+			 HFGITR_EL2_CPPRCTX);
+	if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, BRBE, IMP))
+		res0 |= (HFGITR_EL2_nBRBINJ | HFGITR_EL2_nBRBIALL);
+	if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, GCS, IMP))
+		res0 |= (HFGITR_EL2_nGCSPUSHM_EL1 | HFGITR_EL2_nGCSSTR_EL1 |
+			 HFGITR_EL2_nGCSEPP);
+	if (!kvm_has_feat(kvm, ID_AA64ISAR1_EL1, SPECRES, COSP_RCTX))
+		res0 |= HFGITR_EL2_COSPRCTX;
+	if (!kvm_has_feat(kvm, ID_AA64ISAR2_EL1, ATS1A, IMP))
+		res0 |= HFGITR_EL2_ATS1E1A;
+	set_sysreg_masks(kvm, HFGITR_EL2, res0, res1);
+
+	/* HAFGRTR_EL2 - not a lot to see here */
+	res0 = HAFGRTR_EL2_RES0;
+	res1 = HAFGRTR_EL2_RES1;
+	if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, AMU, V1P1))
+		res0 |= ~(res0 | res1);
+	set_sysreg_masks(kvm, HAFGRTR_EL2, res0, res1);
 out:
 	mutex_unlock(&kvm->arch.config_lock);
 
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 114+ messages in thread

* [PATCH 06/25] KVM: arm64: nv: Add sanitising to VNCR-backed HCRX_EL2
  2024-01-22 20:18 ` Marc Zyngier
@ 2024-01-22 20:18   ` Marc Zyngier
  -1 siblings, 0 replies; 114+ messages in thread
From: Marc Zyngier @ 2024-01-22 20:18 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown

Just like its little friends, HCRX_EL2 gets the feature set treatment
when backed by VNCR.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/nested.c | 42 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)

diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index cdeef3259193..72db632b115a 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -263,6 +263,48 @@ int kvm_init_nv_sysregs(struct kvm *kvm)
 		res1 |= HCR_E2H;
 	set_sysreg_masks(kvm, HCR_EL2, res0, res1);
 
+	/* HCRX_EL2 */
+	res0 = HCRX_EL2_RES0;
+	res1 = HCRX_EL2_RES1;
+	if (!kvm_has_feat(kvm, ID_AA64ISAR3_EL1, PACM, TRIVIAL_IMP))
+		res0 |= HCRX_EL2_PACMEn;
+	if (!kvm_has_feat(kvm, ID_AA64PFR2_EL1, FPMR, IMP))
+		res0 |= HCRX_EL2_EnFPM;
+	if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, GCS, IMP))
+		res0 |= HCRX_EL2_GCSEn;
+	if (!kvm_has_feat(kvm, ID_AA64ISAR2_EL1, SYSREG_128, IMP))
+		res0 |= HCRX_EL2_EnIDCP128;
+	if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, ADERR, DEV_ASYNC))
+		res0 |= (HCRX_EL2_EnSDERR | HCRX_EL2_EnSNERR);
+	if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, DF2, IMP))
+		res0 |= HCRX_EL2_TMEA;
+	if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, D128, IMP))
+		res0 |= HCRX_EL2_D128En;
+	if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, THE, IMP))
+		res0 |= HCRX_EL2_PTTWI;
+	if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, SCTLRX, IMP))
+		res0 |= HCRX_EL2_SCTLR2En;
+	if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, TCRX, IMP))
+		res0 |= HCRX_EL2_TCR2En;
+	if (!kvm_has_feat(kvm, ID_AA64ISAR2_EL1, MOPS, IMP))
+		res0 |= (HCRX_EL2_MSCEn | HCRX_EL2_MCE2);
+	if (!kvm_has_feat(kvm, ID_AA64MMFR1_EL1, CMOW, IMP))
+		res0 |= HCRX_EL2_CMOW;
+	if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, NMI, IMP))
+		res0 |= (HCRX_EL2_VFNMI | HCRX_EL2_VINMI | HCRX_EL2_TALLINT);
+	if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, SME, IMP) ||
+	    !(read_sysreg_s(SYS_SMIDR_EL1) & SMIDR_EL1_SMPS))
+		res0 |= HCRX_EL2_SMPME;
+	if (!kvm_has_feat(kvm, ID_AA64ISAR1_EL1, XS, IMP))
+		res0 |= (HCRX_EL2_FGTnXS | HCRX_EL2_FnXS);
+	if (!kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64_V))
+		res0 |= HCRX_EL2_EnASR;
+	if (!kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64))
+		res0 |= HCRX_EL2_EnALS;
+	if (!kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64_ACCDATA))
+		res0 |= HCRX_EL2_EnAS0;
+	set_sysreg_masks(kvm, HCRX_EL2, res0, res1);
+
 	/* HFG[RW]TR_EL2 */
 	res0 = res1 = 0;
 	if (!(__vcpu_has_feature(&kvm->arch, KVM_ARM_VCPU_PTRAUTH_ADDRESS) &&
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 114+ messages in thread
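
The HCRX_EL2 hunk above is a long chain of "feature absent, so its control bits become RES0" checks. The shape of that accumulation can be sketched in isolation; the feature names and bit positions below are purely illustrative, not the architectural HCRX_EL2 layout:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative trap-control bits (NOT real HCRX_EL2 positions) */
#define X_GCSEn		(1ULL << 22)
#define X_MSCEn		(1ULL << 11)
#define X_MCE2		(1ULL << 10)

/* Build a RES0 mask from the advertised feature set: each feature the
 * VM doesn't have contributes its control bits, the same way each
 * !kvm_has_feat() check does in kvm_init_nv_sysregs(). */
static uint64_t build_res0(bool has_gcs, bool has_mops)
{
	uint64_t res0 = 0;

	if (!has_gcs)
		res0 |= X_GCSEn;
	if (!has_mops)
		res0 |= (X_MSCEn | X_MCE2);
	return res0;
}

static uint64_t sanitise(uint64_t v, uint64_t res0, uint64_t res1)
{
	return (v & ~res0) | res1;
}
```

With GCS absent and MOPS present, a guest setting every bit still reads back the GCS enable as zero while the MOPS bits survive, so the guest cannot enable behaviour its ID registers never advertised.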

* [PATCH 07/25] KVM: arm64: nv: Drop sanitised_sys_reg() helper
  2024-01-22 20:18 ` Marc Zyngier
@ 2024-01-22 20:18   ` Marc Zyngier
  -1 siblings, 0 replies; 114+ messages in thread
From: Marc Zyngier @ 2024-01-22 20:18 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown

Now that we have the infrastructure to enforce a sanitised register
value depending on the VM configuration, drop the helper that only
used the architectural RES0 value.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/emulate-nested.c | 22 +++++++---------------
 1 file changed, 7 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index 431fd429932d..7a4a886adb9d 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -1897,14 +1897,6 @@ static bool check_fgt_bit(u64 val, const union trap_config tc)
 	return ((val >> tc.bit) & 1) == tc.pol;
 }
 
-#define sanitised_sys_reg(vcpu, reg)			\
-	({						\
-		u64 __val;				\
-		__val = __vcpu_sys_reg(vcpu, reg);	\
-		__val &= ~__ ## reg ## _RES0;		\
-		(__val);				\
-	})
-
 bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
 {
 	union trap_config tc;
@@ -1940,25 +1932,25 @@ bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
 
 	case HFGxTR_GROUP:
 		if (is_read)
-			val = sanitised_sys_reg(vcpu, HFGRTR_EL2);
+			val = __vcpu_sys_reg(vcpu, HFGRTR_EL2);
 		else
-			val = sanitised_sys_reg(vcpu, HFGWTR_EL2);
+			val = __vcpu_sys_reg(vcpu, HFGWTR_EL2);
 		break;
 
 	case HDFGRTR_GROUP:
 	case HDFGWTR_GROUP:
 		if (is_read)
-			val = sanitised_sys_reg(vcpu, HDFGRTR_EL2);
+			val = __vcpu_sys_reg(vcpu, HDFGRTR_EL2);
 		else
-			val = sanitised_sys_reg(vcpu, HDFGWTR_EL2);
+			val = __vcpu_sys_reg(vcpu, HDFGWTR_EL2);
 		break;
 
 	case HAFGRTR_GROUP:
-		val = sanitised_sys_reg(vcpu, HAFGRTR_EL2);
+		val = __vcpu_sys_reg(vcpu, HAFGRTR_EL2);
 		break;
 
 	case HFGITR_GROUP:
-		val = sanitised_sys_reg(vcpu, HFGITR_EL2);
+		val = __vcpu_sys_reg(vcpu, HFGITR_EL2);
 		switch (tc.fgf) {
 			u64 tmp;
 
@@ -1966,7 +1958,7 @@ bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
 			break;
 
 		case HCRX_FGTnXS:
-			tmp = sanitised_sys_reg(vcpu, HCRX_EL2);
+			tmp = __vcpu_sys_reg(vcpu, HCRX_EL2);
 			if (tmp & HCRX_EL2_FGTnXS)
 				tc.fgt = __NO_FGT_GROUP__;
 		}
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 114+ messages in thread

* [PATCH 08/25] KVM: arm64: Unify HDFG[WR]TR_GROUP FGT identifiers
  2024-01-22 20:18 ` Marc Zyngier
@ 2024-01-22 20:18   ` Marc Zyngier
  -1 siblings, 0 replies; 114+ messages in thread
From: Marc Zyngier @ 2024-01-22 20:18 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown

There is no reason to have separate FGT group identifiers for
the fine-grained debug trapping. The sole requirement is to provide
the *names* so that the SR_FGF() macro can do its magic of picking
the correct bit definition.

So let's alias HDFGWTR_GROUP and HDFGRTR_GROUP.
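
The aliasing pattern, sketched standalone (enum names mirror the diff in this patch; the helper is illustrative): once two enumerators share a value, a single case label serves both names, and keeping both labels would be a duplicate-case compile error — which is why the HDFGWTR_GROUP case goes away.

```c
enum fgt_group_id {
	__NO_FGT_GROUP__,
	HFGxTR_GROUP,
	HDFGRTR_GROUP,
	HDFGWTR_GROUP = HDFGRTR_GROUP,	/* explicit alias */
	HFGITR_GROUP,
	HAFGRTR_GROUP,
};

/* Both names now select the same switch arm */
static int is_debug_fgt(enum fgt_group_id id)
{
	switch (id) {
	case HDFGRTR_GROUP:
		return 1;
	default:
		return 0;
	}
}
```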

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/emulate-nested.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index 7a4a886adb9d..8a1cfcf553a2 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -1010,7 +1010,7 @@ enum fgt_group_id {
 	__NO_FGT_GROUP__,
 	HFGxTR_GROUP,
 	HDFGRTR_GROUP,
-	HDFGWTR_GROUP,
+	HDFGWTR_GROUP = HDFGRTR_GROUP,
 	HFGITR_GROUP,
 	HAFGRTR_GROUP,
 
@@ -1938,7 +1938,6 @@ bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
 		break;
 
 	case HDFGRTR_GROUP:
-	case HDFGWTR_GROUP:
 		if (is_read)
 			val = __vcpu_sys_reg(vcpu, HDFGRTR_EL2);
 		else
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 114+ messages in thread

* [PATCH 09/25] KVM: arm64: nv: Correctly handle negative polarity FGTs
  2024-01-22 20:18 ` Marc Zyngier
@ 2024-01-22 20:18   ` Marc Zyngier
  -1 siblings, 0 replies; 114+ messages in thread
From: Marc Zyngier @ 2024-01-22 20:18 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown

Negative trap bits are a massive pain. They are, on the surface,
indistinguishable from RES0 bits. Do you trap? or do you ignore?

Thankfully, we now have the right infrastructure to check for RES0
bits as long as the register is backed by VNCR, which is the case
for the FGT registers.

Use that information as a discriminant when handling a trap that
is potentially caused by a FGT.
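
A simplified sketch of the resulting check (names and signature are illustrative, condensed from the patch): positive-polarity bits trap when set; for negative polarity, a clear bit only means "trap" if the bit is actually implemented, which the per-VM RES0 mask decides.

```c
#include <stdbool.h>
#include <stdint.h>

#define BIT(n)	(1ULL << (n))

static bool fgt_bit_traps(uint64_t reg, unsigned int bit, bool pos_pol,
			  uint64_t vm_res0)
{
	if (pos_pol)
		return reg & BIT(bit);

	if (reg & BIT(bit))
		return false;	/* negative-polarity bit set: trap disabled */

	/* bit clear: trap only if the bit isn't RES0 for this VM */
	return !(vm_res0 & BIT(bit));
}
```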

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/emulate-nested.c | 59 +++++++++++++++++++++++++++++++--
 1 file changed, 56 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index 8a1cfcf553a2..ef46c2e45307 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -1892,9 +1892,61 @@ static enum trap_behaviour compute_trap_behaviour(struct kvm_vcpu *vcpu,
 	return __compute_trap_behaviour(vcpu, tc.cgt, b);
 }
 
-static bool check_fgt_bit(u64 val, const union trap_config tc)
+static u64 kvm_get_sysreg_res0(struct kvm *kvm, enum vcpu_sysreg sr)
 {
-	return ((val >> tc.bit) & 1) == tc.pol;
+	struct kvm_sysreg_masks *masks;
+
+	/* Only handle the VNCR-backed regs for now */
+	if (sr < __VNCR_START__)
+		return 0;
+
+	masks = kvm->arch.sysreg_masks;
+
+	return masks->mask[sr - __VNCR_START__].res0;
+}
+
+static bool check_fgt_bit(struct kvm *kvm, bool is_read,
+			  u64 val, const union trap_config tc)
+{
+	enum vcpu_sysreg sr;
+
+	if (tc.pol)
+		return (val & BIT(tc.bit));
+
+	/*
+	 * FGTs with negative polarities are an absolute nightmare, as
+	 * we need to evaluate the bit in the light of the feature
+	 * that defines it. WTF were they thinking?
+	 *
+	 * So let's check if the bit has been earmarked as RES0, as
+	 * this indicates an unimplemented feature.
+	 */
+	if (val & BIT(tc.bit))
+		return false;
+
+	switch ((enum fgt_group_id)tc.fgt) {
+	case HFGxTR_GROUP:
+		sr = is_read ? HFGRTR_EL2 : HFGWTR_EL2;
+		break;
+
+	case HDFGRTR_GROUP:
+		sr = is_read ? HDFGRTR_EL2 : HDFGWTR_EL2;
+		break;
+
+	case HAFGRTR_GROUP:
+		sr = HAFGRTR_EL2;
+		break;
+
+	case HFGITR_GROUP:
+		sr = HFGITR_EL2;
+		break;
+
+	default:
+		WARN_ONCE(1, "Unhandled FGT group");
+		return false;
+	}
+
+	return !(kvm_get_sysreg_res0(kvm, sr) & BIT(tc.bit));
 }
 
 bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
@@ -1969,7 +2021,8 @@ bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
 		return false;
 	}
 
-	if (tc.fgt != __NO_FGT_GROUP__ && check_fgt_bit(val, tc))
+	if (tc.fgt != __NO_FGT_GROUP__ && check_fgt_bit(vcpu->kvm, is_read,
+							val, tc))
 		goto inject;
 
 	b = compute_trap_behaviour(vcpu, tc);
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 114+ messages in thread

* [PATCH 10/25] KVM: arm64: nv: Turn encoding ranges into discrete XArray stores
  2024-01-22 20:18 ` Marc Zyngier
@ 2024-01-22 20:18   ` Marc Zyngier
  -1 siblings, 0 replies; 114+ messages in thread
From: Marc Zyngier @ 2024-01-22 20:18 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown

In order to be able to store different values for each member of an
encoding range, replace xa_store_range() calls with discrete
xa_store() calls and an encoding iterator.

We end up using a bit more memory, but we gain some flexibility
that we will make use of shortly.
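
The iterator works like an odometer over the encoding fields. A self-contained sketch (shifts follow the usual arm64 Op0<<19|Op1<<16|CRn<<12|CRm<<8|Op2<<5 layout, but treat them as illustrative):

```c
#include <stdint.h>

#define OP0_SHIFT 19
#define OP1_SHIFT 16
#define CRN_SHIFT 12
#define CRM_SHIFT 8
#define OP2_SHIFT 5

#define OP1_MAX 0x7
#define CRN_MAX 0xf
#define CRM_MAX 0xf
#define OP2_MAX 0x7

static uint32_t mk_sysreg(uint32_t op0, uint32_t op1, uint32_t crn,
			  uint32_t crm, uint32_t op2)
{
	return (op0 << OP0_SHIFT) | (op1 << OP1_SHIFT) |
	       (crn << CRN_SHIFT) | (crm << CRM_SHIFT) | (op2 << OP2_SHIFT);
}

/* Successor of an encoding: bump Op2 first, carrying into CRm, CRn,
 * Op1 and finally Op0 as each field saturates */
static uint32_t encoding_next(uint32_t enc)
{
	uint32_t op0 = (enc >> OP0_SHIFT) & 0x3;
	uint32_t op1 = (enc >> OP1_SHIFT) & OP1_MAX;
	uint32_t crn = (enc >> CRN_SHIFT) & CRN_MAX;
	uint32_t crm = (enc >> CRM_SHIFT) & CRM_MAX;
	uint32_t op2 = (enc >> OP2_SHIFT) & OP2_MAX;

	if (op2 < OP2_MAX)
		return mk_sysreg(op0, op1, crn, crm, op2 + 1);
	if (crm < CRM_MAX)
		return mk_sysreg(op0, op1, crn, crm + 1, 0);
	if (crn < CRN_MAX)
		return mk_sysreg(op0, op1, crn + 1, 0, 0);
	if (op1 < OP1_MAX)
		return mk_sysreg(op0, op1 + 1, 0, 0, 0);
	return mk_sysreg(op0 + 1, 0, 0, 0, 0);
}
```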

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/emulate-nested.c | 31 ++++++++++++++++++++++++-------
 1 file changed, 24 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index ef46c2e45307..59622636b723 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -1757,6 +1757,28 @@ static __init void print_nv_trap_error(const struct encoding_to_trap_config *tc,
 		err);
 }
 
+static u32 encoding_next(u32 encoding)
+{
+	u8 op0, op1, crn, crm, op2;
+
+	op0 = sys_reg_Op0(encoding);
+	op1 = sys_reg_Op1(encoding);
+	crn = sys_reg_CRn(encoding);
+	crm = sys_reg_CRm(encoding);
+	op2 = sys_reg_Op2(encoding);
+
+	if (op2 < Op2_mask)
+		return sys_reg(op0, op1, crn, crm, op2 + 1);
+	if (crm < CRm_mask)
+		return sys_reg(op0, op1, crn, crm + 1, 0);
+	if (crn < CRn_mask)
+		return sys_reg(op0, op1, crn + 1, 0, 0);
+	if (op1 < Op1_mask)
+		return sys_reg(op0, op1 + 1, 0, 0, 0);
+
+	return sys_reg(op0 + 1, 0, 0, 0, 0);
+}
+
 int __init populate_nv_trap_config(void)
 {
 	int ret = 0;
@@ -1775,13 +1797,8 @@ int __init populate_nv_trap_config(void)
 			ret = -EINVAL;
 		}
 
-		if (cgt->encoding != cgt->end) {
-			prev = xa_store_range(&sr_forward_xa,
-					      cgt->encoding, cgt->end,
-					      xa_mk_value(cgt->tc.val),
-					      GFP_KERNEL);
-		} else {
-			prev = xa_store(&sr_forward_xa, cgt->encoding,
+		for (u32 enc = cgt->encoding; enc <= cgt->end; enc = encoding_next(enc)) {
+			prev = xa_store(&sr_forward_xa, enc,
 					xa_mk_value(cgt->tc.val), GFP_KERNEL);
 			if (prev && !xa_is_err(prev)) {
 				ret = -EINVAL;
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 114+ messages in thread

* [PATCH 11/25] KVM: arm64: Drop the requirement for XARRAY_MULTI
  2024-01-22 20:18 ` Marc Zyngier
@ 2024-01-22 20:18   ` Marc Zyngier
  -1 siblings, 0 replies; 114+ messages in thread
From: Marc Zyngier @ 2024-01-22 20:18 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown

Now that we don't use xa_store_range() anymore, drop the added
complexity of XARRAY_MULTI for KVM. It is likely still pulled
in by other bits of the kernel though.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/Kconfig | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index 6c3c8ca73e7f..5c2a672c06a8 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -39,7 +39,6 @@ menuconfig KVM
 	select HAVE_KVM_VCPU_RUN_PID_CHANGE
 	select SCHED_INFO
 	select GUEST_PERF_EVENTS if PERF_EVENTS
-	select XARRAY_MULTI
 	help
 	  Support hosting virtualized guest machines.
 
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 114+ messages in thread

* [PATCH 12/25] KVM: arm64: nv: Move system instructions to their own sys_reg_desc array
  2024-01-22 20:18 ` Marc Zyngier
@ 2024-01-22 20:18   ` Marc Zyngier
  -1 siblings, 0 replies; 114+ messages in thread
From: Marc Zyngier @ 2024-01-22 20:18 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown

As NV results in a bunch of system instructions being trapped, it makes
sense to pull the system instructions into their own little array, where
they will eventually be joined by AT, TLBI and a bunch of other CMOs.

Based on an initial patch by Jintack Lim.
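
The dispatch this enables can be sketched as follows (toy descriptor type and made-up encodings; the kernel's find_reg() does a binary search over sorted tables, a linear scan stands in here): look the instruction up in its own table, and a miss means UNDEF is injected.

```c
#include <stddef.h>
#include <stdint.h>

struct insn_desc {
	uint32_t	encoding;
	const char	*name;
};

/* Dedicated table for trapped system instructions (encodings made up) */
static const struct insn_desc sys_insn_descs[] = {
	{ 0x3510, "DC ISW" },
	{ 0x3512, "DC CSW" },
	{ 0x3513, "DC CISW" },
};

/* A miss (NULL) means the instruction is unsupported: inject UNDEF */
static const struct insn_desc *find_insn(uint32_t enc)
{
	for (size_t i = 0; i < sizeof(sys_insn_descs) / sizeof(sys_insn_descs[0]); i++)
		if (sys_insn_descs[i].encoding == enc)
			return &sys_insn_descs[i];
	return NULL;
}
```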

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/sys_regs.c | 59 +++++++++++++++++++++++++++++----------
 1 file changed, 44 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 041b11825578..501de653beb5 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -2197,16 +2197,6 @@ static u64 reset_hcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
  * guest...
  */
 static const struct sys_reg_desc sys_reg_descs[] = {
-	{ SYS_DESC(SYS_DC_ISW), access_dcsw },
-	{ SYS_DESC(SYS_DC_IGSW), access_dcgsw },
-	{ SYS_DESC(SYS_DC_IGDSW), access_dcgsw },
-	{ SYS_DESC(SYS_DC_CSW), access_dcsw },
-	{ SYS_DESC(SYS_DC_CGSW), access_dcgsw },
-	{ SYS_DESC(SYS_DC_CGDSW), access_dcgsw },
-	{ SYS_DESC(SYS_DC_CISW), access_dcsw },
-	{ SYS_DESC(SYS_DC_CIGSW), access_dcgsw },
-	{ SYS_DESC(SYS_DC_CIGDSW), access_dcgsw },
-
 	DBG_BCR_BVR_WCR_WVR_EL1(0),
 	DBG_BCR_BVR_WCR_WVR_EL1(1),
 	{ SYS_DESC(SYS_MDCCINT_EL1), trap_debug_regs, reset_val, MDCCINT_EL1, 0 },
@@ -2738,6 +2728,18 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	EL2_REG(SP_EL2, NULL, reset_unknown, 0),
 };
 
+static struct sys_reg_desc sys_insn_descs[] = {
+	{ SYS_DESC(SYS_DC_ISW), access_dcsw },
+	{ SYS_DESC(SYS_DC_IGSW), access_dcgsw },
+	{ SYS_DESC(SYS_DC_IGDSW), access_dcgsw },
+	{ SYS_DESC(SYS_DC_CSW), access_dcsw },
+	{ SYS_DESC(SYS_DC_CGSW), access_dcgsw },
+	{ SYS_DESC(SYS_DC_CGDSW), access_dcgsw },
+	{ SYS_DESC(SYS_DC_CISW), access_dcsw },
+	{ SYS_DESC(SYS_DC_CIGSW), access_dcgsw },
+	{ SYS_DESC(SYS_DC_CIGDSW), access_dcgsw },
+};
+
 static const struct sys_reg_desc *first_idreg;
 
 static bool trap_dbgdidr(struct kvm_vcpu *vcpu,
@@ -3431,6 +3433,24 @@ static bool emulate_sys_reg(struct kvm_vcpu *vcpu,
 	return false;
 }
 
+static int emulate_sys_instr(struct kvm_vcpu *vcpu, struct sys_reg_params *p)
+{
+	const struct sys_reg_desc *r;
+
+	/* Search from the system instruction table. */
+	r = find_reg(p, sys_insn_descs, ARRAY_SIZE(sys_insn_descs));
+
+	if (likely(r)) {
+		perform_access(vcpu, p, r);
+	} else {
+		kvm_err("Unsupported guest sys instruction at: %lx\n",
+			*vcpu_pc(vcpu));
+		print_sys_reg_instr(p);
+		kvm_inject_undefined(vcpu);
+	}
+	return 1;
+}
+
 static void kvm_reset_id_regs(struct kvm_vcpu *vcpu)
 {
 	const struct sys_reg_desc *idreg = first_idreg;
@@ -3478,7 +3498,8 @@ void kvm_reset_sys_regs(struct kvm_vcpu *vcpu)
 }
 
 /**
- * kvm_handle_sys_reg -- handles a mrs/msr trap on a guest sys_reg access
+ * kvm_handle_sys_reg -- handles a system instruction or mrs/msr instruction
+ *			 trap on a guest execution
  * @vcpu: The VCPU pointer
  */
 int kvm_handle_sys_reg(struct kvm_vcpu *vcpu)
@@ -3495,12 +3516,19 @@ int kvm_handle_sys_reg(struct kvm_vcpu *vcpu)
 	params = esr_sys64_to_params(esr);
 	params.regval = vcpu_get_reg(vcpu, Rt);
 
-	if (!emulate_sys_reg(vcpu, &params))
+	/* System register? */
+	if (params.Op0 == 2 || params.Op0 == 3) {
+		if (!emulate_sys_reg(vcpu, &params))
+			return 1;
+
+		if (!params.is_write)
+			vcpu_set_reg(vcpu, Rt, params.regval);
+
 		return 1;
+	}
 
-	if (!params.is_write)
-		vcpu_set_reg(vcpu, Rt, params.regval);
-	return 1;
+	/* Hints, PSTATE (Op0 == 0) and System instructions (Op0 == 1) */
+	return emulate_sys_instr(vcpu, &params);
 }
 
 /******************************************************************************
@@ -3954,6 +3982,7 @@ int __init kvm_sys_reg_table_init(void)
 	valid &= check_sysreg_table(cp15_regs, ARRAY_SIZE(cp15_regs), true);
 	valid &= check_sysreg_table(cp15_64_regs, ARRAY_SIZE(cp15_64_regs), true);
 	valid &= check_sysreg_table(invariant_sys_regs, ARRAY_SIZE(invariant_sys_regs), false);
+	valid &= check_sysreg_table(sys_insn_descs, ARRAY_SIZE(sys_insn_descs), false);
 
 	if (!valid)
 		return -EINVAL;
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 114+ messages in thread

* [PATCH 13/25] KVM: arm64: Always populate the trap configuration xarray
  2024-01-22 20:18 ` Marc Zyngier
@ 2024-01-22 20:18   ` Marc Zyngier
  -1 siblings, 0 replies; 114+ messages in thread
From: Marc Zyngier @ 2024-01-22 20:18 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown

As we are going to rely more and more on the global xarray that
contains the trap configuration, always populate it, even in the
non-NV case.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/sys_regs.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 501de653beb5..77cd818c23b0 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -3997,8 +3997,5 @@ int __init kvm_sys_reg_table_init(void)
 	if (!first_idreg)
 		return -EINVAL;
 
-	if (kvm_get_mode() == KVM_MODE_NV)
-		return populate_nv_trap_config();
-
-	return 0;
+	return populate_nv_trap_config();
 }
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 114+ messages in thread

* [PATCH 14/25] KVM: arm64: Register AArch64 system register entries with the sysreg xarray
  2024-01-22 20:18 ` Marc Zyngier
@ 2024-01-22 20:18   ` Marc Zyngier
  -1 siblings, 0 replies; 114+ messages in thread
From: Marc Zyngier @ 2024-01-22 20:18 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown

In order to reduce the number of lookups that we have to perform
when handling a sysreg, register each AArch64 sysreg descriptor
with the global xarray. The index of the descriptor is stored
as a 10 bit field in the data word.

Subsequent patches will retrieve and use the stored index.
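
The idx+1 storage trick (0 marks "no entry") can be sketched standalone; the bitfield layout mirrors the union trap_config comment in this patch, and the helper names are illustrative:

```c
#include <stdint.h>

#define TC_MSR_BITS	10

union trap_config {
	uint64_t	val;
	struct {
		uint64_t	cgt:10;
		uint64_t	fgt:4;
		uint64_t	bit:6;
		uint64_t	pol:1;
		uint64_t	fgf:5;
		uint64_t	msr:TC_MSR_BITS; /* stored as idx + 1 */
		uint64_t	unused:27;
		uint64_t	mbz:1;
	};
};

/* Since 0 means "no entry", store the index off by one */
static uint64_t set_msr_idx(uint64_t val, unsigned int idx)
{
	union trap_config tc = { .val = val };

	tc.msr = idx + 1;
	return tc.val;
}

/* Recover the index, or -1 if no entry was registered */
static int get_msr_idx(uint64_t val)
{
	union trap_config tc = { .val = val };

	return tc.msr ? (int)tc.msr - 1 : -1;
}
```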

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h |  3 +++
 arch/arm64/kvm/emulate-nested.c   | 39 +++++++++++++++++++++++++++++--
 arch/arm64/kvm/sys_regs.c         | 11 ++++++++-
 3 files changed, 50 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index fe35c59214ad..e7a6219f2929 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1083,6 +1083,9 @@ int kvm_handle_cp10_id(struct kvm_vcpu *vcpu);
 void kvm_reset_sys_regs(struct kvm_vcpu *vcpu);
 
 int __init kvm_sys_reg_table_init(void);
+struct sys_reg_desc;
+int __init populate_sysreg_config(const struct sys_reg_desc *sr,
+				  unsigned int idx);
 int __init populate_nv_trap_config(void);
 
 bool lock_all_vcpus(struct kvm *kvm);
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index 59622636b723..342d43b66fda 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -427,12 +427,14 @@ static const complex_condition_check ccc[] = {
  * [19:14]	bit number in the FGT register (6 bits)
  * [20]		trap polarity (1 bit)
  * [25:21]	FG filter (5 bits)
- * [62:26]	Unused (37 bits)
+ * [35:26]	Main SysReg table index (10 bits)
+ * [62:36]	Unused (27 bits)
  * [63]		RES0 - Must be zero, as lost on insertion in the xarray
  */
 #define TC_CGT_BITS	10
 #define TC_FGT_BITS	4
 #define TC_FGF_BITS	5
+#define TC_MSR_BITS	10
 
 union trap_config {
 	u64	val;
@@ -442,7 +444,8 @@ union trap_config {
 		unsigned long	bit:6;		 /* Bit number */
 		unsigned long	pol:1;		 /* Polarity */
 		unsigned long	fgf:TC_FGF_BITS; /* Fine Grained Filter */
-		unsigned long	unused:37;	 /* Unused, should be zero */
+		unsigned long	msr:TC_MSR_BITS; /* Main SysReg index */
+		unsigned long	unused:27;	 /* Unused, should be zero */
 		unsigned long	mbz:1;		 /* Must Be Zero */
 	};
 };
@@ -1862,6 +1865,38 @@ int __init populate_nv_trap_config(void)
 	return ret;
 }
 
+int __init populate_sysreg_config(const struct sys_reg_desc *sr,
+				  unsigned int idx)
+{
+	union trap_config tc;
+	u32 encoding;
+	void *ret;
+
+	/*
+	 * 0 is a valid value for the index, but not for the storage.
+	 * We'll store (idx+1), so check against an offset'd limit.
+	 */
+	if (idx >= (BIT(TC_MSR_BITS) - 1)) {
+		kvm_err("sysreg %s (%d) out of range\n", sr->name, idx);
+		return -EINVAL;
+	}
+
+	encoding = sys_reg(sr->Op0, sr->Op1, sr->CRn, sr->CRm, sr->Op2);
+	tc = get_trap_config(encoding);
+
+	if (tc.msr) {
+		kvm_err("sysreg %s (%d) duplicate entry (%d)\n",
+			sr->name, idx - 1, tc.msr);
+		return -EINVAL;
+	}
+
+	tc.msr = idx + 1;
+	ret = xa_store(&sr_forward_xa, encoding,
+		       xa_mk_value(tc.val), GFP_KERNEL);
+
+	return xa_err(ret);
+}
+
 static enum trap_behaviour get_behaviour(struct kvm_vcpu *vcpu,
 					 const struct trap_bits *tb)
 {
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 77cd818c23b0..65319193e443 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -3974,6 +3974,7 @@ int __init kvm_sys_reg_table_init(void)
 	struct sys_reg_params params;
 	bool valid = true;
 	unsigned int i;
+	int ret = 0;
 
 	/* Make sure tables are unique and in order. */
 	valid &= check_sysreg_table(sys_reg_descs, ARRAY_SIZE(sys_reg_descs), false);
@@ -3997,5 +3998,13 @@ int __init kvm_sys_reg_table_init(void)
 	if (!first_idreg)
 		return -EINVAL;
 
-	return populate_nv_trap_config();
+	ret = populate_nv_trap_config();
+
+	for (i = 0; !ret && i < ARRAY_SIZE(sys_reg_descs); i++)
+		ret = populate_sysreg_config(sys_reg_descs + i, i);
+
+	for (i = 0; !ret && i < ARRAY_SIZE(sys_insn_descs); i++)
+		ret = populate_sysreg_config(sys_insn_descs + i, i);
+
+	return ret;
 }
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 114+ messages in thread

* [PATCH 15/25] KVM: arm64: Use the xarray as the primary sysreg/sysinsn walker
  2024-01-22 20:18 ` Marc Zyngier
@ 2024-01-22 20:18   ` Marc Zyngier
  -1 siblings, 0 replies; 114+ messages in thread
From: Marc Zyngier @ 2024-01-22 20:18 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown

Since we always start sysreg/sysinsn handling by searching the
xarray, use it as the source of the index in the correct sys_reg_desc
array.

This allows some cleanup, such as moving the handling of unknown
sysregs to a single location.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_nested.h |  2 +-
 arch/arm64/kvm/emulate-nested.c     | 36 +++++++++++-----
 arch/arm64/kvm/sys_regs.c           | 64 +++++++++--------------------
 3 files changed, 46 insertions(+), 56 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
index 4882905357f4..68465f87d308 100644
--- a/arch/arm64/include/asm/kvm_nested.h
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -60,7 +60,7 @@ static inline u64 translate_ttbr0_el2_to_ttbr0_el1(u64 ttbr0)
 	return ttbr0 & ~GENMASK_ULL(63, 48);
 }
 
-extern bool __check_nv_sr_forward(struct kvm_vcpu *vcpu);
+extern bool __check_nv_sr_forward(struct kvm_vcpu *vcpu, int *sr_idx);
 
 int kvm_init_nv_sysregs(struct kvm *kvm);
 
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index 342d43b66fda..54ab4d240fc6 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -2001,7 +2001,7 @@ static bool check_fgt_bit(struct kvm *kvm, bool is_read,
 	return !(kvm_get_sysreg_res0(kvm, sr) & BIT(tc.bit));
 }
 
-bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
+bool __check_nv_sr_forward(struct kvm_vcpu *vcpu, int *sr_index)
 {
 	union trap_config tc;
 	enum trap_behaviour b;
@@ -2009,9 +2009,6 @@ bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
 	u32 sysreg;
 	u64 esr, val;
 
-	if (!vcpu_has_nv(vcpu) || is_hyp_ctxt(vcpu))
-		return false;
-
 	esr = kvm_vcpu_get_esr(vcpu);
 	sysreg = esr_sys64_to_sysreg(esr);
 	is_read = (esr & ESR_ELx_SYS64_ISS_DIR_MASK) == ESR_ELx_SYS64_ISS_DIR_READ;
@@ -2022,13 +2019,16 @@ bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
 	 * A value of 0 for the whole entry means that we know nothing
 	 * for this sysreg, and that it cannot be re-injected into the
 	 * nested hypervisor. In this situation, let's cut it short.
-	 *
-	 * Note that ultimately, we could also make use of the xarray
-	 * to store the index of the sysreg in the local descriptor
-	 * array, avoiding another search... Hint, hint...
 	 */
 	if (!tc.val)
-		return false;
+		goto local;
+
+	/*
+	 * If we're not nesting, immediately return to the caller, with the
+	 * sysreg index, should we have it.
+	 */
+	if (!vcpu_has_nv(vcpu) || is_hyp_ctxt(vcpu))
+		goto local;
 
 	switch ((enum fgt_group_id)tc.fgt) {
 	case __NO_FGT_GROUP__:
@@ -2070,7 +2070,7 @@ bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
 	case __NR_FGT_GROUP_IDS__:
 		/* Something is really wrong, bail out */
 		WARN_ONCE(1, "__NR_FGT_GROUP_IDS__");
-		return false;
+		goto local;
 	}
 
 	if (tc.fgt != __NO_FGT_GROUP__ && check_fgt_bit(vcpu->kvm, is_read,
@@ -2083,6 +2083,22 @@ bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
 	    ((b & BEHAVE_FORWARD_WRITE) && !is_read))
 		goto inject;
 
+local:
+	if (!tc.msr) {
+		struct sys_reg_params params;
+
+		params = esr_sys64_to_params(esr);
+
+		// IMPDEF range. See ARM DDI 0487E.a, section D12.3.2
+		if (!(params.Op0 == 3 && (params.CRn & 0b1011) == 0b1011))
+			print_sys_reg_msg(&params,
+					  "Unsupported guest access at: %lx\n",
+					  *vcpu_pc(vcpu));
+		kvm_inject_undefined(vcpu);
+		return true;
+	}
+
+	*sr_index = tc.msr - 1;
 	return false;
 
 inject:
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 65319193e443..794d1f8c9bfe 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -3397,12 +3397,6 @@ int kvm_handle_cp14_32(struct kvm_vcpu *vcpu)
 	return kvm_handle_cp_32(vcpu, &params, cp14_regs, ARRAY_SIZE(cp14_regs));
 }
 
-static bool is_imp_def_sys_reg(struct sys_reg_params *params)
-{
-	// See ARM DDI 0487E.a, section D12.3.2
-	return params->Op0 == 3 && (params->CRn & 0b1011) == 0b1011;
-}
-
 /**
  * emulate_sys_reg - Emulate a guest access to an AArch64 system register
  * @vcpu: The VCPU pointer
@@ -3411,44 +3405,22 @@ static bool is_imp_def_sys_reg(struct sys_reg_params *params)
  * Return: true if the system register access was successful, false otherwise.
  */
 static bool emulate_sys_reg(struct kvm_vcpu *vcpu,
-			   struct sys_reg_params *params)
+			    struct sys_reg_params *params)
 {
 	const struct sys_reg_desc *r;
 
 	r = find_reg(params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
-
 	if (likely(r)) {
 		perform_access(vcpu, params, r);
 		return true;
 	}
 
-	if (is_imp_def_sys_reg(params)) {
-		kvm_inject_undefined(vcpu);
-	} else {
-		print_sys_reg_msg(params,
-				  "Unsupported guest sys_reg access at: %lx [%08lx]\n",
-				  *vcpu_pc(vcpu), *vcpu_cpsr(vcpu));
-		kvm_inject_undefined(vcpu);
-	}
-	return false;
-}
-
-static int emulate_sys_instr(struct kvm_vcpu *vcpu, struct sys_reg_params *p)
-{
-	const struct sys_reg_desc *r;
-
-	/* Search from the system instruction table. */
-	r = find_reg(p, sys_insn_descs, ARRAY_SIZE(sys_insn_descs));
+	print_sys_reg_msg(params,
+			  "Unsupported guest sys_reg access at: %lx [%08lx]\n",
+			  *vcpu_pc(vcpu), *vcpu_cpsr(vcpu));
+	kvm_inject_undefined(vcpu);
 
-	if (likely(r)) {
-		perform_access(vcpu, p, r);
-	} else {
-		kvm_err("Unsupported guest sys instruction at: %lx\n",
-			*vcpu_pc(vcpu));
-		print_sys_reg_instr(p);
-		kvm_inject_undefined(vcpu);
-	}
-	return 1;
+	return false;
 }
 
 static void kvm_reset_id_regs(struct kvm_vcpu *vcpu)
@@ -3504,31 +3476,33 @@ void kvm_reset_sys_regs(struct kvm_vcpu *vcpu)
  */
 int kvm_handle_sys_reg(struct kvm_vcpu *vcpu)
 {
+	const struct sys_reg_desc *desc = NULL;
 	struct sys_reg_params params;
 	unsigned long esr = kvm_vcpu_get_esr(vcpu);
 	int Rt = kvm_vcpu_sys_get_rt(vcpu);
+	int sr_idx;
 
 	trace_kvm_handle_sys_reg(esr);
 
-	if (__check_nv_sr_forward(vcpu))
+	if (__check_nv_sr_forward(vcpu, &sr_idx))
 		return 1;
 
 	params = esr_sys64_to_params(esr);
 	params.regval = vcpu_get_reg(vcpu, Rt);
 
-	/* System register? */
-	if (params.Op0 == 2 || params.Op0 == 3) {
-		if (!emulate_sys_reg(vcpu, &params))
-			return 1;
+	if (params.Op0 == 2 || params.Op0 == 3)
+		desc = &sys_reg_descs[sr_idx];
+	else
+		desc = &sys_insn_descs[sr_idx];
 
-		if (!params.is_write)
-			vcpu_set_reg(vcpu, Rt, params.regval);
+	perform_access(vcpu, &params, desc);
 
-		return 1;
-	}
+	/* Read from system register? */
+	if (!params.is_write &&
+	    (params.Op0 == 2 || params.Op0 == 3))
+		vcpu_set_reg(vcpu, Rt, params.regval);
 
-	/* Hints, PSTATE (Op0 == 0) and System instructions (Op0 == 1) */
-	return emulate_sys_instr(vcpu, &params);
+	return 1;
 }
 
 /******************************************************************************
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 114+ messages in thread

* [PATCH 16/25] KVM: arm64: Rename __check_nv_sr_forward() to triage_sysreg_trap()
  2024-01-22 20:18 ` Marc Zyngier
@ 2024-01-22 20:18   ` Marc Zyngier
  -1 siblings, 0 replies; 114+ messages in thread
From: Marc Zyngier @ 2024-01-22 20:18 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown

__check_nv_sr_forward() is not specific to NV anymore, and does
a lot more. Rename it to triage_sysreg_trap(), making it plain
that its role is to work out where an exception is to be handled.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_nested.h | 1 -
 arch/arm64/kvm/emulate-nested.c     | 2 +-
 arch/arm64/kvm/sys_regs.c           | 2 +-
 arch/arm64/kvm/sys_regs.h           | 2 ++
 4 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
index 68465f87d308..c77d795556e1 100644
--- a/arch/arm64/include/asm/kvm_nested.h
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -60,7 +60,6 @@ static inline u64 translate_ttbr0_el2_to_ttbr0_el1(u64 ttbr0)
 	return ttbr0 & ~GENMASK_ULL(63, 48);
 }
 
-extern bool __check_nv_sr_forward(struct kvm_vcpu *vcpu, int *sr_idx);
 
 int kvm_init_nv_sysregs(struct kvm *kvm);
 
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index 54ab4d240fc6..b39ced4ea331 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -2001,7 +2001,7 @@ static bool check_fgt_bit(struct kvm *kvm, bool is_read,
 	return !(kvm_get_sysreg_res0(kvm, sr) & BIT(tc.bit));
 }
 
-bool __check_nv_sr_forward(struct kvm_vcpu *vcpu, int *sr_index)
+bool triage_sysreg_trap(struct kvm_vcpu *vcpu, int *sr_index)
 {
 	union trap_config tc;
 	enum trap_behaviour b;
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 794d1f8c9bfe..c48bc2577162 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -3484,7 +3484,7 @@ int kvm_handle_sys_reg(struct kvm_vcpu *vcpu)
 
 	trace_kvm_handle_sys_reg(esr);
 
-	if (__check_nv_sr_forward(vcpu, &sr_idx))
+	if (triage_sysreg_trap(vcpu, &sr_idx))
 		return 1;
 
 	params = esr_sys64_to_params(esr);
diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
index c65c129b3500..997eea21ba2a 100644
--- a/arch/arm64/kvm/sys_regs.h
+++ b/arch/arm64/kvm/sys_regs.h
@@ -233,6 +233,8 @@ int kvm_sys_reg_get_user(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg,
 int kvm_sys_reg_set_user(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg,
 			 const struct sys_reg_desc table[], unsigned int num);
 
+bool triage_sysreg_trap(struct kvm_vcpu *vcpu, int *sr_index);
+
 #define AA32(_x)	.aarch32_map = AA32_##_x
 #define Op0(_x) 	.Op0 = _x
 #define Op1(_x) 	.Op1 = _x
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 114+ messages in thread

* [PATCH 17/25] KVM: arm64: Add Fine-Grained UNDEF tracking information
  2024-01-22 20:18 ` Marc Zyngier
@ 2024-01-22 20:18   ` Marc Zyngier
  -1 siblings, 0 replies; 114+ messages in thread
From: Marc Zyngier @ 2024-01-22 20:18 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown

In order to efficiently handle accesses to system registers that
have been disabled, which must result in an UNDEF exception being
injected, we introduce the (slightly dubious) concept of
Fine-Grained UNDEF, modeled after the architectural Fine-Grained
Traps.

For each FGT group, we keep a 64-bit word that has the exact same
bit assignment as the corresponding FGT register, where a 1 indicates
that trapping this register should result in an UNDEF exception being
reinjected.

So far, nothing populates this information, nor sets the corresponding
trap bits.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h | 21 +++++++++++++++++++++
 arch/arm64/kvm/emulate-nested.c   | 12 ------------
 2 files changed, 21 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index e7a6219f2929..4e0ac507ca01 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -240,9 +240,30 @@ static inline u16 kvm_mpidr_index(struct kvm_mpidr_data *data, u64 mpidr)
 
 struct kvm_sysreg_masks;
 
+enum fgt_group_id {
+	__NO_FGT_GROUP__,
+	HFGxTR_GROUP,
+	HDFGRTR_GROUP,
+	HDFGWTR_GROUP = HDFGRTR_GROUP,
+	HFGITR_GROUP,
+	HAFGRTR_GROUP,
+
+	/* Must be last */
+	__NR_FGT_GROUP_IDS__
+};
+
 struct kvm_arch {
 	struct kvm_s2_mmu mmu;
 
+	/*
+	 * Fine-Grained UNDEF, mimicking the FGT layout defined by the
+	 * architecture. We track them globally, as we present the
+	 * same feature-set to all vcpus.
+	 *
+	 * Index 0 is currently spare.
+	 */
+	u64 fgu[__NR_FGT_GROUP_IDS__];
+
 	/* Interrupt controller */
 	struct vgic_dist	vgic;
 
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index b39ced4ea331..539b3913628d 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -1009,18 +1009,6 @@ static const struct encoding_to_trap_config encoding_to_cgt[] __initconst = {
 
 static DEFINE_XARRAY(sr_forward_xa);
 
-enum fgt_group_id {
-	__NO_FGT_GROUP__,
-	HFGxTR_GROUP,
-	HDFGRTR_GROUP,
-	HDFGWTR_GROUP = HDFGRTR_GROUP,
-	HFGITR_GROUP,
-	HAFGRTR_GROUP,
-
-	/* Must be last */
-	__NR_FGT_GROUP_IDS__
-};
-
 enum fg_filter_id {
 	__NO_FGF__,
 	HCRX_FGTnXS,
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 114+ messages in thread

* [PATCH 18/25] KVM: arm64: Propagate and handle Fine-Grained UNDEF bits
  2024-01-22 20:18 ` Marc Zyngier
@ 2024-01-22 20:18   ` Marc Zyngier
  -1 siblings, 0 replies; 114+ messages in thread
From: Marc Zyngier @ 2024-01-22 20:18 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown

In order to correctly honour our FGU bits, they must be converted
into a set of FGT bits, which then get merged as part of the existing
FGT setup.

The UNDEF injection itself takes place when the resulting trap is
handled.

This results in a bit of rework of the FGT macros in order to help
with the code generation, as burying per-CPU accesses in the macros
results in a lot of expansion, not to mention the vcpu->kvm access
on nVHE (kern_hyp_va() is not optimisation-friendly).
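The FGU-to-FGT conversion has to respect the dual polarity of the FGT
registers: positive bits trap when set, negative ("n") bits trap when
clear. A minimal sketch of that folding, with hypothetical polarity
masks standing in for the generated __reg_MASK/__reg_nMASK constants:

```c
#include <stdint.h>

/* Hypothetical polarity masks for one FGT register: bits in POS_MASK
 * trap when 1, bits in NEG_MASK ("n" bits) trap when 0. */
#define POS_MASK	0x00000000ffffffffULL
#define NEG_MASK	0xffffffff00000000ULL

/* Fold an FGU word into the set/clr pair used to build the final FGT
 * value, mirroring what compute_undef_clr_set() does in the patch. */
static void undef_clr_set(uint64_t fgu, uint64_t *clr, uint64_t *set)
{
	*set |= fgu & POS_MASK;	/* positive bits: set them to trap */
	*clr |= fgu & NEG_MASK;	/* negative bits: clear them to trap */
}

/* Final register value: start from the "trap nothing" baseline, apply
 * the accumulated set bits, then the accumulated clear bits. */
static uint64_t fgt_value(uint64_t base, uint64_t set, uint64_t clr)
{
	return (base | set) & ~clr;
}
```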

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/emulate-nested.c         | 11 ++++
 arch/arm64/kvm/hyp/include/hyp/switch.h | 81 +++++++++++++++++++------
 2 files changed, 72 insertions(+), 20 deletions(-)

diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index 539b3913628d..f64d1809fe79 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -2011,6 +2011,17 @@ bool triage_sysreg_trap(struct kvm_vcpu *vcpu, int *sr_index)
 	if (!tc.val)
 		goto local;
 
+	/*
+	 * If a sysreg can be trapped using a FGT, first check whether we
+	 * trap for the purpose of forbidding the feature. In that case,
+	 * inject an UNDEF.
+	 */
+	if (tc.fgt != __NO_FGT_GROUP__ &&
+	    (vcpu->kvm->arch.fgu[tc.fgt] & BIT(tc.bit))) {
+		kvm_inject_undefined(vcpu);
+		return true;
+	}
+
 	/*
 	 * If we're not nesting, immediately return to the caller, with the
 	 * sysreg index, should we have it.
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index a038320cdb08..a09149fd91ed 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -79,14 +79,48 @@ static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu)
 		clr |= ~hfg & __ ## reg ## _nMASK; 			\
 	} while(0)
 
-#define update_fgt_traps_cs(vcpu, reg, clr, set)			\
+#define reg_to_fgt_group_id(reg)					\
+	({								\
+		enum fgt_group_id id;					\
+		switch(reg) {						\
+		case HFGRTR_EL2:					\
+		case HFGWTR_EL2:					\
+			id = HFGxTR_GROUP;				\
+			break;						\
+		case HFGITR_EL2:					\
+			id = HFGITR_GROUP;				\
+			break;						\
+		case HDFGRTR_EL2:					\
+		case HDFGWTR_EL2:					\
+			id = HDFGRTR_GROUP;				\
+			break;						\
+		case HAFGRTR_EL2:					\
+			id = HAFGRTR_GROUP;				\
+			break;						\
+		default:						\
+			BUILD_BUG_ON(1);				\
+		}							\
+									\
+		id;							\
+	})
+
+#define compute_undef_clr_set(vcpu, kvm, reg, clr, set)			\
+	do {								\
+		u64 hfg = kvm->arch.fgu[reg_to_fgt_group_id(reg)];	\
+		set |= hfg & __ ## reg ## _MASK;			\
+		clr |= hfg & __ ## reg ## _nMASK; 			\
+	} while(0)
+
+#define update_fgt_traps_cs(hctxt, vcpu, kvm, reg, clr, set)		\
 	do {								\
-		struct kvm_cpu_context *hctxt =				\
-			&this_cpu_ptr(&kvm_host_data)->host_ctxt;	\
 		u64 c = 0, s = 0;					\
 									\
 		ctxt_sys_reg(hctxt, reg) = read_sysreg_s(SYS_ ## reg);	\
-		compute_clr_set(vcpu, reg, c, s);			\
+		if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu))		\
+			compute_clr_set(vcpu, reg, c, s);		\
+									\
+		compute_undef_clr_set(vcpu, kvm, reg, c, s);		\
+									\
 		s |= set;						\
 		c |= clr;						\
 		if (c || s) {						\
@@ -97,8 +131,8 @@ static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu)
 		}							\
 	} while(0)
 
-#define update_fgt_traps(vcpu, reg)		\
-	update_fgt_traps_cs(vcpu, reg, 0, 0)
+#define update_fgt_traps(hctxt, vcpu, kvm, reg)		\
+	update_fgt_traps_cs(hctxt, vcpu, kvm, reg, 0, 0)
 
 /*
  * Validate the fine grain trap masks.
@@ -122,6 +156,7 @@ static inline bool cpu_has_amu(void)
 static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpu_context *hctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
+	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
 	u64 r_clr = 0, w_clr = 0, r_set = 0, w_set = 0, tmp;
 	u64 r_val, w_val;
 
@@ -157,6 +192,9 @@ static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
 		compute_clr_set(vcpu, HFGWTR_EL2, w_clr, w_set);
 	}
 
+	compute_undef_clr_set(vcpu, kvm, HFGRTR_EL2, r_clr, r_set);
+	compute_undef_clr_set(vcpu, kvm, HFGWTR_EL2, w_clr, w_set);
+
 	/* The default to trap everything not handled or supported in KVM. */
 	tmp = HFGxTR_EL2_nAMAIR2_EL1 | HFGxTR_EL2_nMAIR2_EL1 | HFGxTR_EL2_nS2POR_EL1 |
 	      HFGxTR_EL2_nPOR_EL1 | HFGxTR_EL2_nPOR_EL0 | HFGxTR_EL2_nACCDATA_EL1;
@@ -172,20 +210,26 @@ static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
 	write_sysreg_s(r_val, SYS_HFGRTR_EL2);
 	write_sysreg_s(w_val, SYS_HFGWTR_EL2);
 
-	if (!vcpu_has_nv(vcpu) || is_hyp_ctxt(vcpu))
-		return;
-
-	update_fgt_traps(vcpu, HFGITR_EL2);
-	update_fgt_traps(vcpu, HDFGRTR_EL2);
-	update_fgt_traps(vcpu, HDFGWTR_EL2);
+	update_fgt_traps(hctxt, vcpu, kvm, HFGITR_EL2);
+	update_fgt_traps(hctxt, vcpu, kvm, HDFGRTR_EL2);
+	update_fgt_traps(hctxt, vcpu, kvm, HDFGWTR_EL2);
 
 	if (cpu_has_amu())
-		update_fgt_traps(vcpu, HAFGRTR_EL2);
+		update_fgt_traps(hctxt, vcpu, kvm, HAFGRTR_EL2);
 }
 
+#define __deactivate_fgt(hctxt, vcpu, kvm, reg)				\
+	do {								\
+		if ((vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)) ||	\
+		    kvm->arch.fgu[reg_to_fgt_group_id(reg)])		\
+			write_sysreg_s(ctxt_sys_reg(hctxt, reg),	\
+				       SYS_ ## reg);			\
+	} while(0)
+
 static inline void __deactivate_traps_hfgxtr(struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpu_context *hctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
+	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
 
 	if (!cpus_have_final_cap(ARM64_HAS_FGT))
 		return;
@@ -193,15 +237,12 @@ static inline void __deactivate_traps_hfgxtr(struct kvm_vcpu *vcpu)
 	write_sysreg_s(ctxt_sys_reg(hctxt, HFGRTR_EL2), SYS_HFGRTR_EL2);
 	write_sysreg_s(ctxt_sys_reg(hctxt, HFGWTR_EL2), SYS_HFGWTR_EL2);
 
-	if (!vcpu_has_nv(vcpu) || is_hyp_ctxt(vcpu))
-		return;
-
-	write_sysreg_s(ctxt_sys_reg(hctxt, HFGITR_EL2), SYS_HFGITR_EL2);
-	write_sysreg_s(ctxt_sys_reg(hctxt, HDFGRTR_EL2), SYS_HDFGRTR_EL2);
-	write_sysreg_s(ctxt_sys_reg(hctxt, HDFGWTR_EL2), SYS_HDFGWTR_EL2);
+	__deactivate_fgt(hctxt, vcpu, kvm, HFGITR_EL2);
+	__deactivate_fgt(hctxt, vcpu, kvm, HDFGRTR_EL2);
+	__deactivate_fgt(hctxt, vcpu, kvm, HDFGWTR_EL2);
 
 	if (cpu_has_amu())
-		write_sysreg_s(ctxt_sys_reg(hctxt, HAFGRTR_EL2), SYS_HAFGRTR_EL2);
+		__deactivate_fgt(hctxt, vcpu, kvm, HAFGRTR_EL2);
 }
 
 static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 114+ messages in thread

* [PATCH 19/25] KVM: arm64: Move existing feature disabling over to FGU infrastructure
  2024-01-22 20:18 ` Marc Zyngier
@ 2024-01-22 20:18   ` Marc Zyngier
  -1 siblings, 0 replies; 114+ messages in thread
From: Marc Zyngier @ 2024-01-22 20:18 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown

We already trap a bunch of existing features for the purpose of
disabling them (MAIR2, POR, ACCDATA, SME...).

Let's move them over to our brand new FGU infrastructure.
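The once-per-VM shape of this initialisation can be sketched as follows
(user-space C with hypothetical names; the real kvm_init_sysreg() does
this under kvm->arch.config_lock and uses the generated FGT bit macros):

```c
#include <stdbool.h>
#include <stdint.h>

/* Reduced stand-in for the relevant bits of struct kvm_arch. */
struct vm {
	bool fgu_initialized;	/* KVM_ARCH_FLAG_FGU_INITIALIZED */
	uint64_t fgu_hfgxtr;	/* fgu[HFGxTR_GROUP] */
};

#define FEAT_A_BIT	((uint64_t)1 << 4)	/* hypothetical FGT bit */
#define FEAT_B_BIT	((uint64_t)1 << 9)	/* hypothetical FGT bit */

static void vm_init_fgu(struct vm *vm)
{
	if (vm->fgu_initialized)	/* only the first vcpu does the work */
		return;

	/* UNDEF accesses to features KVM never exposes to the guest. */
	vm->fgu_hfgxtr = FEAT_A_BIT | FEAT_B_BIT;
	vm->fgu_initialized = true;
}
```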

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h       |  4 ++++
 arch/arm64/kvm/arm.c                    |  6 ++++++
 arch/arm64/kvm/hyp/include/hyp/switch.h | 17 +++--------------
 arch/arm64/kvm/sys_regs.c               | 23 +++++++++++++++++++++++
 4 files changed, 36 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 4e0ac507ca01..fe5ed4bcded0 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -297,6 +297,8 @@ struct kvm_arch {
 #define KVM_ARCH_FLAG_TIMER_PPIS_IMMUTABLE		6
 	/* Initial ID reg values loaded */
 #define KVM_ARCH_FLAG_ID_REGS_INITIALIZED		7
+	/* Fine-Grained UNDEF initialised */
+#define KVM_ARCH_FLAG_FGU_INITIALIZED			8
 	unsigned long flags;
 
 	/* VM-wide vCPU feature set */
@@ -1112,6 +1114,8 @@ int __init populate_nv_trap_config(void);
 bool lock_all_vcpus(struct kvm *kvm);
 void unlock_all_vcpus(struct kvm *kvm);
 
+void kvm_init_sysreg(struct kvm_vcpu *);
+
 /* MMIO helpers */
 void kvm_mmio_write_buf(void *buf, unsigned int len, unsigned long data);
 unsigned long kvm_mmio_read_buf(const void *buf, unsigned int len);
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index c063e84fc72c..9f806c9b7d5d 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -675,6 +675,12 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
 			return ret;
 	}
 
+	/*
+	 * This needs to happen after NV has imposed its own restrictions on
+	 * the feature set
+	 */
+	kvm_init_sysreg(vcpu);
+
 	ret = kvm_timer_enable(vcpu);
 	if (ret)
 		return ret;
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index a09149fd91ed..245f9c1ca666 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -157,7 +157,7 @@ static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpu_context *hctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
 	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
-	u64 r_clr = 0, w_clr = 0, r_set = 0, w_set = 0, tmp;
+	u64 r_clr = 0, w_clr = 0, r_set = 0, w_set = 0;
 	u64 r_val, w_val;
 
 	CHECK_FGT_MASKS(HFGRTR_EL2);
@@ -174,13 +174,6 @@ static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
 	ctxt_sys_reg(hctxt, HFGRTR_EL2) = read_sysreg_s(SYS_HFGRTR_EL2);
 	ctxt_sys_reg(hctxt, HFGWTR_EL2) = read_sysreg_s(SYS_HFGWTR_EL2);
 
-	if (cpus_have_final_cap(ARM64_SME)) {
-		tmp = HFGxTR_EL2_nSMPRI_EL1_MASK | HFGxTR_EL2_nTPIDR2_EL0_MASK;
-
-		r_clr |= tmp;
-		w_clr |= tmp;
-	}
-
 	/*
 	 * Trap guest writes to TCR_EL1 to prevent it from enabling HA or HD.
 	 */
@@ -195,15 +188,11 @@ static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
 	compute_undef_clr_set(vcpu, kvm, HFGRTR_EL2, r_clr, r_set);
 	compute_undef_clr_set(vcpu, kvm, HFGWTR_EL2, w_clr, w_set);
 
-	/* The default to trap everything not handled or supported in KVM. */
-	tmp = HFGxTR_EL2_nAMAIR2_EL1 | HFGxTR_EL2_nMAIR2_EL1 | HFGxTR_EL2_nS2POR_EL1 |
-	      HFGxTR_EL2_nPOR_EL1 | HFGxTR_EL2_nPOR_EL0 | HFGxTR_EL2_nACCDATA_EL1;
-
-	r_val = __HFGRTR_EL2_nMASK & ~tmp;
+	r_val = __HFGRTR_EL2_nMASK;
 	r_val |= r_set;
 	r_val &= ~r_clr;
 
-	w_val = __HFGWTR_EL2_nMASK & ~tmp;
+	w_val = __HFGWTR_EL2_nMASK;
 	w_val |= w_set;
 	w_val &= ~w_clr;
 
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index c48bc2577162..a62efd8a2959 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -3943,6 +3943,29 @@ int kvm_vm_ioctl_get_reg_writable_masks(struct kvm *kvm, struct reg_mask_range *
 	return 0;
 }
 
+void kvm_init_sysreg(struct kvm_vcpu *vcpu)
+{
+	struct kvm *kvm = vcpu->kvm;
+
+	mutex_lock(&kvm->arch.config_lock);
+
+	if (test_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags))
+		goto out;
+
+	kvm->arch.fgu[HFGxTR_GROUP] = (HFGxTR_EL2_nAMAIR2_EL1		|
+				       HFGxTR_EL2_nMAIR2_EL1		|
+				       HFGxTR_EL2_nS2POR_EL1		|
+				       HFGxTR_EL2_nPOR_EL1		|
+				       HFGxTR_EL2_nPOR_EL0		|
+				       HFGxTR_EL2_nACCDATA_EL1		|
+				       HFGxTR_EL2_nSMPRI_EL1_MASK	|
+				       HFGxTR_EL2_nTPIDR2_EL0_MASK);
+
+	set_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags);
+out:
+	mutex_unlock(&kvm->arch.config_lock);
+}
+
 int __init kvm_sys_reg_table_init(void)
 {
 	struct sys_reg_params params;
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 114+ messages in thread

* [PATCH 20/25] KVM: arm64: Streamline save/restore of HFG[RW]TR_EL2
  2024-01-22 20:18 ` Marc Zyngier
@ 2024-01-22 20:18   ` Marc Zyngier
  -1 siblings, 0 replies; 114+ messages in thread
From: Marc Zyngier @ 2024-01-22 20:18 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown

The way we save/restore HFG[RW]TR_EL2 can now be simplified, and
the Ampere erratum hack is the only thing that still stands out.
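The shape of the remaining special case on the restore path can be
summed up like this (a sketch with hypothetical names, not kernel
code): with the Ampere workaround active, HFGWTR_EL2 is always written
on guest entry, so it must always be written back on exit, even when
no NV or FGU bits are in play.

```c
#include <stdbool.h>
#include <stdint.h>

/* Does HFGWTR_EL2 need restoring on exit? Mirrors the condition in
 * __deactivate_traps_hfgxtr() after this patch. */
static bool must_restore_hfgwtr(bool ampere_workaround, bool nv_guest,
				uint64_t fgu_word)
{
	return ampere_workaround || nv_guest || fgu_word != 0;
}
```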

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/hyp/include/hyp/switch.h | 42 ++++++-------------------
 1 file changed, 9 insertions(+), 33 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 245f9c1ca666..2d5891518006 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -157,8 +157,6 @@ static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpu_context *hctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
 	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
-	u64 r_clr = 0, w_clr = 0, r_set = 0, w_set = 0;
-	u64 r_val, w_val;
 
 	CHECK_FGT_MASKS(HFGRTR_EL2);
 	CHECK_FGT_MASKS(HFGWTR_EL2);
@@ -171,34 +169,10 @@ static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
 	if (!cpus_have_final_cap(ARM64_HAS_FGT))
 		return;
 
-	ctxt_sys_reg(hctxt, HFGRTR_EL2) = read_sysreg_s(SYS_HFGRTR_EL2);
-	ctxt_sys_reg(hctxt, HFGWTR_EL2) = read_sysreg_s(SYS_HFGWTR_EL2);
-
-	/*
-	 * Trap guest writes to TCR_EL1 to prevent it from enabling HA or HD.
-	 */
-	if (cpus_have_final_cap(ARM64_WORKAROUND_AMPERE_AC03_CPU_38))
-		w_set |= HFGxTR_EL2_TCR_EL1_MASK;
-
-	if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)) {
-		compute_clr_set(vcpu, HFGRTR_EL2, r_clr, r_set);
-		compute_clr_set(vcpu, HFGWTR_EL2, w_clr, w_set);
-	}
-
-	compute_undef_clr_set(vcpu, kvm, HFGRTR_EL2, r_clr, r_set);
-	compute_undef_clr_set(vcpu, kvm, HFGWTR_EL2, w_clr, w_set);
-
-	r_val = __HFGRTR_EL2_nMASK;
-	r_val |= r_set;
-	r_val &= ~r_clr;
-
-	w_val = __HFGWTR_EL2_nMASK;
-	w_val |= w_set;
-	w_val &= ~w_clr;
-
-	write_sysreg_s(r_val, SYS_HFGRTR_EL2);
-	write_sysreg_s(w_val, SYS_HFGWTR_EL2);
-
+	update_fgt_traps(hctxt, vcpu, kvm, HFGRTR_EL2);
+	update_fgt_traps_cs(hctxt, vcpu, kvm, HFGWTR_EL2, 0,
+			    cpus_have_final_cap(ARM64_WORKAROUND_AMPERE_AC03_CPU_38) ?
+			    HFGxTR_EL2_TCR_EL1_MASK : 0);
 	update_fgt_traps(hctxt, vcpu, kvm, HFGITR_EL2);
 	update_fgt_traps(hctxt, vcpu, kvm, HDFGRTR_EL2);
 	update_fgt_traps(hctxt, vcpu, kvm, HDFGWTR_EL2);
@@ -223,9 +197,11 @@ static inline void __deactivate_traps_hfgxtr(struct kvm_vcpu *vcpu)
 	if (!cpus_have_final_cap(ARM64_HAS_FGT))
 		return;
 
-	write_sysreg_s(ctxt_sys_reg(hctxt, HFGRTR_EL2), SYS_HFGRTR_EL2);
-	write_sysreg_s(ctxt_sys_reg(hctxt, HFGWTR_EL2), SYS_HFGWTR_EL2);
-
+	__deactivate_fgt(hctxt, vcpu, kvm, HFGRTR_EL2);
+	if (cpus_have_final_cap(ARM64_WORKAROUND_AMPERE_AC03_CPU_38))
+		write_sysreg_s(ctxt_sys_reg(hctxt, HFGWTR_EL2), SYS_HFGWTR_EL2);
+	else
+		__deactivate_fgt(hctxt, vcpu, kvm, HFGWTR_EL2);
 	__deactivate_fgt(hctxt, vcpu, kvm, HFGITR_EL2);
 	__deactivate_fgt(hctxt, vcpu, kvm, HDFGRTR_EL2);
 	__deactivate_fgt(hctxt, vcpu, kvm, HDFGWTR_EL2);
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 114+ messages in thread

* [PATCH 21/25] KVM: arm64: Make TLBI OS/Range UNDEF if not advertised to the guest
  2024-01-22 20:18 ` Marc Zyngier
@ 2024-01-22 20:18   ` Marc Zyngier
  -1 siblings, 0 replies; 114+ messages in thread
From: Marc Zyngier @ 2024-01-22 20:18 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown

Outer Shareable and Range TLBI instructions shouldn't be made available
to the guest if they are not advertised. Use FGU to UNDEF those
instructions, and fall back to setting HCR_EL2.TTLBOS when the host
doesn't have FGT.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/sys_regs.c | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index a62efd8a2959..3c939ea4a28f 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -3949,6 +3949,9 @@ void kvm_init_sysreg(struct kvm_vcpu *vcpu)
 
 	mutex_lock(&kvm->arch.config_lock);
 
+	if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, OS))
+		vcpu->arch.hcr_el2 |= HCR_TTLBOS;
+
 	if (test_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags))
 		goto out;
 
@@ -3961,6 +3964,32 @@ void kvm_init_sysreg(struct kvm_vcpu *vcpu)
 				       HFGxTR_EL2_nSMPRI_EL1_MASK	|
 				       HFGxTR_EL2_nTPIDR2_EL0_MASK);
 
+	if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, OS))
+		kvm->arch.fgu[HFGITR_GROUP] |= (HFGITR_EL2_TLBIRVAALE1OS|
+						HFGITR_EL2_TLBIRVALE1OS	|
+						HFGITR_EL2_TLBIRVAAE1OS	|
+						HFGITR_EL2_TLBIRVAE1OS	|
+						HFGITR_EL2_TLBIVAALE1OS	|
+						HFGITR_EL2_TLBIVALE1OS	|
+						HFGITR_EL2_TLBIVAAE1OS	|
+						HFGITR_EL2_TLBIASIDE1OS	|
+						HFGITR_EL2_TLBIVAE1OS	|
+						HFGITR_EL2_TLBIVMALLE1OS);
+
+	if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, RANGE))
+		kvm->arch.fgu[HFGITR_GROUP] |= (HFGITR_EL2_TLBIRVAALE1	|
+						HFGITR_EL2_TLBIRVALE1	|
+						HFGITR_EL2_TLBIRVAAE1	|
+						HFGITR_EL2_TLBIRVAE1	|
+						HFGITR_EL2_TLBIRVAALE1IS|
+						HFGITR_EL2_TLBIRVALE1IS	|
+						HFGITR_EL2_TLBIRVAAE1IS	|
+						HFGITR_EL2_TLBIRVAE1IS	|
+						HFGITR_EL2_TLBIRVAALE1OS|
+						HFGITR_EL2_TLBIRVALE1OS	|
+						HFGITR_EL2_TLBIRVAAE1OS	|
+						HFGITR_EL2_TLBIRVAE1OS);
+
 	set_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags);
 out:
 	mutex_unlock(&kvm->arch.config_lock);
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 114+ messages in thread

* [PATCH 22/25] KVM: arm64: Make PIR{,E0}_EL1 UNDEF if S1PIE is not advertised to the guest
  2024-01-22 20:18 ` Marc Zyngier
@ 2024-01-22 20:18   ` Marc Zyngier
  -1 siblings, 0 replies; 114+ messages in thread
From: Marc Zyngier @ 2024-01-22 20:18 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown

As part of the ongoing effort to honor the guest configuration,
add the necessary checks to make PIR_EL1 and co UNDEF if not
advertised to the guest, and avoid context switching them.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 15 ++++++++++++++-
 arch/arm64/kvm/sys_regs.c                  |  4 ++++
 2 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
index bb6b571ec627..b34743292ca7 100644
--- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
+++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
@@ -37,6 +37,19 @@ static inline bool ctxt_has_mte(struct kvm_cpu_context *ctxt)
 	return kvm_has_mte(kern_hyp_va(vcpu->kvm));
 }
 
+static inline bool ctxt_has_s1pie(struct kvm_cpu_context *ctxt)
+{
+	struct kvm_vcpu *vcpu = ctxt->__hyp_running_vcpu;
+
+	if (!cpus_have_final_cap(ARM64_HAS_S1PIE))
+		return false;
+
+	if (!vcpu)
+		vcpu = container_of(ctxt, struct kvm_vcpu, arch.ctxt);
+
+	return kvm_has_feat(kern_hyp_va(vcpu->kvm), ID_AA64MMFR3_EL1, S1PIE, IMP);
+}
+
 static inline void __sysreg_save_el1_state(struct kvm_cpu_context *ctxt)
 {
 	ctxt_sys_reg(ctxt, SCTLR_EL1)	= read_sysreg_el1(SYS_SCTLR);
@@ -55,7 +68,7 @@ static inline void __sysreg_save_el1_state(struct kvm_cpu_context *ctxt)
 	ctxt_sys_reg(ctxt, CONTEXTIDR_EL1) = read_sysreg_el1(SYS_CONTEXTIDR);
 	ctxt_sys_reg(ctxt, AMAIR_EL1)	= read_sysreg_el1(SYS_AMAIR);
 	ctxt_sys_reg(ctxt, CNTKCTL_EL1)	= read_sysreg_el1(SYS_CNTKCTL);
-	if (cpus_have_final_cap(ARM64_HAS_S1PIE)) {
+	if (ctxt_has_s1pie(ctxt)) {
 		ctxt_sys_reg(ctxt, PIR_EL1)	= read_sysreg_el1(SYS_PIR);
 		ctxt_sys_reg(ctxt, PIRE0_EL1)	= read_sysreg_el1(SYS_PIRE0);
 	}
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 3c939ea4a28f..bcde43b81755 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -3990,6 +3990,10 @@ void kvm_init_sysreg(struct kvm_vcpu *vcpu)
 						HFGITR_EL2_TLBIRVAAE1OS	|
 						HFGITR_EL2_TLBIRVAE1OS);
 
+	if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, S1PIE, IMP))
+		kvm->arch.fgu[HFGxTR_GROUP] |= (HFGxTR_EL2_nPIRE0_EL1 |
+						HFGxTR_EL2_nPIR_EL1);
+
 	set_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags);
 out:
 	mutex_unlock(&kvm->arch.config_lock);
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 114+ messages in thread

* [PATCH 23/25] KVM: arm64: Make AMU sysreg UNDEF if FEAT_AMU is not advertised to the guest
  2024-01-22 20:18 ` Marc Zyngier
@ 2024-01-22 20:18   ` Marc Zyngier
  -1 siblings, 0 replies; 114+ messages in thread
From: Marc Zyngier @ 2024-01-22 20:18 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown

No AMU? No AMU! If we see an AMU-related trap, let's turn it into
an UNDEF!

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/sys_regs.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index bcde43b81755..afe6975fcf5c 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -3994,6 +3994,10 @@ void kvm_init_sysreg(struct kvm_vcpu *vcpu)
 		kvm->arch.fgu[HFGxTR_GROUP] |= (HFGxTR_EL2_nPIRE0_EL1 |
 						HFGxTR_EL2_nPIR_EL1);
 
+	if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, AMU, IMP))
+		kvm->arch.fgu[HAFGRTR_GROUP] |= ~(HAFGRTR_EL2_RES0 |
+						  HAFGRTR_EL2_RES1);
+
 	set_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags);
 out:
 	mutex_unlock(&kvm->arch.config_lock);
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 114+ messages in thread

* [PATCH 24/25] KVM: arm64: Make FEAT_MOPS UNDEF if not advertised to the guest
  2024-01-22 20:18 ` Marc Zyngier
@ 2024-01-22 20:18   ` Marc Zyngier
  -1 siblings, 0 replies; 114+ messages in thread
From: Marc Zyngier @ 2024-01-22 20:18 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown

We unconditionally enable FEAT_MOPS, which is obviously wrong.

Let's only do that when it is advertised to the guest, which means
we need to rely on a per-vcpu HCRX_EL2 shadow register.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_arm.h        | 4 +---
 arch/arm64/include/asm/kvm_host.h       | 1 +
 arch/arm64/kvm/hyp/include/hyp/switch.h | 2 +-
 arch/arm64/kvm/sys_regs.c               | 8 ++++++++
 4 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 3c6f8ba1e479..a1769e415d72 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -102,9 +102,7 @@
 #define HCR_HOST_NVHE_PROTECTED_FLAGS (HCR_HOST_NVHE_FLAGS | HCR_TSC)
 #define HCR_HOST_VHE_FLAGS (HCR_RW | HCR_TGE | HCR_E2H)
 
-#define HCRX_GUEST_FLAGS \
-	(HCRX_EL2_SMPME | HCRX_EL2_TCR2En | \
-	 (cpus_have_final_cap(ARM64_HAS_MOPS) ? (HCRX_EL2_MSCEn | HCRX_EL2_MCE2) : 0))
+#define HCRX_GUEST_FLAGS (HCRX_EL2_SMPME | HCRX_EL2_TCR2En)
 #define HCRX_HOST_FLAGS (HCRX_EL2_MSCEn | HCRX_EL2_TCR2En)
 
 /* TCR_EL2 Registers bits */
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index fe5ed4bcded0..22343354db3e 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -584,6 +584,7 @@ struct kvm_vcpu_arch {
 
 	/* Values of trap registers for the guest. */
 	u64 hcr_el2;
+	u64 hcrx_el2;
 	u64 mdcr_el2;
 	u64 cptr_el2;
 
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 2d5891518006..e3fcf8c4d5b4 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -236,7 +236,7 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
 	write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
 
 	if (cpus_have_final_cap(ARM64_HAS_HCX)) {
-		u64 hcrx = HCRX_GUEST_FLAGS;
+		u64 hcrx = vcpu->arch.hcrx_el2;
 		if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)) {
 			u64 clr = 0, set = 0;
 
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index afe6975fcf5c..b7977e08e4ef 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -3952,6 +3952,14 @@ void kvm_init_sysreg(struct kvm_vcpu *vcpu)
 	if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, OS))
 		vcpu->arch.hcr_el2 |= HCR_TTLBOS;
 
+	if (cpus_have_final_cap(ARM64_HAS_HCX)) {
+		vcpu->arch.hcrx_el2 = HCRX_GUEST_FLAGS;
+
+		if (kvm_has_feat(kern_hyp_va(vcpu->kvm),
+				 ID_AA64ISAR2_EL1, MOPS, IMP))
+			vcpu->arch.hcrx_el2 |= (HCRX_EL2_MSCEn | HCRX_EL2_MCE2);
+	}
+
 	if (test_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags))
 		goto out;
 
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 114+ messages in thread

* [PATCH 25/25] KVM: arm64: Add debugfs file for guest's ID registers
  2024-01-22 20:18 ` Marc Zyngier
@ 2024-01-22 20:18   ` Marc Zyngier
  -1 siblings, 0 replies; 114+ messages in thread
From: Marc Zyngier @ 2024-01-22 20:18 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown

Debugging ID register setup can be a complicated affair. Give the
kernel hacker a way to dump that state in an easy-to-parse way.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h |  3 ++
 arch/arm64/kvm/sys_regs.c         | 81 +++++++++++++++++++++++++++++++
 2 files changed, 84 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 22343354db3e..065e2deb3ea4 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -319,6 +319,9 @@ struct kvm_arch {
 	/* PMCR_EL0.N value for the guest */
 	u8 pmcr_n;
 
+	/* Iterator for idreg debugfs */
+	u8	idreg_debugfs_iter;
+
 	/* Hypercall features firmware registers' descriptor */
 	struct kvm_smccc_features smccc_feat;
 	struct maple_tree smccc_filter;
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index b7977e08e4ef..3278ba0d0347 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -12,6 +12,7 @@
 #include <linux/bitfield.h>
 #include <linux/bsearch.h>
 #include <linux/cacheinfo.h>
+#include <linux/debugfs.h>
 #include <linux/kvm_host.h>
 #include <linux/mm.h>
 #include <linux/printk.h>
@@ -3423,6 +3424,81 @@ static bool emulate_sys_reg(struct kvm_vcpu *vcpu,
 	return false;
 }
 
+static void *idregs_debug_start(struct seq_file *s, loff_t *pos)
+{
+	struct kvm *kvm = s->private;
+	u8 *iter;
+
+	mutex_lock(&kvm->arch.config_lock);
+
+	iter = &kvm->arch.idreg_debugfs_iter;
+	if (*iter == (u8)~0) {
+		*iter = *pos;
+		if (*iter >= KVM_ARM_ID_REG_NUM)
+			iter = NULL;
+	} else {
+		iter = ERR_PTR(-EBUSY);
+	}
+
+	mutex_unlock(&kvm->arch.config_lock);
+
+	return iter;
+}
+
+static void *idregs_debug_next(struct seq_file *s, void *v, loff_t *pos)
+{
+	struct kvm *kvm = s->private;
+
+	(*pos)++;
+
+	if ((kvm->arch.idreg_debugfs_iter + 1) < KVM_ARM_ID_REG_NUM) {
+		kvm->arch.idreg_debugfs_iter++;
+
+		return &kvm->arch.idreg_debugfs_iter;
+	}
+
+	return NULL;
+}
+
+static void idregs_debug_stop(struct seq_file *s, void *v)
+{
+	struct kvm *kvm = s->private;
+
+	if (IS_ERR(v))
+		return;
+
+	mutex_lock(&kvm->arch.config_lock);
+
+	kvm->arch.idreg_debugfs_iter = ~0;
+
+	mutex_unlock(&kvm->arch.config_lock);
+}
+
+static int idregs_debug_show(struct seq_file *s, void *v)
+{
+	struct kvm *kvm = s->private;
+	const struct sys_reg_desc *desc;
+
+	desc = first_idreg + kvm->arch.idreg_debugfs_iter;
+
+	if (!desc->name)
+		return 0;
+
+	seq_printf(s, "%20s:\t%016llx\n",
+		   desc->name, IDREG(kvm, IDX_IDREG(kvm->arch.idreg_debugfs_iter)));
+
+	return 0;
+}
+
+static const struct seq_operations idregs_debug_sops = {
+	.start	= idregs_debug_start,
+	.next	= idregs_debug_next,
+	.stop	= idregs_debug_stop,
+	.show	= idregs_debug_show,
+};
+
+DEFINE_SEQ_ATTRIBUTE(idregs_debug);
+
 static void kvm_reset_id_regs(struct kvm_vcpu *vcpu)
 {
 	const struct sys_reg_desc *idreg = first_idreg;
@@ -3442,6 +3518,11 @@ static void kvm_reset_id_regs(struct kvm_vcpu *vcpu)
 		id = reg_to_encoding(idreg);
 	}
 
+	kvm->arch.idreg_debugfs_iter = ~0;
+
+	debugfs_create_file("idregs", 0444, kvm->debugfs_dentry, kvm,
+			    &idregs_debug_fops);
+
 	set_bit(KVM_ARCH_FLAG_ID_REGS_INITIALIZED, &kvm->arch.flags);
 }
 
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 114+ messages in thread

* Re: [PATCH 01/25] arm64: sysreg: Add missing ID_AA64ISAR[13]_EL1 fields and variants
  2024-01-22 20:18   ` Marc Zyngier
@ 2024-01-22 21:29     ` Mark Brown
  -1 siblings, 0 replies; 114+ messages in thread
From: Mark Brown @ 2024-01-22 21:29 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
	Joey Gouly

[-- Attachment #1: Type: text/plain, Size: 1229 bytes --]

On Mon, Jan 22, 2024 at 08:18:28PM +0000, Marc Zyngier wrote:

> Despite having the control bits for FEAT_SPECRES and FEAT_PACM,
> the ID registers fields are either incomplete or missing.

In general the registers tend to get updated register by register so
it's not 100% surprising to get partial definition of features, though
more usually it's the ID register that's there.

> Fix it.

Checking against DDI0601 2023-12 the additions here look correct with
the proviso below; I didn't audit for other missing definitions in the
two affected registers (ID_AA64ISAR[13]_EL1).

Reviewed-by: Mark Brown <broonie@kernel.org>

> ---
>  arch/arm64/tools/sysreg | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
> index fa3fe0856880..53daaaef46cb 100644
> --- a/arch/arm64/tools/sysreg
> +++ b/arch/arm64/tools/sysreg
> @@ -1366,6 +1366,7 @@ EndEnum
>  UnsignedEnum	43:40	SPECRES
>  	0b0000	NI
>  	0b0001	IMP
> +	0b0010	COSP_RCTX

This is OK in itself but there's also a copy of this in ID_ISAR6_EL1
which needs updating too (and presumably ought to be considered by the
hypervisor, I didn't look at anything else yet).

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 114+ messages in thread

* Re: [PATCH 22/25] KVM: arm64: Make PIR{,E0}_EL1 UNDEF if S1PIE is not advertised to the guest
  2024-01-22 20:18   ` Marc Zyngier
@ 2024-01-23 11:48     ` Joey Gouly
  -1 siblings, 0 replies; 114+ messages in thread
From: Joey Gouly @ 2024-01-23 11:48 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
	Mark Brown

Hello,

On Mon, Jan 22, 2024 at 08:18:49PM +0000, Marc Zyngier wrote:
> As part of the ongoing effort to honor the guest configuration,
> add the necessary checks to make PIR_EL1 and co UNDEF if not
> advertised to the guest, and avoid context switching them.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 15 ++++++++++++++-
>  arch/arm64/kvm/sys_regs.c                  |  4 ++++
>  2 files changed, 18 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
> index bb6b571ec627..b34743292ca7 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
> @@ -37,6 +37,19 @@ static inline bool ctxt_has_mte(struct kvm_cpu_context *ctxt)
>  	return kvm_has_mte(kern_hyp_va(vcpu->kvm));
>  }
>  
> +static inline bool ctxt_has_s1pie(struct kvm_cpu_context *ctxt)
> +{
> +	struct kvm_vcpu *vcpu = ctxt->__hyp_running_vcpu;
> +
> +	if (!cpus_have_final_cap(ARM64_HAS_S1PIE))
> +		return false;
> +
> +	if (!vcpu)
> +		vcpu = container_of(ctxt, struct kvm_vcpu, arch.ctxt);
> +
> +	return kvm_has_feat(kern_hyp_va(vcpu->kvm), ID_AA64MMFR3_EL1, S1PIE, IMP);
> +}
> +
>  static inline void __sysreg_save_el1_state(struct kvm_cpu_context *ctxt)
>  {
>  	ctxt_sys_reg(ctxt, SCTLR_EL1)	= read_sysreg_el1(SYS_SCTLR);
> @@ -55,7 +68,7 @@ static inline void __sysreg_save_el1_state(struct kvm_cpu_context *ctxt)
>  	ctxt_sys_reg(ctxt, CONTEXTIDR_EL1) = read_sysreg_el1(SYS_CONTEXTIDR);
>  	ctxt_sys_reg(ctxt, AMAIR_EL1)	= read_sysreg_el1(SYS_AMAIR);
>  	ctxt_sys_reg(ctxt, CNTKCTL_EL1)	= read_sysreg_el1(SYS_CNTKCTL);
> -	if (cpus_have_final_cap(ARM64_HAS_S1PIE)) {
> +	if (ctxt_has_s1pie(ctxt)) {
>  		ctxt_sys_reg(ctxt, PIR_EL1)	= read_sysreg_el1(SYS_PIR);
>  		ctxt_sys_reg(ctxt, PIRE0_EL1)	= read_sysreg_el1(SYS_PIRE0);
>  	}

Missing the corresponding change in __sysreg_restore_el1_state().
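
For comparison, the restore side would presumably want the same gating,
something like this sketch against __sysreg_restore_el1_state() (the exact
write_sysreg_el1() lines are from memory and should be double-checked):

```diff
-	if (cpus_have_final_cap(ARM64_HAS_S1PIE)) {
+	if (ctxt_has_s1pie(ctxt)) {
 		write_sysreg_el1(ctxt_sys_reg(ctxt, PIR_EL1),	SYS_PIR);
 		write_sysreg_el1(ctxt_sys_reg(ctxt, PIRE0_EL1),	SYS_PIRE0);
 	}
```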

> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 3c939ea4a28f..bcde43b81755 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -3990,6 +3990,10 @@ void kvm_init_sysreg(struct kvm_vcpu *vcpu)
>  						HFGITR_EL2_TLBIRVAAE1OS	|
>  						HFGITR_EL2_TLBIRVAE1OS);
>  
> +	if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, S1PIE, IMP))
> +		kvm->arch.fgu[HFGxTR_GROUP] |= (HFGxTR_EL2_nPIRE0_EL1 |
> +						HFGxTR_EL2_nPIR_EL1);
> +
>  	set_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags);
>  out:
>  	mutex_unlock(&kvm->arch.config_lock);

Thanks,
Joey


* Re: [PATCH 03/25] KVM: arm64: nv: Add sanitising to VNCR-backed sysregs
  2024-01-22 20:18   ` Marc Zyngier
@ 2024-01-23 13:48     ` Joey Gouly
  -1 siblings, 0 replies; 114+ messages in thread
From: Joey Gouly @ 2024-01-23 13:48 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
	Mark Brown

Hi,

On Mon, Jan 22, 2024 at 08:18:30PM +0000, Marc Zyngier wrote:
> VNCR-backed "registers" are actually only memory. Which means that
> there is zero control over what the guest can write, and that it
> is the hypervisor's job to actually sanitise the content of the
> backing store. Yeah, this is fun.
> 
> In order to preserve some form of sanity, add a repainting mechanism
> that makes use of a per-VM set of RES0/RES1 masks, one pair per VNCR
> register. These masks get applied on access to the backing store via
> __vcpu_sys_reg(), ensuring that the state that is consumed by KVM is
> correct.
> 
> So far, nothing populates these masks, but stay tuned.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_host.h | 25 +++++++++++++++++++
>  arch/arm64/kvm/arm.c              |  1 +
>  arch/arm64/kvm/nested.c           | 41 ++++++++++++++++++++++++++++++-
>  3 files changed, 66 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index c0cf9c5f5e8d..fe35c59214ad 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -238,6 +238,8 @@ static inline u16 kvm_mpidr_index(struct kvm_mpidr_data *data, u64 mpidr)
>  	return index;
>  }
>  
> +struct kvm_sysreg_masks;
> +
>  struct kvm_arch {
>  	struct kvm_s2_mmu mmu;
>  
> @@ -312,6 +314,9 @@ struct kvm_arch {
>  #define KVM_ARM_ID_REG_NUM	(IDREG_IDX(sys_reg(3, 0, 0, 7, 7)) + 1)
>  	u64 id_regs[KVM_ARM_ID_REG_NUM];
>  
> +	/* Masks for VNCR-backed sysregs */
> +	struct kvm_sysreg_masks	*sysreg_masks;
> +
>  	/*
>  	 * For an untrusted host VM, 'pkvm.handle' is used to lookup
>  	 * the associated pKVM instance in the hypervisor.
> @@ -474,6 +479,13 @@ enum vcpu_sysreg {
>  	NR_SYS_REGS	/* Nothing after this line! */
>  };
>  
> +struct kvm_sysreg_masks {
> +	struct {
> +		u64	res0;
> +		u64	res1;
> +	} mask[NR_SYS_REGS - __VNCR_START__];
> +};
> +
>  struct kvm_cpu_context {
>  	struct user_pt_regs regs;	/* sp = sp_el0 */
>  
> @@ -868,7 +880,20 @@ static inline u64 *__ctxt_sys_reg(const struct kvm_cpu_context *ctxt, int r)
>  
>  #define ctxt_sys_reg(c,r)	(*__ctxt_sys_reg(c,r))
>  
> +#if defined (__KVM_NVHE_HYPERVISOR__)
>  #define __vcpu_sys_reg(v,r)	(ctxt_sys_reg(&(v)->arch.ctxt, (r)))
> +#else
> +u64 kvm_vcpu_sanitise_vncr_reg(const struct kvm_vcpu *, enum vcpu_sysreg);
> +#define __vcpu_sys_reg(v,r)						\
> +	(*({								\
> +		const struct kvm_cpu_context *ctxt = &(v)->arch.ctxt;	\
> +		u64 *__r = __ctxt_sys_reg(ctxt, (r));			\
> +		if (unlikely(cpus_have_final_cap(ARM64_HAS_NESTED_VIRT) && \
> +			     r >= __VNCR_START__ && ctxt->vncr_array))	\
> +			*__r = kvm_vcpu_sanitise_vncr_reg((v), (r));	\
> +		__r;							\
> +	}))
> +#endif

Can you not use vcpu_has_nv() here? I see that __ctxt_sys_reg() does a similar
check, but vcpu_has_nv() covers !__KVM_NVHE_HYPERVISOR__, ARM64_HAS_NESTED_VIRT
and KVM_ARM_VCPU_HAS_EL2 (which I guess is what the ctxt->vncr_array check is
doing?). I can see it's defined in kvm_nested.h, which includes kvm_host.h, so
maybe that's an issue.

#define __vcpu_sys_reg(v,r)						\
	(*({								\
		const struct kvm_cpu_context *ctxt = &(v)->arch.ctxt;	\
		u64 *__r = __ctxt_sys_reg(ctxt, (r));			\
		if (unlikely(vcpu_has_nv(v) && r >= __VNCR_START__))	\
			*__r = kvm_vcpu_sanitise_vncr_reg((v), (r));	\
		__r;							\
	}))

And since vcpu_has_nv() already checks __KVM_NVHE_HYPERVISOR__, you don't need
to define __vcpu_sys_reg() twice.

Also maybe move that dereference into the macro, like: *__r;, instead of it
being after the first (.

I'm not sure about the ctxt->vncr_array check, so maybe that's still important.

>  
>  u64 vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg);
>  void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg);
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index a25265aca432..c063e84fc72c 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -206,6 +206,7 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
>  		pkvm_destroy_hyp_vm(kvm);
>  
>  	kfree(kvm->arch.mpidr_data);
> +	kfree(kvm->arch.sysreg_masks);
>  	kvm_destroy_vcpus(kvm);
>  
>  	kvm_unshare_hyp(kvm, kvm + 1);
> diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
> index d55e809e26cb..c976cd4b8379 100644
> --- a/arch/arm64/kvm/nested.c
> +++ b/arch/arm64/kvm/nested.c
> @@ -163,15 +163,54 @@ static u64 limit_nv_id_reg(u32 id, u64 val)
>  
>  	return val;
>  }
> +
> +u64 kvm_vcpu_sanitise_vncr_reg(const struct kvm_vcpu *vcpu, enum vcpu_sysreg sr)
> +{
> +	u64 v = ctxt_sys_reg(&vcpu->arch.ctxt, sr);
> +	struct kvm_sysreg_masks *masks;
> +
> +	masks = vcpu->kvm->arch.sysreg_masks;
> +
> +	if (masks) {
> +		sr -= __VNCR_START__;
> +
> +		v &= ~masks->mask[sr].res0;
> +		v |= masks->mask[sr].res1;
> +	}
> +
> +	return v;
> +}
> +
> +static void __maybe_unused set_sysreg_masks(struct kvm *kvm, int sr, u64 res0, u64 res1)
> +{
> +	int i = sr - __VNCR_START__;
> +
> +	kvm->arch.sysreg_masks->mask[i].res0 = res0;
> +	kvm->arch.sysreg_masks->mask[i].res1 = res1;
> +}
> +
>  int kvm_init_nv_sysregs(struct kvm *kvm)
>  {
> +	int ret = 0;
> +
>  	mutex_lock(&kvm->arch.config_lock);
>  
> +	if (kvm->arch.sysreg_masks)
> +		goto out;
> +
> +	kvm->arch.sysreg_masks = kzalloc(sizeof(*(kvm->arch.sysreg_masks)),
> +					 GFP_KERNEL);
> +	if (!kvm->arch.sysreg_masks) {
> +		ret = -ENOMEM;
> +		goto out;
> +	}
> +
>  	for (int i = 0; i < KVM_ARM_ID_REG_NUM; i++)
>  		kvm->arch.id_regs[i] = limit_nv_id_reg(IDX_IDREG(i),
>  						       kvm->arch.id_regs[i]);
>  
> +out:
>  	mutex_unlock(&kvm->arch.config_lock);
>  
> -	return 0;
> +	return ret;
>  }

Thanks,
Joey


* Re: [PATCH 07/25] KVM: arm64: nv: Drop sanitised_sys_reg() helper
  2024-01-22 20:18   ` Marc Zyngier
@ 2024-01-23 14:01     ` Joey Gouly
  -1 siblings, 0 replies; 114+ messages in thread
From: Joey Gouly @ 2024-01-23 14:01 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
	Mark Brown

Hi,

On Mon, Jan 22, 2024 at 08:18:34PM +0000, Marc Zyngier wrote:
> Now that we have the infrastructure to enforce a sanitised register
> value depending on the VM configuration, drop the helper that only
> used the architectural RES0 value.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>

Reviewed-by: Joey Gouly <joey.gouly@arm.com>

> ---
>  arch/arm64/kvm/emulate-nested.c | 22 +++++++---------------
>  1 file changed, 7 insertions(+), 15 deletions(-)
> 
> diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
> index 431fd429932d..7a4a886adb9d 100644
> --- a/arch/arm64/kvm/emulate-nested.c
> +++ b/arch/arm64/kvm/emulate-nested.c
> @@ -1897,14 +1897,6 @@ static bool check_fgt_bit(u64 val, const union trap_config tc)
>  	return ((val >> tc.bit) & 1) == tc.pol;
>  }
>  
> -#define sanitised_sys_reg(vcpu, reg)			\
> -	({						\
> -		u64 __val;				\
> -		__val = __vcpu_sys_reg(vcpu, reg);	\
> -		__val &= ~__ ## reg ## _RES0;		\
> -		(__val);				\
> -	})
> -
>  bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
>  {
>  	union trap_config tc;
> @@ -1940,25 +1932,25 @@ bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
>  
>  	case HFGxTR_GROUP:
>  		if (is_read)
> -			val = sanitised_sys_reg(vcpu, HFGRTR_EL2);
> +			val = __vcpu_sys_reg(vcpu, HFGRTR_EL2);
>  		else
> -			val = sanitised_sys_reg(vcpu, HFGWTR_EL2);
> +			val = __vcpu_sys_reg(vcpu, HFGWTR_EL2);
>  		break;
>  
>  	case HDFGRTR_GROUP:
>  	case HDFGWTR_GROUP:
>  		if (is_read)
> -			val = sanitised_sys_reg(vcpu, HDFGRTR_EL2);
> +			val = __vcpu_sys_reg(vcpu, HDFGRTR_EL2);
>  		else
> -			val = sanitised_sys_reg(vcpu, HDFGWTR_EL2);
> +			val = __vcpu_sys_reg(vcpu, HDFGWTR_EL2);
>  		break;
>  
>  	case HAFGRTR_GROUP:
> -		val = sanitised_sys_reg(vcpu, HAFGRTR_EL2);
> +		val = __vcpu_sys_reg(vcpu, HAFGRTR_EL2);
>  		break;
>  
>  	case HFGITR_GROUP:
> -		val = sanitised_sys_reg(vcpu, HFGITR_EL2);
> +		val = __vcpu_sys_reg(vcpu, HFGITR_EL2);
>  		switch (tc.fgf) {
>  			u64 tmp;
>  
> @@ -1966,7 +1958,7 @@ bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
>  			break;
>  
>  		case HCRX_FGTnXS:
> -			tmp = sanitised_sys_reg(vcpu, HCRX_EL2);
> +			tmp = __vcpu_sys_reg(vcpu, HCRX_EL2);
>  			if (tmp & HCRX_EL2_FGTnXS)
>  				tc.fgt = __NO_FGT_GROUP__;
>  		}

Thanks,
Joey


* Re: [PATCH 08/25] KVM: arm64: Unify HDFG[WR]TR_GROUP FGT identifiers
  2024-01-22 20:18   ` Marc Zyngier
@ 2024-01-23 14:14     ` Joey Gouly
  -1 siblings, 0 replies; 114+ messages in thread
From: Joey Gouly @ 2024-01-23 14:14 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
	Mark Brown

Hi,

On Mon, Jan 22, 2024 at 08:18:35PM +0000, Marc Zyngier wrote:
> There is no reason to have separate FGT group identifiers for
> the debug fine grain trapping. The sole requirement is to provide
> the *names* so that the SR_FGF() macro can do its magic of picking
> the correct bit definition.
> 
> So let's alias HDFGWTR_GROUP and HDFGRTR_GROUP.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/emulate-nested.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
> index 7a4a886adb9d..8a1cfcf553a2 100644
> --- a/arch/arm64/kvm/emulate-nested.c
> +++ b/arch/arm64/kvm/emulate-nested.c
> @@ -1010,7 +1010,7 @@ enum fgt_group_id {
>  	__NO_FGT_GROUP__,
>  	HFGxTR_GROUP,
>  	HDFGRTR_GROUP,
> -	HDFGWTR_GROUP,
> +	HDFGWTR_GROUP = HDFGRTR_GROUP,
>  	HFGITR_GROUP,
>  	HAFGRTR_GROUP,
>  
> @@ -1938,7 +1938,6 @@ bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
>  		break;
>  
>  	case HDFGRTR_GROUP:
> -	case HDFGWTR_GROUP:
>  		if (is_read)
>  			val = __vcpu_sys_reg(vcpu, HDFGRTR_EL2);
>  		else

I guess you could rename it to HDFGxTR_GROUP like for HFGxTR_GROUP but that
means changing all those tables, so I think it's fine.

Reviewed-by: Joey Gouly <joey.gouly@arm.com>

Thanks,
Joey


* Re: [PATCH 08/25] KVM: arm64: Unify HDFG[WR]TR_GROUP FGT identifiers
  2024-01-23 14:14     ` Joey Gouly
@ 2024-01-23 15:03       ` Marc Zyngier
  -1 siblings, 0 replies; 114+ messages in thread
From: Marc Zyngier @ 2024-01-23 15:03 UTC (permalink / raw)
  To: Joey Gouly
  Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
	Mark Brown

On Tue, 23 Jan 2024 14:14:33 +0000,
Joey Gouly <joey.gouly@arm.com> wrote:
> 
> Hi,
> 
> On Mon, Jan 22, 2024 at 08:18:35PM +0000, Marc Zyngier wrote:
> > There is no reason to have separate FGT group identifiers for
> > the debug fine grain trapping. The sole requirement is to provide
> > the *names* so that the SR_FGF() macro can do its magic of picking
> > the correct bit definition.
> > 
> > So let's alias HDFGWTR_GROUP and HDFGRTR_GROUP.
> > 
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/arm64/kvm/emulate-nested.c | 3 +--
> >  1 file changed, 1 insertion(+), 2 deletions(-)
> > 
> > diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
> > index 7a4a886adb9d..8a1cfcf553a2 100644
> > --- a/arch/arm64/kvm/emulate-nested.c
> > +++ b/arch/arm64/kvm/emulate-nested.c
> > @@ -1010,7 +1010,7 @@ enum fgt_group_id {
> >  	__NO_FGT_GROUP__,
> >  	HFGxTR_GROUP,
> >  	HDFGRTR_GROUP,
> > -	HDFGWTR_GROUP,
> > +	HDFGWTR_GROUP = HDFGRTR_GROUP,
> >  	HFGITR_GROUP,
> >  	HAFGRTR_GROUP,
> >  
> > @@ -1938,7 +1938,6 @@ bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
> >  		break;
> >  
> >  	case HDFGRTR_GROUP:
> > -	case HDFGWTR_GROUP:
> >  		if (is_read)
> >  			val = __vcpu_sys_reg(vcpu, HDFGRTR_EL2);
> >  		else
> 
> I guess you could rename it to HDFGxTR_GROUP like for HFGxTR_GROUP but that
> means changing all those tables, so I think it's fine.

Not just the tables, but also the arch/arm64/tools/sysreg, which
distinguishes between HDFGRTR and HDFGWTR. I'm pretty sure it isn't
worth the hassle...

> Reviewed-by: Joey Gouly <joey.gouly@arm.com>

Thanks!

	M.

-- 
Without deviation from the norm, progress is not possible.



* Re: [PATCH 10/25] KVM: arm64: nv: Turn encoding ranges into discrete XArray stores
  2024-01-22 20:18   ` Marc Zyngier
@ 2024-01-23 16:37     ` Joey Gouly
  -1 siblings, 0 replies; 114+ messages in thread
From: Joey Gouly @ 2024-01-23 16:37 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
	Mark Brown

On Mon, Jan 22, 2024 at 08:18:37PM +0000, Marc Zyngier wrote:
> In order to be able to store different values for members of an
> encoding range, replace xa_store_range() calls with discrete
> xa_store() calls and an encoding iterator.
> 
> > We end up using a bit more memory, but we gain some flexibility
> that we will make use of shortly.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/emulate-nested.c | 31 ++++++++++++++++++++++++-------
>  1 file changed, 24 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
> index ef46c2e45307..59622636b723 100644
> --- a/arch/arm64/kvm/emulate-nested.c
> +++ b/arch/arm64/kvm/emulate-nested.c
> @@ -1757,6 +1757,28 @@ static __init void print_nv_trap_error(const struct encoding_to_trap_config *tc,
>  		err);
>  }
>  
> +static u32 encoding_next(u32 encoding)
> +{
> +	u8 op0, op1, crn, crm, op2;
> +
> +	op0 = sys_reg_Op0(encoding);
> +	op1 = sys_reg_Op1(encoding);
> +	crn = sys_reg_CRn(encoding);
> +	crm = sys_reg_CRm(encoding);
> +	op2 = sys_reg_Op2(encoding);
> +
> +	if (op2 < Op2_mask)
> +		return sys_reg(op0, op1, crn, crm, op2 + 1);
> +	if (crm < CRm_mask)
> +		return sys_reg(op0, op1, crn, crm + 1, 0);
> +	if (crn < CRn_mask)
> +		return sys_reg(op0, op1, crn + 1, 0, 0);
> +	if (op1 < Op1_mask)
> +		return sys_reg(op0, op1 + 1, 0, 0, 0);
> +
> +	return sys_reg(op0 + 1, 0, 0, 0, 0);
> +}

I like this function, aesthetically pleasing!

> +
>  int __init populate_nv_trap_config(void)
>  {
>  	int ret = 0;
> @@ -1775,13 +1797,8 @@ int __init populate_nv_trap_config(void)
>  			ret = -EINVAL;
>  		}
>  
> -		if (cgt->encoding != cgt->end) {
> -			prev = xa_store_range(&sr_forward_xa,
> -					      cgt->encoding, cgt->end,
> -					      xa_mk_value(cgt->tc.val),
> -					      GFP_KERNEL);
> -		} else {
> -			prev = xa_store(&sr_forward_xa, cgt->encoding,
> +		for (u32 enc = cgt->encoding; enc <= cgt->end; enc = encoding_next(enc)) {
> +			prev = xa_store(&sr_forward_xa, enc,
>  					xa_mk_value(cgt->tc.val), GFP_KERNEL);
>  			if (prev && !xa_is_err(prev)) {
>  				ret = -EINVAL;

The error handling looks a bit weird here, the full context:

                for (u32 enc = cgt->encoding; enc <= cgt->end; enc = encoding_next(enc)) {
                        prev = xa_store(&sr_forward_xa, enc,
                                        xa_mk_value(cgt->tc.val), GFP_KERNEL);
                        if (prev && !xa_is_err(prev)) {
                                ret = -EINVAL;
                                print_nv_trap_error(cgt, "Duplicate CGT", ret);
                        }
                }

                if (xa_is_err(prev)) {
                        ret = xa_err(prev);
                        print_nv_trap_error(cgt, "Failed CGT insertion", ret);
                }

I would maybe expect some 'goto's after setting ret? It looks like the ret
would still be returned properly at the end of the function at least.  We also
don't check the return value of xa_store() in the encoding_to_fgt loop further
down, which seems worse as that could affect VMs if some encodings failed to be
stored for some reason.

Thanks,
Joey

^ permalink raw reply	[flat|nested] 114+ messages in thread

* Re: [PATCH 03/25] KVM: arm64: nv: Add sanitising to VNCR-backed sysregs
  2024-01-23 13:48     ` Joey Gouly
@ 2024-01-23 17:33       ` Marc Zyngier
  -1 siblings, 0 replies; 114+ messages in thread
From: Marc Zyngier @ 2024-01-23 17:33 UTC (permalink / raw)
  To: Joey Gouly
  Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
	Mark Brown

On Tue, 23 Jan 2024 13:48:57 +0000,
Joey Gouly <joey.gouly@arm.com> wrote:
> 
> Hi,
> 
> On Mon, Jan 22, 2024 at 08:18:30PM +0000, Marc Zyngier wrote:
> > VNCR-backed "registers" are actually only memory. Which means that
> > there is zero control over what the guest can write, and that it
> > is the hypervisor's job to actually sanitise the content of the
> > backing store. Yeah, this is fun.
> > 
> > In order to preserve some form of sanity, add a repainting mechanism
> > that makes use of a per-VM set of RES0/RES1 masks, one pair per VNCR
> > register. These masks get applied on access to the backing store via
> > __vcpu_sys_reg(), ensuring that the state that is consumed by KVM is
> > correct.
> > 
> > So far, nothing populates these masks, but stay tuned.
> > 
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/arm64/include/asm/kvm_host.h | 25 +++++++++++++++++++
> >  arch/arm64/kvm/arm.c              |  1 +
> >  arch/arm64/kvm/nested.c           | 41 ++++++++++++++++++++++++++++++-
> >  3 files changed, 66 insertions(+), 1 deletion(-)
> > 
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index c0cf9c5f5e8d..fe35c59214ad 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -238,6 +238,8 @@ static inline u16 kvm_mpidr_index(struct kvm_mpidr_data *data, u64 mpidr)
> >  	return index;
> >  }
> >  
> > +struct kvm_sysreg_masks;
> > +
> >  struct kvm_arch {
> >  	struct kvm_s2_mmu mmu;
> >  
> > @@ -312,6 +314,9 @@ struct kvm_arch {
> >  #define KVM_ARM_ID_REG_NUM	(IDREG_IDX(sys_reg(3, 0, 0, 7, 7)) + 1)
> >  	u64 id_regs[KVM_ARM_ID_REG_NUM];
> >  
> > +	/* Masks for VNCR-backed sysregs */
> > +	struct kvm_sysreg_masks	*sysreg_masks;
> > +
> >  	/*
> >  	 * For an untrusted host VM, 'pkvm.handle' is used to lookup
> >  	 * the associated pKVM instance in the hypervisor.
> > @@ -474,6 +479,13 @@ enum vcpu_sysreg {
> >  	NR_SYS_REGS	/* Nothing after this line! */
> >  };
> >  
> > +struct kvm_sysreg_masks {
> > +	struct {
> > +		u64	res0;
> > +		u64	res1;
> > +	} mask[NR_SYS_REGS - __VNCR_START__];
> > +};
> > +
> >  struct kvm_cpu_context {
> >  	struct user_pt_regs regs;	/* sp = sp_el0 */
> >  
> > @@ -868,7 +880,20 @@ static inline u64 *__ctxt_sys_reg(const struct kvm_cpu_context *ctxt, int r)
> >  
> >  #define ctxt_sys_reg(c,r)	(*__ctxt_sys_reg(c,r))
> >  
> > +#if defined (__KVM_NVHE_HYPERVISOR__)
> >  #define __vcpu_sys_reg(v,r)	(ctxt_sys_reg(&(v)->arch.ctxt, (r)))
> > +#else
> > +u64 kvm_vcpu_sanitise_vncr_reg(const struct kvm_vcpu *, enum vcpu_sysreg);
> > +#define __vcpu_sys_reg(v,r)						\
> > +	(*({								\
> > +		const struct kvm_cpu_context *ctxt = &(v)->arch.ctxt;	\
> > +		u64 *__r = __ctxt_sys_reg(ctxt, (r));			\
> > +		if (unlikely(cpus_have_final_cap(ARM64_HAS_NESTED_VIRT) && \
> > +			     r >= __VNCR_START__ && ctxt->vncr_array))	\
> > +			*__r = kvm_vcpu_sanitise_vncr_reg((v), (r));	\
> > +		__r;							\
> > +	}))
> > +#endif
> 
> Can you not use vcpu_has_nv() here? I see that __ctxt_sys_reg() does a similar
> check, but vcpu_has_nv() covers !__KVM_NVHE_HYPERVISOR__, ARM64_HAS_NESTED_VIRT
> and KVM_ARM_VCPU_HAS_EL2 (which I guess is what the ctxt->vncr_array check is
> doing?) I can see it's defined in kvm_nested.h, which includes kvm_host.h, so
> maybe that's an issue.
> 
> #define __vcpu_sys_reg(v,r)						\
> 	(*({								\
> 		const struct kvm_cpu_context *ctxt = &(v)->arch.ctxt;	\
> 		u64 *__r = __ctxt_sys_reg(ctxt, (r));			\
> 		if (unlikely(vcpu_has_nv(v) && r >= __VNCR_START__))	\
> 			*__r = kvm_vcpu_sanitise_vncr_reg((v), (r));	\
> 		__r;							\
> 	}))
> 
> And since vcpu_has_nv() already checks __KVM_NVHE_HYPERVISOR__, you don't need
> to define __vcpu_sys_reg() twice.

All good points. Now that we only cater for NV2, vncr_array not being
NULL is a given, although we still need it in __ctxt_sys_reg() as we
don't have the full-fat vcpu at this stage (and thus cannot check for
flags).

>
> Also maybe move that dereference into the macro, like: *__r;, instead of being
> after the first (.

Surprisingly, this doesn't work:

<quote>
./arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h:240:38: error: lvalue required as left operand of assignment
240 |   __vcpu_sys_reg(vcpu, DBGVCR32_EL2) = read_sysreg(dbgvcr32_el2);

</quote>

There are plenty more.

> I'm not sure about the ctxt->vncr_array check, so maybe that's still
> important.

In the absence of the flag, it is. And I'm actually tempted to
standardise on checking for vncr_array in vcpu_has_nv() as a
substitute for the flag. It is likely to be a bit cheaper, and the
value is likely to be needed down the line anyway.

I'll rework this shortly.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 114+ messages in thread

* Re: [PATCH 08/25] KVM: arm64: Unify HDFG[WR]TR_GROUP FGT identifiers
  2024-01-23 15:03       ` Marc Zyngier
@ 2024-01-23 17:42         ` Mark Brown
  -1 siblings, 0 replies; 114+ messages in thread
From: Mark Brown @ 2024-01-23 17:42 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Joey Gouly, kvmarm, linux-arm-kernel, James Morse,
	Suzuki K Poulose, Oliver Upton, Zenghui Yu, Catalin Marinas,
	Will Deacon

[-- Attachment #1: Type: text/plain, Size: 845 bytes --]

On Tue, Jan 23, 2024 at 03:03:03PM +0000, Marc Zyngier wrote:
> Joey Gouly <joey.gouly@arm.com> wrote:

> > >  	case HDFGRTR_GROUP:
> > > -	case HDFGWTR_GROUP:
> > >  		if (is_read)
> > >  			val = __vcpu_sys_reg(vcpu, HDFGRTR_EL2);
> > >  		else

> > I guess you could rename it to HDFGxTR_GROUP like for HFGxTR_GROUP but that
> > means changing all those tables, so I think it's fine.

> Not just the tables, but also the arch/arm64/tools/sysreg, which
> distinguishes between HDFGRTR and HDFGWTR. I'm pretty sure it isn't
> worth the hassle...

Quickly checking the current definitions in the kernel, there are
differences between the traps (e.g. HDFGRTR_EL2 has bit 63 PMBIDR_EL1,
but that bit is RES0 in the write register, and there are others). Some
of those are simply missing write traps for read-only registers, but
for some it's not so immediately obvious.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 114+ messages in thread

* Re: [PATCH 10/25] KVM: arm64: nv: Turn encoding ranges into discrete XArray stores
  2024-01-23 16:37     ` Joey Gouly
@ 2024-01-23 17:45       ` Marc Zyngier
  -1 siblings, 0 replies; 114+ messages in thread
From: Marc Zyngier @ 2024-01-23 17:45 UTC (permalink / raw)
  To: Joey Gouly
  Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
	Mark Brown

On Tue, 23 Jan 2024 16:37:25 +0000,
Joey Gouly <joey.gouly@arm.com> wrote:
> 
> On Mon, Jan 22, 2024 at 08:18:37PM +0000, Marc Zyngier wrote:
> > In order to be able to store different values for members of an
> > encoding range, replace xa_store_range() calls with discrete
> > xa_store() calls and an encoding iterator.
> > 
> > We end up using a bit more memory, but we gain some flexibility
> > that we will make use of shortly.
> > 
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/arm64/kvm/emulate-nested.c | 31 ++++++++++++++++++++++++-------
> >  1 file changed, 24 insertions(+), 7 deletions(-)
> > 
> > diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
> > index ef46c2e45307..59622636b723 100644
> > --- a/arch/arm64/kvm/emulate-nested.c
> > +++ b/arch/arm64/kvm/emulate-nested.c
> > @@ -1757,6 +1757,28 @@ static __init void print_nv_trap_error(const struct encoding_to_trap_config *tc,
> >  		err);
> >  }
> >  
> > +static u32 encoding_next(u32 encoding)
> > +{
> > +	u8 op0, op1, crn, crm, op2;
> > +
> > +	op0 = sys_reg_Op0(encoding);
> > +	op1 = sys_reg_Op1(encoding);
> > +	crn = sys_reg_CRn(encoding);
> > +	crm = sys_reg_CRm(encoding);
> > +	op2 = sys_reg_Op2(encoding);
> > +
> > +	if (op2 < Op2_mask)
> > +		return sys_reg(op0, op1, crn, crm, op2 + 1);
> > +	if (crm < CRm_mask)
> > +		return sys_reg(op0, op1, crn, crm + 1, 0);
> > +	if (crn < CRn_mask)
> > +		return sys_reg(op0, op1, crn + 1, 0, 0);
> > +	if (op1 < Op1_mask)
> > +		return sys_reg(op0, op1 + 1, 0, 0, 0);
> > +
> > +	return sys_reg(op0 + 1, 0, 0, 0, 0);
> > +}
> 
> I like this function, aesthetically pleasing!

Glad you like the colour! :D

> 
> > +
> >  int __init populate_nv_trap_config(void)
> >  {
> >  	int ret = 0;
> > @@ -1775,13 +1797,8 @@ int __init populate_nv_trap_config(void)
> >  			ret = -EINVAL;
> >  		}
> >  
> > -		if (cgt->encoding != cgt->end) {
> > -			prev = xa_store_range(&sr_forward_xa,
> > -					      cgt->encoding, cgt->end,
> > -					      xa_mk_value(cgt->tc.val),
> > -					      GFP_KERNEL);
> > -		} else {
> > -			prev = xa_store(&sr_forward_xa, cgt->encoding,
> > +		for (u32 enc = cgt->encoding; enc <= cgt->end; enc = encoding_next(enc)) {
> > +			prev = xa_store(&sr_forward_xa, enc,
> >  					xa_mk_value(cgt->tc.val), GFP_KERNEL);
> >  			if (prev && !xa_is_err(prev)) {
> >  				ret = -EINVAL;
> 
> The error handling looks a bit weird here, the full context:
> 
>                 for (u32 enc = cgt->encoding; enc <= cgt->end; enc = encoding_next(enc)) {
>                         prev = xa_store(&sr_forward_xa, enc,
>                                         xa_mk_value(cgt->tc.val), GFP_KERNEL);
>                         if (prev && !xa_is_err(prev)) {
>                                 ret = -EINVAL;
>                                 print_nv_trap_error(cgt, "Duplicate CGT", ret);
>                         }
>                 }
>
>                 if (xa_is_err(prev)) {
>                         ret = xa_err(prev);
>                         print_nv_trap_error(cgt, "Failed CGT insertion", ret);
>                 }  
> 
> I would maybe expect some 'goto's after setting ret? It looks like the ret
> would still be returned properly at the end of the function at least.  We also
> don't check the return value of xa_store() in the encoding_to_fgt loop further
> down, which seems worse as that could affect VMs if some encodings failed to be
> stored for some reason.

The lack of goto is on purpose. Getting the tables right is tedious
when you stop on the first error instead of collecting them all in
one go. Which is why I opted for this scheme, where 'ret' only gets
written on error.

However, the error handling is pretty lax indeed. I have this in
store, on top of the current patch:

diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index 5c0f81b6e55c..f2cf0fbf27eb 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -1795,11 +1795,11 @@ int __init populate_nv_trap_config(void)
 				ret = -EINVAL;
 				print_nv_trap_error(cgt, "Duplicate CGT", ret);
 			}
-		}
 
-		if (xa_is_err(prev)) {
-			ret = xa_err(prev);
-			print_nv_trap_error(cgt, "Failed CGT insertion", ret);
+			if (xa_is_err(prev)) {
+				ret = xa_err(prev);
+				print_nv_trap_error(cgt, "Failed CGT insertion", ret);
+			}
 		}
 	}
 
@@ -1812,6 +1812,7 @@ int __init populate_nv_trap_config(void)
 	for (int i = 0; i < ARRAY_SIZE(encoding_to_fgt); i++) {
 		const struct encoding_to_trap_config *fgt = &encoding_to_fgt[i];
 		union trap_config tc;
+		void *prev;
 
 		if (fgt->tc.fgt >= __NR_FGT_GROUP_IDS__) {
 			ret = -EINVAL;
@@ -1826,8 +1827,13 @@ int __init populate_nv_trap_config(void)
 		}
 
 		tc.val |= fgt->tc.val;
-		xa_store(&sr_forward_xa, fgt->encoding,
-			 xa_mk_value(tc.val), GFP_KERNEL);
+		prev = xa_store(&sr_forward_xa, fgt->encoding,
+				xa_mk_value(tc.val), GFP_KERNEL);
+
+		if (xa_is_err(prev)) {
+			ret = xa_err(prev);
+			print_nv_trap_error(fgt, "Failed FGT insertion", ret);
+		}
 	}
 
 	kvm_info("nv: %ld fine grained trap handlers\n",

Completely untested, of course.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply related	[flat|nested] 114+ messages in thread

* Re: [PATCH 10/25] KVM: arm64: nv: Turn encoding ranges into discrete XArray stores
@ 2024-01-23 17:45       ` Marc Zyngier
  0 siblings, 0 replies; 114+ messages in thread
From: Marc Zyngier @ 2024-01-23 17:45 UTC (permalink / raw)
  To: Joey Gouly
  Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
	Mark Brown

On Tue, 23 Jan 2024 16:37:25 +0000,
Joey Gouly <joey.gouly@arm.com> wrote:
> 
> On Mon, Jan 22, 2024 at 08:18:37PM +0000, Marc Zyngier wrote:
> > In order to be able to store different values for member of an
> > encoding range, replace xa_store_range() calls with discrete
> > xa_store() calls and an encoding iterator.
> > 
> > We end-up using a bit more memory, but we gain some flexibility
> > that we will make use of shortly.
> > 
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/arm64/kvm/emulate-nested.c | 31 ++++++++++++++++++++++++-------
> >  1 file changed, 24 insertions(+), 7 deletions(-)
> > 
> > diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
> > index ef46c2e45307..59622636b723 100644
> > --- a/arch/arm64/kvm/emulate-nested.c
> > +++ b/arch/arm64/kvm/emulate-nested.c
> > @@ -1757,6 +1757,28 @@ static __init void print_nv_trap_error(const struct encoding_to_trap_config *tc,
> >  		err);
> >  }
> >  
> > +static u32 encoding_next(u32 encoding)
> > +{
> > +	u8 op0, op1, crn, crm, op2;
> > +
> > +	op0 = sys_reg_Op0(encoding);
> > +	op1 = sys_reg_Op1(encoding);
> > +	crn = sys_reg_CRn(encoding);
> > +	crm = sys_reg_CRm(encoding);
> > +	op2 = sys_reg_Op2(encoding);
> > +
> > +	if (op2 < Op2_mask)
> > +		return sys_reg(op0, op1, crn, crm, op2 + 1);
> > +	if (crm < CRm_mask)
> > +		return sys_reg(op0, op1, crn, crm + 1, 0);
> > +	if (crn < CRn_mask)
> > +		return sys_reg(op0, op1, crn + 1, 0, 0);
> > +	if (op1 < Op1_mask)
> > +		return sys_reg(op0, op1 + 1, 0, 0, 0);
> > +
> > +	return sys_reg(op0 + 1, 0, 0, 0, 0);
> > +}
> 
> I like this function, aesthetically pleasing!

Glad you like the colour! :D

> 
> > +
> >  int __init populate_nv_trap_config(void)
> >  {
> >  	int ret = 0;
> > @@ -1775,13 +1797,8 @@ int __init populate_nv_trap_config(void)
> >  			ret = -EINVAL;
> >  		}
> >  
> > -		if (cgt->encoding != cgt->end) {
> > -			prev = xa_store_range(&sr_forward_xa,
> > -					      cgt->encoding, cgt->end,
> > -					      xa_mk_value(cgt->tc.val),
> > -					      GFP_KERNEL);
> > -		} else {
> > -			prev = xa_store(&sr_forward_xa, cgt->encoding,
> > +		for (u32 enc = cgt->encoding; enc <= cgt->end; enc = encoding_next(enc)) {
> > +			prev = xa_store(&sr_forward_xa, enc,
> >  					xa_mk_value(cgt->tc.val), GFP_KERNEL);
> >  			if (prev && !xa_is_err(prev)) {
> >  				ret = -EINVAL;
> 
> The error handling looks a bit weird here, the full context:
> 
>                 for (u32 enc = cgt->encoding; enc <= cgt->end; enc = encoding_next(enc)) {
>                         prev = xa_store(&sr_forward_xa, enc,
>                                         xa_mk_value(cgt->tc.val), GFP_KERNEL);
>                         if (prev && !xa_is_err(prev)) {
>                                 ret = -EINVAL;
>                                 print_nv_trap_error(cgt, "Duplicate CGT", ret);
>                         }
>                 }
>
>                 if (xa_is_err(prev)) {
>                         ret = xa_err(prev);
>                         print_nv_trap_error(cgt, "Failed CGT insertion", ret);
>                 }  
> 
> I would maybe expect some 'goto's after setting ret? It looks like the ret
> would still be returned properly at the end of the function at least.  We also
> don't check the return value of xa_store() in the encoding_to_fgt loop further
> down, which seems worse as that could affect VMs if some encodings failed to be
> stored for some reason.

The lack of goto is on purpose. Getting the tables right is tedious
when you can't collect multiple errors at once and have to stop at the
first one. Which is why I opted for this scheme, where 'ret' only gets
written on error.
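The scheme in question boils down to the following, sketched with a made-up
table and validity rule — only the control flow mirrors
populate_nv_trap_config():

```c
#include <assert.h>
#include <stddef.h>

/*
 * Collect every error in one pass instead of bailing out at the first:
 * 'ret' is only written on failure, so a fully valid table returns 0
 * and a broken one returns the last error seen, with every offender
 * reported. The table and the "negative entries are invalid" rule are
 * invented for the sake of the example.
 */
static int validate_table(const int *table, size_t n, size_t *nr_errors)
{
	int ret = 0;

	for (size_t i = 0; i < n; i++) {
		if (table[i] < 0) {
			ret = -22;	/* -EINVAL; record and keep scanning */
			(*nr_errors)++;
		}
	}

	return ret;
}
```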

However, the error handling is pretty lax indeed. I have this in
store, on top of the current patch:

diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index 5c0f81b6e55c..f2cf0fbf27eb 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -1795,11 +1795,11 @@ int __init populate_nv_trap_config(void)
 				ret = -EINVAL;
 				print_nv_trap_error(cgt, "Duplicate CGT", ret);
 			}
-		}
 
-		if (xa_is_err(prev)) {
-			ret = xa_err(prev);
-			print_nv_trap_error(cgt, "Failed CGT insertion", ret);
+			if (xa_is_err(prev)) {
+				ret = xa_err(prev);
+				print_nv_trap_error(cgt, "Failed CGT insertion", ret);
+			}
 		}
 	}
 
@@ -1812,6 +1812,7 @@ int __init populate_nv_trap_config(void)
 	for (int i = 0; i < ARRAY_SIZE(encoding_to_fgt); i++) {
 		const struct encoding_to_trap_config *fgt = &encoding_to_fgt[i];
 		union trap_config tc;
+		void *prev;
 
 		if (fgt->tc.fgt >= __NR_FGT_GROUP_IDS__) {
 			ret = -EINVAL;
@@ -1826,8 +1827,13 @@ int __init populate_nv_trap_config(void)
 		}
 
 		tc.val |= fgt->tc.val;
-		xa_store(&sr_forward_xa, fgt->encoding,
-			 xa_mk_value(tc.val), GFP_KERNEL);
+		prev = xa_store(&sr_forward_xa, fgt->encoding,
+				xa_mk_value(tc.val), GFP_KERNEL);
+
+		if (xa_is_err(prev)) {
+			ret = xa_err(prev);
+			print_nv_trap_error(fgt, "Failed FGT insertion", ret);
+		}
 	}
 
 	kvm_info("nv: %ld fine grained trap handlers\n",

Completely untested, of course.
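For readers not familiar with the xa_is_err()/xa_err() pair used above:
xa_store() returns either the previous entry or an error encoded as a
tagged "internal" pointer. A simplified stand-alone sketch of that
convention — the real helpers live in include/linux/xarray.h and are
more careful about the tagging:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Simplified model of the XArray error-entry convention: a negative
 * errno is shifted up and tagged in the low bits so it can never be
 * confused with a real (aligned) pointer or with NULL. Sketch of the
 * idea only; xa_mk_internal()/xa_is_err()/xa_err() differ in detail.
 */
static void *mk_err_entry(long err)
{
	return (void *)(((uintptr_t)err << 2) | 2);
}

static int is_err_entry(const void *entry)
{
	return ((uintptr_t)entry & 3) == 2;
}

static long err_from_entry(const void *entry)
{
	/* relies on arithmetic right shift, as the kernel does */
	return (long)((intptr_t)entry >> 2);
}
```

Which is why a non-NULL, non-error 'prev' means "duplicate entry", while an
error entry carries the errno from the failed store.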

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 114+ messages in thread

* Re: [PATCH 22/25] KVM: arm64: Make PIR{,E0}_EL1 UNDEF if S1PIE is not advertised to the guest
  2024-01-23 11:48     ` Joey Gouly
@ 2024-01-23 17:51       ` Marc Zyngier
  -1 siblings, 0 replies; 114+ messages in thread
From: Marc Zyngier @ 2024-01-23 17:51 UTC (permalink / raw)
  To: Joey Gouly
  Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
	Mark Brown

On Tue, 23 Jan 2024 11:48:10 +0000,
Joey Gouly <joey.gouly@arm.com> wrote:
> 
> Hello,
> 
> On Mon, Jan 22, 2024 at 08:18:49PM +0000, Marc Zyngier wrote:
> > As part of the ongoing effort to honor the guest configuration,
> > add the necessary checks to make PIR_EL1 and co UNDEF if not
> > advertised to the guest, and avoid context switching them.
> > 
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 15 ++++++++++++++-
> >  arch/arm64/kvm/sys_regs.c                  |  4 ++++
> >  2 files changed, 18 insertions(+), 1 deletion(-)
> > 
> > diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
> > index bb6b571ec627..b34743292ca7 100644
> > --- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
> > +++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
> > @@ -37,6 +37,19 @@ static inline bool ctxt_has_mte(struct kvm_cpu_context *ctxt)
> >  	return kvm_has_mte(kern_hyp_va(vcpu->kvm));
> >  }
> >  
> > +static inline bool ctxt_has_s1pie(struct kvm_cpu_context *ctxt)
> > +{
> > +	struct kvm_vcpu *vcpu = ctxt->__hyp_running_vcpu;
> > +
> > +	if (!cpus_have_final_cap(ARM64_HAS_S1PIE))
> > +		return false;
> > +
> > +	if (!vcpu)
> > +		vcpu = container_of(ctxt, struct kvm_vcpu, arch.ctxt);
> > +
> > +	return kvm_has_feat(kern_hyp_va(vcpu->kvm), ID_AA64MMFR3_EL1, S1PIE, IMP);
> > +}
> > +
> >  static inline void __sysreg_save_el1_state(struct kvm_cpu_context *ctxt)
> >  {
> >  	ctxt_sys_reg(ctxt, SCTLR_EL1)	= read_sysreg_el1(SYS_SCTLR);
> > @@ -55,7 +68,7 @@ static inline void __sysreg_save_el1_state(struct kvm_cpu_context *ctxt)
> >  	ctxt_sys_reg(ctxt, CONTEXTIDR_EL1) = read_sysreg_el1(SYS_CONTEXTIDR);
> >  	ctxt_sys_reg(ctxt, AMAIR_EL1)	= read_sysreg_el1(SYS_AMAIR);
> >  	ctxt_sys_reg(ctxt, CNTKCTL_EL1)	= read_sysreg_el1(SYS_CNTKCTL);
> > -	if (cpus_have_final_cap(ARM64_HAS_S1PIE)) {
> > +	if (ctxt_has_s1pie(ctxt)) {
> >  		ctxt_sys_reg(ctxt, PIR_EL1)	= read_sysreg_el1(SYS_PIR);
> >  		ctxt_sys_reg(ctxt, PIRE0_EL1)	= read_sysreg_el1(SYS_PIRE0);
> >  	}
> 
> Missing the corresponding change in __sysreg_restore_el1_state().

Gah. Thanks for spotting it. I'm pretty sure I had it at some point,
and somehow lost it. Probably on another machine somewhere...

Cheers,

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 114+ messages in thread

* Re: [PATCH 18/25] KVM: arm64: Propagate and handle Fine-Grained UNDEF bits
  2024-01-22 20:18   ` Marc Zyngier
@ 2024-01-24 15:53     ` Joey Gouly
  -1 siblings, 0 replies; 114+ messages in thread
From: Joey Gouly @ 2024-01-24 15:53 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
	Mark Brown

Hello,

On Mon, Jan 22, 2024 at 08:18:45PM +0000, Marc Zyngier wrote:
> In order to correctly honor our FGU bits, they must be converted
> into a set of FGT bits. They get merged as part of the existing
> FGT setting.
> 
> Similarly, the UNDEF injection phase takes place when handling
> the trap.
> 
> This results in a bit of rework in the FGT macros in order to
> help with the code generation, as burying per-CPU accesses in
> macros results in a lot of expansion, not to mention the vcpu->kvm
> access on nvhe (kern_hyp_va() is not optimisation-friendly).
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>

Reviewed-by: Joey Gouly <joey.gouly@arm.com>

Thanks,
Joey

> ---
>  arch/arm64/kvm/emulate-nested.c         | 11 ++++
>  arch/arm64/kvm/hyp/include/hyp/switch.h | 81 +++++++++++++++++++------
>  2 files changed, 72 insertions(+), 20 deletions(-)
> 
> diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
> index 539b3913628d..f64d1809fe79 100644
> --- a/arch/arm64/kvm/emulate-nested.c
> +++ b/arch/arm64/kvm/emulate-nested.c
> @@ -2011,6 +2011,17 @@ bool triage_sysreg_trap(struct kvm_vcpu *vcpu, int *sr_index)
>  	if (!tc.val)
>  		goto local;
>  
> +	/*
> +	 * If a sysreg can be trapped using a FGT, first check whether we
> +	 * trap for the purpose of forbidding the feature. In that case,
> +	 * inject an UNDEF.
> +	 */
> +	if (tc.fgt != __NO_FGT_GROUP__ &&
> +	    (vcpu->kvm->arch.fgu[tc.fgt] & BIT(tc.bit))) {
> +		kvm_inject_undefined(vcpu);
> +		return true;
> +	}
> +
>  	/*
>  	 * If we're not nesting, immediately return to the caller, with the
>  	 * sysreg index, should we have it.
> diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> index a038320cdb08..a09149fd91ed 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> @@ -79,14 +79,48 @@ static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu)
>  		clr |= ~hfg & __ ## reg ## _nMASK; 			\
>  	} while(0)
>  
> -#define update_fgt_traps_cs(vcpu, reg, clr, set)			\
> +#define reg_to_fgt_group_id(reg)					\
> +	({								\
> +		enum fgt_group_id id;					\
> +		switch(reg) {						\
> +		case HFGRTR_EL2:					\
> +		case HFGWTR_EL2:					\
> +			id = HFGxTR_GROUP;				\
> +			break;						\
> +		case HFGITR_EL2:					\
> +			id = HFGITR_GROUP;				\
> +			break;						\
> +		case HDFGRTR_EL2:					\
> +		case HDFGWTR_EL2:					\
> +			id = HDFGRTR_GROUP;				\
> +			break;						\
> +		case HAFGRTR_EL2:					\
> +			id = HAFGRTR_GROUP;				\
> +			break;						\
> +		default:						\
> +			BUILD_BUG_ON(1);				\
> +		}							\
> +									\
> +		id;							\
> +	})
> +
> +#define compute_undef_clr_set(vcpu, kvm, reg, clr, set)			\
> +	do {								\
> +		u64 hfg = kvm->arch.fgu[reg_to_fgt_group_id(reg)];	\
> +		set |= hfg & __ ## reg ## _MASK;			\
> +		clr |= hfg & __ ## reg ## _nMASK; 			\
> +	} while(0)
> +
> +#define update_fgt_traps_cs(hctxt, vcpu, kvm, reg, clr, set)		\
>  	do {								\
> -		struct kvm_cpu_context *hctxt =				\
> -			&this_cpu_ptr(&kvm_host_data)->host_ctxt;	\
>  		u64 c = 0, s = 0;					\
>  									\
>  		ctxt_sys_reg(hctxt, reg) = read_sysreg_s(SYS_ ## reg);	\
> -		compute_clr_set(vcpu, reg, c, s);			\
> +		if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu))		\
> +			compute_clr_set(vcpu, reg, c, s);		\
> +									\
> +		compute_undef_clr_set(vcpu, kvm, reg, c, s);		\
> +									\
>  		s |= set;						\
>  		c |= clr;						\
>  		if (c || s) {						\
> @@ -97,8 +131,8 @@ static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu)
>  		}							\
>  	} while(0)
>  
> -#define update_fgt_traps(vcpu, reg)		\
> -	update_fgt_traps_cs(vcpu, reg, 0, 0)
> +#define update_fgt_traps(hctxt, vcpu, kvm, reg)		\
> +	update_fgt_traps_cs(hctxt, vcpu, kvm, reg, 0, 0)
>  
>  /*
>   * Validate the fine grain trap masks.
> @@ -122,6 +156,7 @@ static inline bool cpu_has_amu(void)
>  static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
>  {
>  	struct kvm_cpu_context *hctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
> +	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
>  	u64 r_clr = 0, w_clr = 0, r_set = 0, w_set = 0, tmp;
>  	u64 r_val, w_val;
>  
> @@ -157,6 +192,9 @@ static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
>  		compute_clr_set(vcpu, HFGWTR_EL2, w_clr, w_set);
>  	}
>  
> +	compute_undef_clr_set(vcpu, kvm, HFGRTR_EL2, r_clr, r_set);
> +	compute_undef_clr_set(vcpu, kvm, HFGWTR_EL2, w_clr, w_set);
> +
>  	/* The default to trap everything not handled or supported in KVM. */
>  	tmp = HFGxTR_EL2_nAMAIR2_EL1 | HFGxTR_EL2_nMAIR2_EL1 | HFGxTR_EL2_nS2POR_EL1 |
>  	      HFGxTR_EL2_nPOR_EL1 | HFGxTR_EL2_nPOR_EL0 | HFGxTR_EL2_nACCDATA_EL1;
> @@ -172,20 +210,26 @@ static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
>  	write_sysreg_s(r_val, SYS_HFGRTR_EL2);
>  	write_sysreg_s(w_val, SYS_HFGWTR_EL2);
>  
> -	if (!vcpu_has_nv(vcpu) || is_hyp_ctxt(vcpu))
> -		return;
> -
> -	update_fgt_traps(vcpu, HFGITR_EL2);
> -	update_fgt_traps(vcpu, HDFGRTR_EL2);
> -	update_fgt_traps(vcpu, HDFGWTR_EL2);
> +	update_fgt_traps(hctxt, vcpu, kvm, HFGITR_EL2);
> +	update_fgt_traps(hctxt, vcpu, kvm, HDFGRTR_EL2);
> +	update_fgt_traps(hctxt, vcpu, kvm, HDFGWTR_EL2);
>  
>  	if (cpu_has_amu())
> -		update_fgt_traps(vcpu, HAFGRTR_EL2);
> +		update_fgt_traps(hctxt, vcpu, kvm, HAFGRTR_EL2);
>  }
>  
> +#define __deactivate_fgt(htcxt, vcpu, kvm, reg)				\
> +	do {								\
> +		if ((vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)) ||	\
> +		    kvm->arch.fgu[reg_to_fgt_group_id(reg)])		\
> +			write_sysreg_s(ctxt_sys_reg(hctxt, reg),	\
> +				       SYS_ ## reg);			\
> +	} while(0)
> +
>  static inline void __deactivate_traps_hfgxtr(struct kvm_vcpu *vcpu)
>  {
>  	struct kvm_cpu_context *hctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
> +	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
>  
>  	if (!cpus_have_final_cap(ARM64_HAS_FGT))
>  		return;
> @@ -193,15 +237,12 @@ static inline void __deactivate_traps_hfgxtr(struct kvm_vcpu *vcpu)
>  	write_sysreg_s(ctxt_sys_reg(hctxt, HFGRTR_EL2), SYS_HFGRTR_EL2);
>  	write_sysreg_s(ctxt_sys_reg(hctxt, HFGWTR_EL2), SYS_HFGWTR_EL2);
>  
> -	if (!vcpu_has_nv(vcpu) || is_hyp_ctxt(vcpu))
> -		return;
> -
> -	write_sysreg_s(ctxt_sys_reg(hctxt, HFGITR_EL2), SYS_HFGITR_EL2);
> -	write_sysreg_s(ctxt_sys_reg(hctxt, HDFGRTR_EL2), SYS_HDFGRTR_EL2);
> -	write_sysreg_s(ctxt_sys_reg(hctxt, HDFGWTR_EL2), SYS_HDFGWTR_EL2);
> +	__deactivate_fgt(hctxt, vcpu, kvm, HFGITR_EL2);
> +	__deactivate_fgt(hctxt, vcpu, kvm, HDFGRTR_EL2);
> +	__deactivate_fgt(hctxt, vcpu, kvm, HDFGWTR_EL2);
>  
>  	if (cpu_has_amu())
> -		write_sysreg_s(ctxt_sys_reg(hctxt, HAFGRTR_EL2), SYS_HAFGRTR_EL2);
> +		__deactivate_fgt(hctxt, vcpu, kvm, HAFGRTR_EL2);
>  }
>  
>  static inline void __activate_traps_common(struct kvm_vcpu *vcpu)


^ permalink raw reply	[flat|nested] 114+ messages in thread

* Re: [PATCH 11/25] KVM: arm64: Drop the requirement for XARRAY_MULTI
  2024-01-22 20:18   ` Marc Zyngier
@ 2024-01-24 15:57     ` Joey Gouly
  -1 siblings, 0 replies; 114+ messages in thread
From: Joey Gouly @ 2024-01-24 15:57 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
	Mark Brown

On Mon, Jan 22, 2024 at 08:18:38PM +0000, Marc Zyngier wrote:
> Now that we don't use xa_store_range() anymore, drop the added
> complexity of XARRAY_MULTI for KVM. It is likely still pulled
> in by other bits of the kernel though.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/Kconfig | 1 -
>  1 file changed, 1 deletion(-)
> 
> diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
> index 6c3c8ca73e7f..5c2a672c06a8 100644
> --- a/arch/arm64/kvm/Kconfig
> +++ b/arch/arm64/kvm/Kconfig
> @@ -39,7 +39,6 @@ menuconfig KVM
>  	select HAVE_KVM_VCPU_RUN_PID_CHANGE
>  	select SCHED_INFO
>  	select GUEST_PERF_EVENTS if PERF_EVENTS
> -	select XARRAY_MULTI
>  	help
>  	  Support hosting virtualized guest machines.
>  

Reviewed-by: Joey Gouly <joey.gouly@arm.com>

Thanks,
Joey

^ permalink raw reply	[flat|nested] 114+ messages in thread

* Re: [PATCH 12/25] KVM: arm64: nv: Move system instructions to their own sys_reg_desc array
  2024-01-22 20:18   ` Marc Zyngier
@ 2024-01-24 16:23     ` Joey Gouly
  -1 siblings, 0 replies; 114+ messages in thread
From: Joey Gouly @ 2024-01-24 16:23 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
	Mark Brown

Salut,

On Mon, Jan 22, 2024 at 08:18:39PM +0000, Marc Zyngier wrote:
> As NV results in a bunch of system instructions being trapped, it makes
> sense to pull the system instructions into their own little array, where
> they will eventually be joined by AT, TLBI and a bunch of other CMOs.
> 
> Based on an initial patch by Jintack Lim.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/sys_regs.c | 59 +++++++++++++++++++++++++++++----------
>  1 file changed, 44 insertions(+), 15 deletions(-)
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 041b11825578..501de653beb5 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -2197,16 +2197,6 @@ static u64 reset_hcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
>   * guest...
>   */
>  static const struct sys_reg_desc sys_reg_descs[] = {
> -	{ SYS_DESC(SYS_DC_ISW), access_dcsw },
> -	{ SYS_DESC(SYS_DC_IGSW), access_dcgsw },
> -	{ SYS_DESC(SYS_DC_IGDSW), access_dcgsw },
> -	{ SYS_DESC(SYS_DC_CSW), access_dcsw },
> -	{ SYS_DESC(SYS_DC_CGSW), access_dcgsw },
> -	{ SYS_DESC(SYS_DC_CGDSW), access_dcgsw },
> -	{ SYS_DESC(SYS_DC_CISW), access_dcsw },
> -	{ SYS_DESC(SYS_DC_CIGSW), access_dcgsw },
> -	{ SYS_DESC(SYS_DC_CIGDSW), access_dcgsw },
> -
>  	DBG_BCR_BVR_WCR_WVR_EL1(0),
>  	DBG_BCR_BVR_WCR_WVR_EL1(1),
>  	{ SYS_DESC(SYS_MDCCINT_EL1), trap_debug_regs, reset_val, MDCCINT_EL1, 0 },
> @@ -2738,6 +2728,18 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>  	EL2_REG(SP_EL2, NULL, reset_unknown, 0),
>  };
>  
> +static struct sys_reg_desc sys_insn_descs[] = {
> +	{ SYS_DESC(SYS_DC_ISW), access_dcsw },
> +	{ SYS_DESC(SYS_DC_IGSW), access_dcgsw },
> +	{ SYS_DESC(SYS_DC_IGDSW), access_dcgsw },
> +	{ SYS_DESC(SYS_DC_CSW), access_dcsw },
> +	{ SYS_DESC(SYS_DC_CGSW), access_dcgsw },
> +	{ SYS_DESC(SYS_DC_CGDSW), access_dcgsw },
> +	{ SYS_DESC(SYS_DC_CISW), access_dcsw },
> +	{ SYS_DESC(SYS_DC_CIGSW), access_dcgsw },
> +	{ SYS_DESC(SYS_DC_CIGDSW), access_dcgsw },
> +};
> +
>  static const struct sys_reg_desc *first_idreg;
>  
>  static bool trap_dbgdidr(struct kvm_vcpu *vcpu,
> @@ -3431,6 +3433,24 @@ static bool emulate_sys_reg(struct kvm_vcpu *vcpu,
>  	return false;
>  }
>  
> +static int emulate_sys_instr(struct kvm_vcpu *vcpu, struct sys_reg_params *p)
> +{
> +	const struct sys_reg_desc *r;
> +
> +	/* Search from the system instruction table. */
> +	r = find_reg(p, sys_insn_descs, ARRAY_SIZE(sys_insn_descs));
> +
> +	if (likely(r)) {
> +		perform_access(vcpu, p, r);
> +	} else {
> +		kvm_err("Unsupported guest sys instruction at: %lx\n",
> +			*vcpu_pc(vcpu));
> +		print_sys_reg_instr(p);
> +		kvm_inject_undefined(vcpu);
> +	}
> +	return 1;
> +}
> +
>  static void kvm_reset_id_regs(struct kvm_vcpu *vcpu)
>  {
>  	const struct sys_reg_desc *idreg = first_idreg;
> @@ -3478,7 +3498,8 @@ void kvm_reset_sys_regs(struct kvm_vcpu *vcpu)
>  }
>  
>  /**
> - * kvm_handle_sys_reg -- handles a mrs/msr trap on a guest sys_reg access
> + * kvm_handle_sys_reg -- handles a system instruction or mrs/msr instruction
> + *			 trap on a guest execution
>   * @vcpu: The VCPU pointer
>   */
>  int kvm_handle_sys_reg(struct kvm_vcpu *vcpu)
> @@ -3495,12 +3516,19 @@ int kvm_handle_sys_reg(struct kvm_vcpu *vcpu)
>  	params = esr_sys64_to_params(esr);
>  	params.regval = vcpu_get_reg(vcpu, Rt);
>  
> -	if (!emulate_sys_reg(vcpu, &params))
> +	/* System register? */

Can you put a reference here, DDI0487 J.a C5.1.2? (2 and 3 look magic otherwise)
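For the archive: in that section of the Arm ARM, Op0 == 0b10 (debug/trace)
and Op0 == 0b11 (non-debug) are the two values encoding system registers
accessed via MRS/MSR, while Op0 == 0b00/0b01 cover hints, PSTATE accesses
and SYS instructions — hence the dispatch in the hunk below. As a trivial
sketch (helper name invented):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Per Arm DDI 0487 C5.1.2: Op0 == 2 and Op0 == 3 select system
 * registers (MRS/MSR); Op0 == 0/1 encode hints, PSTATE accesses and
 * SYS system instructions. Helper name is illustrative only.
 */
static bool op0_selects_sys_reg(unsigned int op0)
{
	return op0 == 2 || op0 == 3;
}
```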

> +	if (params.Op0 == 2 || params.Op0 == 3) {
> +		if (!emulate_sys_reg(vcpu, &params))
> +			return 1;
> +
> +		if (!params.is_write)
> +			vcpu_set_reg(vcpu, Rt, params.regval);
> +
>  		return 1;
> +	}
>  
> -	if (!params.is_write)
> -		vcpu_set_reg(vcpu, Rt, params.regval);
> -	return 1;
> +	/* Hints, PSTATE (Op0 == 0) and System instructions (Op0 == 1) */
> +	return emulate_sys_instr(vcpu, &params);
>  }
>  
>  /******************************************************************************
> @@ -3954,6 +3982,7 @@ int __init kvm_sys_reg_table_init(void)
>  	valid &= check_sysreg_table(cp15_regs, ARRAY_SIZE(cp15_regs), true);
>  	valid &= check_sysreg_table(cp15_64_regs, ARRAY_SIZE(cp15_64_regs), true);
>  	valid &= check_sysreg_table(invariant_sys_regs, ARRAY_SIZE(invariant_sys_regs), false);
> +	valid &= check_sysreg_table(sys_insn_descs, ARRAY_SIZE(sys_insn_descs), false);
>  
>  	if (!valid)
>  		return -EINVAL;

Otherwise,

Reviewed-by: Joey Gouly <joey.gouly@arm.com>

Thanks,
Joey


^ permalink raw reply	[flat|nested] 114+ messages in thread

> +		kvm_inject_undefined(vcpu);
> +	}
> +	return 1;
> +}
> +
>  static void kvm_reset_id_regs(struct kvm_vcpu *vcpu)
>  {
>  	const struct sys_reg_desc *idreg = first_idreg;
> @@ -3478,7 +3498,8 @@ void kvm_reset_sys_regs(struct kvm_vcpu *vcpu)
>  }
>  
>  /**
> - * kvm_handle_sys_reg -- handles a mrs/msr trap on a guest sys_reg access
> + * kvm_handle_sys_reg -- handles a system instruction or mrs/msr instruction
> +  *			 trap during guest execution
>   * @vcpu: The VCPU pointer
>   */
>  int kvm_handle_sys_reg(struct kvm_vcpu *vcpu)
> @@ -3495,12 +3516,19 @@ int kvm_handle_sys_reg(struct kvm_vcpu *vcpu)
>  	params = esr_sys64_to_params(esr);
>  	params.regval = vcpu_get_reg(vcpu, Rt);
>  
> -	if (!emulate_sys_reg(vcpu, &params))
> +	/* System register? */

Can you put a reference here, ARM DDI 0487 J.a, C5.1.2? (2 and 3 look magic otherwise)
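As a side note for readers, the Op0 split Joey is asking to have documented can be sketched standalone. This is an illustrative restatement (the helper name is not from the kernel tree), assuming the C5.1.2 encoding scheme: Op0 values 2 and 3 select register accesses, while 0 and 1 are instruction encodings.

```c
/*
 * Illustrative helper (not kernel code): classify the Op0 field of a
 * decoded system access.  Per ARM DDI 0487, C5.1.2, Op0 == 2 selects
 * debug/trace registers and Op0 == 3 selects non-debug system
 * registers; Op0 == 0 (hints, PSTATE) and Op0 == 1 (system
 * instructions such as DC and TLBI) are not register accesses, hence
 * the 2/3 test in kvm_handle_sys_reg() below.
 */
static int is_sys_reg_access(unsigned int op0)
{
	return op0 == 2 || op0 == 3;
}
```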

> +	if (params.Op0 == 2 || params.Op0 == 3) {
> +		if (!emulate_sys_reg(vcpu, &params))
> +			return 1;
> +
> +		if (!params.is_write)
> +			vcpu_set_reg(vcpu, Rt, params.regval);
> +
>  		return 1;
> +	}
>  
> -	if (!params.is_write)
> -		vcpu_set_reg(vcpu, Rt, params.regval);
> -	return 1;
> +	/* Hints, PSTATE (Op0 == 0) and System instructions (Op0 == 1) */
> +	return emulate_sys_instr(vcpu, &params);
>  }
>  
>  /******************************************************************************
> @@ -3954,6 +3982,7 @@ int __init kvm_sys_reg_table_init(void)
>  	valid &= check_sysreg_table(cp15_regs, ARRAY_SIZE(cp15_regs), true);
>  	valid &= check_sysreg_table(cp15_64_regs, ARRAY_SIZE(cp15_64_regs), true);
>  	valid &= check_sysreg_table(invariant_sys_regs, ARRAY_SIZE(invariant_sys_regs), false);
> +	valid &= check_sysreg_table(sys_insn_descs, ARRAY_SIZE(sys_insn_descs), false);
>  
>  	if (!valid)
>  		return -EINVAL;

Otherwise,

Reviewed-by: Joey Gouly <joey.gouly@arm.com>

Thanks,
Joey


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 114+ messages in thread

* Re: [PATCH 13/25] KVM: arm64: Always populate the trap configuration xarray
  2024-01-22 20:18   ` Marc Zyngier
@ 2024-01-24 16:25     ` Joey Gouly
  -1 siblings, 0 replies; 114+ messages in thread
From: Joey Gouly @ 2024-01-24 16:25 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
	Mark Brown

On Mon, Jan 22, 2024 at 08:18:40PM +0000, Marc Zyngier wrote:
> As we are going to rely more and more on the global xarray that
> contains the trap configuration, always populate it, even in the
> non-NV case.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/sys_regs.c | 5 +----
>  1 file changed, 1 insertion(+), 4 deletions(-)
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 501de653beb5..77cd818c23b0 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -3997,8 +3997,5 @@ int __init kvm_sys_reg_table_init(void)
>  	if (!first_idreg)
>  		return -EINVAL;
>  
> -	if (kvm_get_mode() == KVM_MODE_NV)
> -		return populate_nv_trap_config();
> -
> -	return 0;
> +	return populate_nv_trap_config();
>  }

Reviewed-by: Joey Gouly <joey.gouly@arm.com>

Thanks,
Joey

^ permalink raw reply	[flat|nested] 114+ messages in thread

* Re: [PATCH 14/25] KVM: arm64: Register AArch64 system register entries with the sysreg xarray
  2024-01-22 20:18   ` Marc Zyngier
@ 2024-01-24 16:34     ` Joey Gouly
  -1 siblings, 0 replies; 114+ messages in thread
From: Joey Gouly @ 2024-01-24 16:34 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
	Mark Brown

On Mon, Jan 22, 2024 at 08:18:41PM +0000, Marc Zyngier wrote:
> In order to reduce the number of lookups that we have to perform
> when handling a sysreg, register each AArch64 sysreg descriptor
> with the global xarray. The index of the descriptor is stored
> as a 10 bit field in the data word.
> 
> Subsequent patches will retrieve and use the stored index.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_host.h |  3 +++
>  arch/arm64/kvm/emulate-nested.c   | 39 +++++++++++++++++++++++++++++--
>  arch/arm64/kvm/sys_regs.c         | 11 ++++++++-
>  3 files changed, 50 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index fe35c59214ad..e7a6219f2929 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -1083,6 +1083,9 @@ int kvm_handle_cp10_id(struct kvm_vcpu *vcpu);
>  void kvm_reset_sys_regs(struct kvm_vcpu *vcpu);
>  
>  int __init kvm_sys_reg_table_init(void);
> +struct sys_reg_desc;
> +int __init populate_sysreg_config(const struct sys_reg_desc *sr,
> +				  unsigned int idx);
>  int __init populate_nv_trap_config(void);
>  
>  bool lock_all_vcpus(struct kvm *kvm);
> diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
> index 59622636b723..342d43b66fda 100644
> --- a/arch/arm64/kvm/emulate-nested.c
> +++ b/arch/arm64/kvm/emulate-nested.c
> @@ -427,12 +427,14 @@ static const complex_condition_check ccc[] = {
>   * [19:14]	bit number in the FGT register (6 bits)
>   * [20]		trap polarity (1 bit)
>   * [25:21]	FG filter (5 bits)
> - * [62:26]	Unused (37 bits)
> + * [35:26]	Main SysReg table index (10 bits)
> + * [62:36]	Unused (27 bits)
>   * [63]		RES0 - Must be zero, as lost on insertion in the xarray
>   */
>  #define TC_CGT_BITS	10
>  #define TC_FGT_BITS	4
>  #define TC_FGF_BITS	5
> +#define TC_MSR_BITS	10
>  
>  union trap_config {
>  	u64	val;
> @@ -442,7 +444,8 @@ union trap_config {
>  		unsigned long	bit:6;		 /* Bit number */
>  		unsigned long	pol:1;		 /* Polarity */
>  		unsigned long	fgf:TC_FGF_BITS; /* Fine Grained Filter */
> -		unsigned long	unused:37;	 /* Unused, should be zero */
> +		unsigned long	msr:TC_MSR_BITS; /* Main SysReg index */
> +		unsigned long	unused:27;	 /* Unused, should be zero */
>  		unsigned long	mbz:1;		 /* Must Be Zero */
>  	};
>  };
> @@ -1862,6 +1865,38 @@ int __init populate_nv_trap_config(void)
>  	return ret;
>  }
>  
> +int __init populate_sysreg_config(const struct sys_reg_desc *sr,
> +				  unsigned int idx)
> +{
> +	union trap_config tc;
> +	u32 encoding;
> +	void *ret;
> +
> +	/*
> +	 * 0 is a valid value for the index, but not for the storage.
> +	 * We'll store (idx+1), so check against an offset'd limit.
> +	 */
> +	if (idx >= (BIT(TC_MSR_BITS) - 1)) {
> +		kvm_err("sysreg %s (%d) out of range\n", sr->name, idx);
> +		return -EINVAL;
> +	}
> +
> +	encoding = sys_reg(sr->Op0, sr->Op1, sr->CRn, sr->CRm, sr->Op2);
> +	tc = get_trap_config(encoding);
> +
> +	if (tc.msr) {
> +		kvm_err("sysreg %s (%d) duplicate entry (%d)\n",
> +			sr->name, idx - 1, tc.msr);
> +		return -EINVAL;
> +	}
> +
> +	tc.msr = idx + 1;
> +	ret = xa_store(&sr_forward_xa, encoding,
> +		       xa_mk_value(tc.val), GFP_KERNEL);
> +
> +	return xa_err(ret);
> +}
> +
>  static enum trap_behaviour get_behaviour(struct kvm_vcpu *vcpu,
>  					 const struct trap_bits *tb)
>  {
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 77cd818c23b0..65319193e443 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -3974,6 +3974,7 @@ int __init kvm_sys_reg_table_init(void)
>  	struct sys_reg_params params;
>  	bool valid = true;
>  	unsigned int i;
> +	int ret = 0;
>  
>  	/* Make sure tables are unique and in order. */
>  	valid &= check_sysreg_table(sys_reg_descs, ARRAY_SIZE(sys_reg_descs), false);
> @@ -3997,5 +3998,13 @@ int __init kvm_sys_reg_table_init(void)
>  	if (!first_idreg)
>  		return -EINVAL;
>  
> -	return populate_nv_trap_config();
> +	ret = populate_nv_trap_config();
> +
> +	for (i = 0; !ret && i < ARRAY_SIZE(sys_reg_descs); i++)
> +		ret = populate_sysreg_config(sys_reg_descs + i, i);
> +
> +	for (i = 0; !ret && i < ARRAY_SIZE(sys_insn_descs); i++)
> +		ret = populate_sysreg_config(sys_insn_descs + i, i);
> +
> +	return ret;
>  }

The choice of `msr` was a tiny bit confusing due to the conflict with the asm
instruction `msr`, but not enough to warrant renaming.
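As an aside, the (idx + 1) trick in the quoted populate_sysreg_config() can be sketched in isolation: a zero field must mean "no entry", so index 0 cannot be stored directly, and the usable range of the 10-bit field shrinks by one, which is why the patch checks against an offset'd limit. The names below are illustrative, not kernel code.

```c
#define IDX_BITS	10

/* Store a descriptor index into an IDX_BITS-wide field; 0 means "empty". */
static int store_idx(unsigned int idx, unsigned int *field)
{
	/*
	 * idx is stored as idx + 1, so the largest storable index is
	 * (1 << IDX_BITS) - 2, matching the offset'd limit check in
	 * the patch above.
	 */
	if (idx >= (1u << IDX_BITS) - 1)
		return -1;
	*field = idx + 1;
	return 0;
}

/* Return the stored index, or -1 if the field is empty. */
static int load_idx(unsigned int field)
{
	return field ? (int)(field - 1) : -1;
}
```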

Reviewed-by: Joey Gouly <joey.gouly@arm.com>

Thanks,
Joey

^ permalink raw reply	[flat|nested] 114+ messages in thread

* Re: [PATCH 14/25] KVM: arm64: Register AArch64 system register entries with the sysreg xarray
  2024-01-24 16:34     ` Joey Gouly
@ 2024-01-24 16:37       ` Marc Zyngier
  -1 siblings, 0 replies; 114+ messages in thread
From: Marc Zyngier @ 2024-01-24 16:37 UTC (permalink / raw)
  To: Joey Gouly
  Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
	Mark Brown

On Wed, 24 Jan 2024 16:34:25 +0000,
Joey Gouly <joey.gouly@arm.com> wrote:
> 
> On Mon, Jan 22, 2024 at 08:18:41PM +0000, Marc Zyngier wrote:
> > In order to reduce the number of lookups that we have to perform
> > when handling a sysreg, register each AArch64 sysreg descriptor
> > with the global xarray. The index of the descriptor is stored
> > as a 10 bit field in the data word.
> > 
> > Subsequent patches will retrieve and use the stored index.
> > 
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/arm64/include/asm/kvm_host.h |  3 +++
> >  arch/arm64/kvm/emulate-nested.c   | 39 +++++++++++++++++++++++++++++--
> >  arch/arm64/kvm/sys_regs.c         | 11 ++++++++-
> >  3 files changed, 50 insertions(+), 3 deletions(-)
> > 
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index fe35c59214ad..e7a6219f2929 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -1083,6 +1083,9 @@ int kvm_handle_cp10_id(struct kvm_vcpu *vcpu);
> >  void kvm_reset_sys_regs(struct kvm_vcpu *vcpu);
> >  
> >  int __init kvm_sys_reg_table_init(void);
> > +struct sys_reg_desc;
> > +int __init populate_sysreg_config(const struct sys_reg_desc *sr,
> > +				  unsigned int idx);
> >  int __init populate_nv_trap_config(void);
> >  
> >  bool lock_all_vcpus(struct kvm *kvm);
> > diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
> > index 59622636b723..342d43b66fda 100644
> > --- a/arch/arm64/kvm/emulate-nested.c
> > +++ b/arch/arm64/kvm/emulate-nested.c
> > @@ -427,12 +427,14 @@ static const complex_condition_check ccc[] = {
> >   * [19:14]	bit number in the FGT register (6 bits)
> >   * [20]		trap polarity (1 bit)
> >   * [25:21]	FG filter (5 bits)
> > - * [62:26]	Unused (37 bits)
> > + * [35:26]	Main SysReg table index (10 bits)
> > + * [62:36]	Unused (27 bits)
> >   * [63]		RES0 - Must be zero, as lost on insertion in the xarray
> >   */
> >  #define TC_CGT_BITS	10
> >  #define TC_FGT_BITS	4
> >  #define TC_FGF_BITS	5
> > +#define TC_MSR_BITS	10
> >  
> >  union trap_config {
> >  	u64	val;
> > @@ -442,7 +444,8 @@ union trap_config {
> >  		unsigned long	bit:6;		 /* Bit number */
> >  		unsigned long	pol:1;		 /* Polarity */
> >  		unsigned long	fgf:TC_FGF_BITS; /* Fine Grained Filter */
> > -		unsigned long	unused:37;	 /* Unused, should be zero */
> > +		unsigned long	msr:TC_MSR_BITS; /* Main SysReg index */
> > +		unsigned long	unused:27;	 /* Unused, should be zero */
> >  		unsigned long	mbz:1;		 /* Must Be Zero */
> >  	};
> >  };
> > @@ -1862,6 +1865,38 @@ int __init populate_nv_trap_config(void)
> >  	return ret;
> >  }
> >  
> > +int __init populate_sysreg_config(const struct sys_reg_desc *sr,
> > +				  unsigned int idx)
> > +{
> > +	union trap_config tc;
> > +	u32 encoding;
> > +	void *ret;
> > +
> > +	/*
> > +	 * 0 is a valid value for the index, but not for the storage.
> > +	 * We'll store (idx+1), so check against an offset'd limit.
> > +	 */
> > +	if (idx >= (BIT(TC_MSR_BITS) - 1)) {
> > +		kvm_err("sysreg %s (%d) out of range\n", sr->name, idx);
> > +		return -EINVAL;
> > +	}
> > +
> > +	encoding = sys_reg(sr->Op0, sr->Op1, sr->CRn, sr->CRm, sr->Op2);
> > +	tc = get_trap_config(encoding);
> > +
> > +	if (tc.msr) {
> > +		kvm_err("sysreg %s (%d) duplicate entry (%d)\n",
> > +			sr->name, idx - 1, tc.msr);
> > +		return -EINVAL;
> > +	}
> > +
> > +	tc.msr = idx + 1;
> > +	ret = xa_store(&sr_forward_xa, encoding,
> > +		       xa_mk_value(tc.val), GFP_KERNEL);
> > +
> > +	return xa_err(ret);
> > +}
> > +
> >  static enum trap_behaviour get_behaviour(struct kvm_vcpu *vcpu,
> >  					 const struct trap_bits *tb)
> >  {
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index 77cd818c23b0..65319193e443 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -3974,6 +3974,7 @@ int __init kvm_sys_reg_table_init(void)
> >  	struct sys_reg_params params;
> >  	bool valid = true;
> >  	unsigned int i;
> > +	int ret = 0;
> >  
> >  	/* Make sure tables are unique and in order. */
> >  	valid &= check_sysreg_table(sys_reg_descs, ARRAY_SIZE(sys_reg_descs), false);
> > @@ -3997,5 +3998,13 @@ int __init kvm_sys_reg_table_init(void)
> >  	if (!first_idreg)
> >  		return -EINVAL;
> >  
> > -	return populate_nv_trap_config();
> > +	ret = populate_nv_trap_config();
> > +
> > +	for (i = 0; !ret && i < ARRAY_SIZE(sys_reg_descs); i++)
> > +		ret = populate_sysreg_config(sys_reg_descs + i, i);
> > +
> > +	for (i = 0; !ret && i < ARRAY_SIZE(sys_insn_descs); i++)
> > +		ret = populate_sysreg_config(sys_insn_descs + i, i);
> > +
> > +	return ret;
> >  }
> 
> The choice of `msr` was a tiny bit confusing due to the conflict with the asm
> instruction `msr`, but not enough to warrant renaming.

No, that's actually a very good point. How about SRI (Sys Reg Index)?

> Reviewed-by: Joey Gouly <joey.gouly@arm.com>

Thanks!

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 114+ messages in thread

* Re: [PATCH 15/25] KVM: arm64: Use the xarray as the primary sysreg/sysinsn walker
  2024-01-22 20:18   ` Marc Zyngier
@ 2024-01-24 16:48     ` Joey Gouly
  -1 siblings, 0 replies; 114+ messages in thread
From: Joey Gouly @ 2024-01-24 16:48 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
	Mark Brown

On Mon, Jan 22, 2024 at 08:18:42PM +0000, Marc Zyngier wrote:
> Since we always start sysreg/sysinsn handling by searching the
> xarray, use it as the source of the index in the correct sys_reg_desc
> array.
> 
> This allows some cleanup, such as moving the handling of unknown
> sysregs in a single location.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_nested.h |  2 +-
>  arch/arm64/kvm/emulate-nested.c     | 36 +++++++++++-----
>  arch/arm64/kvm/sys_regs.c           | 64 +++++++++--------------------
>  3 files changed, 46 insertions(+), 56 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
> index 4882905357f4..68465f87d308 100644
> --- a/arch/arm64/include/asm/kvm_nested.h
> +++ b/arch/arm64/include/asm/kvm_nested.h
> @@ -60,7 +60,7 @@ static inline u64 translate_ttbr0_el2_to_ttbr0_el1(u64 ttbr0)
>  	return ttbr0 & ~GENMASK_ULL(63, 48);
>  }
>  
> -extern bool __check_nv_sr_forward(struct kvm_vcpu *vcpu);
> +extern bool __check_nv_sr_forward(struct kvm_vcpu *vcpu, int *sr_idx);
>  
>  int kvm_init_nv_sysregs(struct kvm *kvm);
>  
> diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
> index 342d43b66fda..54ab4d240fc6 100644
* Re: [PATCH 15/25] KVM: arm64: Use the xarray as the primary sysreg/sysinsn walker
@ 2024-01-24 16:48     ` Joey Gouly
  0 siblings, 0 replies; 114+ messages in thread
From: Joey Gouly @ 2024-01-24 16:48 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
	Mark Brown

On Mon, Jan 22, 2024 at 08:18:42PM +0000, Marc Zyngier wrote:
> Since we always start sysreg/sysinsn handling by searching the
> xarray, use it as the source of the index in the correct sys_reg_desc
> array.
> 
> This allows some cleanup, such as moving the handling of unknown
> sysregs into a single location.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_nested.h |  2 +-
>  arch/arm64/kvm/emulate-nested.c     | 36 +++++++++++-----
>  arch/arm64/kvm/sys_regs.c           | 64 +++++++++--------------------
>  3 files changed, 46 insertions(+), 56 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
> index 4882905357f4..68465f87d308 100644
> --- a/arch/arm64/include/asm/kvm_nested.h
> +++ b/arch/arm64/include/asm/kvm_nested.h
> @@ -60,7 +60,7 @@ static inline u64 translate_ttbr0_el2_to_ttbr0_el1(u64 ttbr0)
>  	return ttbr0 & ~GENMASK_ULL(63, 48);
>  }
>  
> -extern bool __check_nv_sr_forward(struct kvm_vcpu *vcpu);
> +extern bool __check_nv_sr_forward(struct kvm_vcpu *vcpu, int *sr_idx);
>  
>  int kvm_init_nv_sysregs(struct kvm *kvm);
>  
> diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
> index 342d43b66fda..54ab4d240fc6 100644
> --- a/arch/arm64/kvm/emulate-nested.c
> +++ b/arch/arm64/kvm/emulate-nested.c
> @@ -2001,7 +2001,7 @@ static bool check_fgt_bit(struct kvm *kvm, bool is_read,
>  	return !(kvm_get_sysreg_res0(kvm, sr) & BIT(tc.bit));
>  }
>  
> -bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
> +bool __check_nv_sr_forward(struct kvm_vcpu *vcpu, int *sr_index)
>  {
>  	union trap_config tc;
>  	enum trap_behaviour b;
> @@ -2009,9 +2009,6 @@ bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
>  	u32 sysreg;
>  	u64 esr, val;
>  
> -	if (!vcpu_has_nv(vcpu) || is_hyp_ctxt(vcpu))
> -		return false;
> -
>  	esr = kvm_vcpu_get_esr(vcpu);
>  	sysreg = esr_sys64_to_sysreg(esr);
>  	is_read = (esr & ESR_ELx_SYS64_ISS_DIR_MASK) == ESR_ELx_SYS64_ISS_DIR_READ;
> @@ -2022,13 +2019,16 @@ bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
>  	 * A value of 0 for the whole entry means that we know nothing
>  	 * for this sysreg, and that it cannot be re-injected into the
>  	 * nested hypervisor. In this situation, let's cut it short.
> -	 *
> -	 * Note that ultimately, we could also make use of the xarray
> -	 * to store the index of the sysreg in the local descriptor
> -	 * array, avoiding another search... Hint, hint...
>  	 */
>  	if (!tc.val)
> -		return false;
> +		goto local;
> +
> +	/*
> +	 * If we're not nesting, immediately return to the caller, with the
> +	 * sysreg index, should we have it.
> +	 */
> +	if (!vcpu_has_nv(vcpu) || is_hyp_ctxt(vcpu))
> +		goto local;
>  
>  	switch ((enum fgt_group_id)tc.fgt) {
>  	case __NO_FGT_GROUP__:
> @@ -2070,7 +2070,7 @@ bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
>  	case __NR_FGT_GROUP_IDS__:
>  		/* Something is really wrong, bail out */
>  		WARN_ONCE(1, "__NR_FGT_GROUP_IDS__");
> -		return false;
> +		goto local;
>  	}
>  
>  	if (tc.fgt != __NO_FGT_GROUP__ && check_fgt_bit(vcpu->kvm, is_read,
> @@ -2083,6 +2083,22 @@ bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
>  	    ((b & BEHAVE_FORWARD_WRITE) && !is_read))
>  		goto inject;
>  
> +local:
> +	if (!tc.msr) {
> +		struct sys_reg_params params;
> +
> +		params = esr_sys64_to_params(esr);
> +
> +		// IMPDEF range. See ARM DDI 0487E.a, section D12.3.2

I know you're just moving this code, but can we update the reference? It's
DDI0487 J.a, D18.3.2 Reserved encodings for IMPLEMENTATION DEFINED registers
now. Feel free to drop the title of the section if it looks too long!
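[Editorial note: a minimal standalone sketch of the predicate the quoted hunk carries over. The reserved IMPLEMENTATION DEFINED encoding space is Op0 == 3 with CRn == 11 or 15, which is exactly what `(CRn & 0b1011) == 0b1011` matches. The struct and function names below are illustrative stand-ins, not the kernel's:]

```c
#include <stdbool.h>

/* Hypothetical stand-in for KVM's sys_reg_params; only the two fields
 * the predicate needs. */
struct params {
	unsigned int op0;
	unsigned int crn;	/* 4-bit CRn field from the encoding */
};

/* Reserved IMPDEF encodings (DDI0487 J.a, D18.3.2): Op0 == 3 and
 * CRn == 11 (0b1011) or 15 (0b1111). The two values differ only in
 * bit 2, so masking CRn with 0b1011 collapses both onto 0b1011. */
static bool is_impdef_encoding(const struct params *p)
{
	return p->op0 == 3 && (p->crn & 0xb) == 0xb;
}
```

The masking trick saves a second comparison: clearing bit 2 maps both reserved CRn values onto the same constant.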

> +		if (!(params.Op0 == 3 && (params.CRn & 0b1011) == 0b1011))
> +			print_sys_reg_msg(&params,
> +					  "Unsupported guest access at: %lx\n",
> +					  *vcpu_pc(vcpu));
> +		kvm_inject_undefined(vcpu);
> +		return true;
> +	}
> +
> +	*sr_index = tc.msr - 1;
>  	return false;
>  
>  inject:
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 65319193e443..794d1f8c9bfe 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -3397,12 +3397,6 @@ int kvm_handle_cp14_32(struct kvm_vcpu *vcpu)
>  	return kvm_handle_cp_32(vcpu, &params, cp14_regs, ARRAY_SIZE(cp14_regs));
>  }
>  
> -static bool is_imp_def_sys_reg(struct sys_reg_params *params)
> -{
> -	// See ARM DDI 0487E.a, section D12.3.2
> -	return params->Op0 == 3 && (params->CRn & 0b1011) == 0b1011;
> -}
> -
>  /**
>   * emulate_sys_reg - Emulate a guest access to an AArch64 system register
>   * @vcpu: The VCPU pointer
> @@ -3411,44 +3405,22 @@ static bool is_imp_def_sys_reg(struct sys_reg_params *params)
>   * Return: true if the system register access was successful, false otherwise.
>   */
>  static bool emulate_sys_reg(struct kvm_vcpu *vcpu,
> -			   struct sys_reg_params *params)
> +			    struct sys_reg_params *params)
>  {
>  	const struct sys_reg_desc *r;
>  
>  	r = find_reg(params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
> -
>  	if (likely(r)) {
>  		perform_access(vcpu, params, r);
>  		return true;
>  	}
>  
> -	if (is_imp_def_sys_reg(params)) {
> -		kvm_inject_undefined(vcpu);
> -	} else {
> -		print_sys_reg_msg(params,
> -				  "Unsupported guest sys_reg access at: %lx [%08lx]\n",
> -				  *vcpu_pc(vcpu), *vcpu_cpsr(vcpu));
> -		kvm_inject_undefined(vcpu);
> -	}
> -	return false;
> -}
> -
> -static int emulate_sys_instr(struct kvm_vcpu *vcpu, struct sys_reg_params *p)
> -{
> -	const struct sys_reg_desc *r;
> -
> -	/* Search from the system instruction table. */
> -	r = find_reg(p, sys_insn_descs, ARRAY_SIZE(sys_insn_descs));
> +	print_sys_reg_msg(params,
> +			  "Unsupported guest sys_reg access at: %lx [%08lx]\n",
> +			  *vcpu_pc(vcpu), *vcpu_cpsr(vcpu));
> +	kvm_inject_undefined(vcpu);
>  
> -	if (likely(r)) {
> -		perform_access(vcpu, p, r);
> -	} else {
> -		kvm_err("Unsupported guest sys instruction at: %lx\n",
> -			*vcpu_pc(vcpu));
> -		print_sys_reg_instr(p);
> -		kvm_inject_undefined(vcpu);
> -	}
> -	return 1;
> +	return false;
>  }
>  
>  static void kvm_reset_id_regs(struct kvm_vcpu *vcpu)
> @@ -3504,31 +3476,33 @@ void kvm_reset_sys_regs(struct kvm_vcpu *vcpu)
>   */
>  int kvm_handle_sys_reg(struct kvm_vcpu *vcpu)
>  {
> +	const struct sys_reg_desc *desc = NULL;
>  	struct sys_reg_params params;
>  	unsigned long esr = kvm_vcpu_get_esr(vcpu);
>  	int Rt = kvm_vcpu_sys_get_rt(vcpu);
> +	int sr_idx;
>  
>  	trace_kvm_handle_sys_reg(esr);
>  
> -	if (__check_nv_sr_forward(vcpu))
> +	if (__check_nv_sr_forward(vcpu, &sr_idx))
>  		return 1;
>  
>  	params = esr_sys64_to_params(esr);
>  	params.regval = vcpu_get_reg(vcpu, Rt);
>  
> -	/* System register? */
> -	if (params.Op0 == 2 || params.Op0 == 3) {
> -		if (!emulate_sys_reg(vcpu, &params))
> -			return 1;
> +	if (params.Op0 == 2 || params.Op0 == 3)
> +		desc = &sys_reg_descs[sr_idx];
> +	else
> +		desc = &sys_insn_descs[sr_idx];
>  
> -		if (!params.is_write)
> -			vcpu_set_reg(vcpu, Rt, params.regval);
> +	perform_access(vcpu, &params, desc);
>  
> -		return 1;
> -	}
> +	/* Read from system register? */
> +	if (!params.is_write &&
> +	    (params.Op0 == 2 || params.Op0 == 3))
> +		vcpu_set_reg(vcpu, Rt, params.regval);
>  
> -	/* Hints, PSTATE (Op0 == 0) and System instructions (Op0 == 1) */
> -	return emulate_sys_instr(vcpu, &params);
> +	return 1;
>  }
>  
>  /******************************************************************************

Other than the ref update:

Reviewed-by: Joey Gouly <joey.gouly@arm.com>

Thanks,
Joey

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 114+ messages in thread

* Re: [PATCH 16/25] KVM: arm64: Rename __check_nv_sr_forward() to triage_sysreg_trap()
  2024-01-22 20:18   ` Marc Zyngier
@ 2024-01-24 16:57     ` Joey Gouly
  -1 siblings, 0 replies; 114+ messages in thread
From: Joey Gouly @ 2024-01-24 16:57 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
	Mark Brown

On Mon, Jan 22, 2024 at 08:18:43PM +0000, Marc Zyngier wrote:
> __check_nv_sr_forward() is not specific to NV anymore, and does
> a lot more. Rename it to triage_sysreg_trap(), making it plain
> that its role is to determine where an exception is to be handled.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_nested.h | 1 -
>  arch/arm64/kvm/emulate-nested.c     | 2 +-
>  arch/arm64/kvm/sys_regs.c           | 2 +-
>  arch/arm64/kvm/sys_regs.h           | 2 ++
>  4 files changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
> index 68465f87d308..c77d795556e1 100644
> --- a/arch/arm64/include/asm/kvm_nested.h
> +++ b/arch/arm64/include/asm/kvm_nested.h
> @@ -60,7 +60,6 @@ static inline u64 translate_ttbr0_el2_to_ttbr0_el1(u64 ttbr0)
>  	return ttbr0 & ~GENMASK_ULL(63, 48);
>  }
>  
> -extern bool __check_nv_sr_forward(struct kvm_vcpu *vcpu, int *sr_idx);
>  
>  int kvm_init_nv_sysregs(struct kvm *kvm);
>  
> diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
> index 54ab4d240fc6..b39ced4ea331 100644
> --- a/arch/arm64/kvm/emulate-nested.c
> +++ b/arch/arm64/kvm/emulate-nested.c
> @@ -2001,7 +2001,7 @@ static bool check_fgt_bit(struct kvm *kvm, bool is_read,
>  	return !(kvm_get_sysreg_res0(kvm, sr) & BIT(tc.bit));
>  }
>  
> -bool __check_nv_sr_forward(struct kvm_vcpu *vcpu, int *sr_index)
> +bool triage_sysreg_trap(struct kvm_vcpu *vcpu, int *sr_index)
>  {
>  	union trap_config tc;
>  	enum trap_behaviour b;
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 794d1f8c9bfe..c48bc2577162 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -3484,7 +3484,7 @@ int kvm_handle_sys_reg(struct kvm_vcpu *vcpu)
>  
>  	trace_kvm_handle_sys_reg(esr);
>  
> -	if (__check_nv_sr_forward(vcpu, &sr_idx))
> +	if (triage_sysreg_trap(vcpu, &sr_idx))
>  		return 1;
>  
>  	params = esr_sys64_to_params(esr);
> diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
> index c65c129b3500..997eea21ba2a 100644
> --- a/arch/arm64/kvm/sys_regs.h
> +++ b/arch/arm64/kvm/sys_regs.h
> @@ -233,6 +233,8 @@ int kvm_sys_reg_get_user(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg,
>  int kvm_sys_reg_set_user(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg,
>  			 const struct sys_reg_desc table[], unsigned int num);
>  
> +bool triage_sysreg_trap(struct kvm_vcpu *vcpu, int *sr_index);
> +
>  #define AA32(_x)	.aarch32_map = AA32_##_x
>  #define Op0(_x) 	.Op0 = _x
>  #define Op1(_x) 	.Op1 = _x

It's strange having triage_sysreg_trap() in emulate-nested.c, but moving that
would be churn for little benefit. Maybe once NV is all in.

Reviewed-by: Joey Gouly <joey.gouly@arm.com>

Thanks,
Joey

^ permalink raw reply	[flat|nested] 114+ messages in thread

* Re: [PATCH 14/25] KVM: arm64: Register AArch64 system register entries with the sysreg xarray
  2024-01-24 16:37       ` Marc Zyngier
@ 2024-01-24 17:02         ` Joey Gouly
  -1 siblings, 0 replies; 114+ messages in thread
From: Joey Gouly @ 2024-01-24 17:02 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
	Mark Brown

On Wed, Jan 24, 2024 at 04:37:23PM +0000, Marc Zyngier wrote:
> On Wed, 24 Jan 2024 16:34:25 +0000,
> Joey Gouly <joey.gouly@arm.com> wrote:
> > 
> > On Mon, Jan 22, 2024 at 08:18:41PM +0000, Marc Zyngier wrote:
> > > In order to reduce the number of lookups that we have to perform
> > > when handling a sysreg, register each AArch64 sysreg descriptor
> > > with the global xarray. The index of the descriptor is stored
> > > as a 10 bit field in the data word.
> > > 
> > > Subsequent patches will retrieve and use the stored index.
> > > 
> > > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > > ---
> > >  arch/arm64/include/asm/kvm_host.h |  3 +++
> > >  arch/arm64/kvm/emulate-nested.c   | 39 +++++++++++++++++++++++++++++--
> > >  arch/arm64/kvm/sys_regs.c         | 11 ++++++++-
> > >  3 files changed, 50 insertions(+), 3 deletions(-)
> > > 
> > > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > > index fe35c59214ad..e7a6219f2929 100644
> > > --- a/arch/arm64/include/asm/kvm_host.h
> > > +++ b/arch/arm64/include/asm/kvm_host.h
> > > @@ -1083,6 +1083,9 @@ int kvm_handle_cp10_id(struct kvm_vcpu *vcpu);
> > >  void kvm_reset_sys_regs(struct kvm_vcpu *vcpu);
> > >  
> > >  int __init kvm_sys_reg_table_init(void);
> > > +struct sys_reg_desc;
> > > +int __init populate_sysreg_config(const struct sys_reg_desc *sr,
> > > +				  unsigned int idx);
> > >  int __init populate_nv_trap_config(void);
> > >  
> > >  bool lock_all_vcpus(struct kvm *kvm);
> > > diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
> > > index 59622636b723..342d43b66fda 100644
> > > --- a/arch/arm64/kvm/emulate-nested.c
> > > +++ b/arch/arm64/kvm/emulate-nested.c
> > > @@ -427,12 +427,14 @@ static const complex_condition_check ccc[] = {
> > >   * [19:14]	bit number in the FGT register (6 bits)
> > >   * [20]		trap polarity (1 bit)
> > >   * [25:21]	FG filter (5 bits)
> > > - * [62:26]	Unused (37 bits)
> > > + * [35:26]	Main SysReg table index (10 bits)
> > > + * [62:36]	Unused (27 bits)
> > >   * [63]		RES0 - Must be zero, as lost on insertion in the xarray
> > >   */
> > >  #define TC_CGT_BITS	10
> > >  #define TC_FGT_BITS	4
> > >  #define TC_FGF_BITS	5
> > > +#define TC_MSR_BITS	10
> > >  
> > >  union trap_config {
> > >  	u64	val;
> > > @@ -442,7 +444,8 @@ union trap_config {
> > >  		unsigned long	bit:6;		 /* Bit number */
> > >  		unsigned long	pol:1;		 /* Polarity */
> > >  		unsigned long	fgf:TC_FGF_BITS; /* Fine Grained Filter */
> > > -		unsigned long	unused:37;	 /* Unused, should be zero */
> > > +		unsigned long	msr:TC_MSR_BITS; /* Main SysReg index */
> > > +		unsigned long	unused:27;	 /* Unused, should be zero */
> > >  		unsigned long	mbz:1;		 /* Must Be Zero */
> > >  	};
> > >  };
> > > @@ -1862,6 +1865,38 @@ int __init populate_nv_trap_config(void)
> > >  	return ret;
> > >  }
> > >  
> > > +int __init populate_sysreg_config(const struct sys_reg_desc *sr,
> > > +				  unsigned int idx)
> > > +{
> > > +	union trap_config tc;
> > > +	u32 encoding;
> > > +	void *ret;
> > > +
> > > +	/*
> > > +	 * 0 is a valid value for the index, but not for the storage.
> > > +	 * We'll store (idx+1), so check against an offset'd limit.
> > > +	 */
> > > +	if (idx >= (BIT(TC_MSR_BITS) - 1)) {
> > > +		kvm_err("sysreg %s (%d) out of range\n", sr->name, idx);
> > > +		return -EINVAL;
> > > +	}
> > > +
> > > +	encoding = sys_reg(sr->Op0, sr->Op1, sr->CRn, sr->CRm, sr->Op2);
> > > +	tc = get_trap_config(encoding);
> > > +
> > > +	if (tc.msr) {
> > > +		kvm_err("sysreg %s (%d) duplicate entry (%d)\n",
> > > +			sr->name, idx - 1, tc.msr);
> > > +		return -EINVAL;
> > > +	}
> > > +
> > > +	tc.msr = idx + 1;
> > > +	ret = xa_store(&sr_forward_xa, encoding,
> > > +		       xa_mk_value(tc.val), GFP_KERNEL);
> > > +
> > > +	return xa_err(ret);
> > > +}
> > > +
> > >  static enum trap_behaviour get_behaviour(struct kvm_vcpu *vcpu,
> > >  					 const struct trap_bits *tb)
> > >  {
> > > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > > index 77cd818c23b0..65319193e443 100644
> > > --- a/arch/arm64/kvm/sys_regs.c
> > > +++ b/arch/arm64/kvm/sys_regs.c
> > > @@ -3974,6 +3974,7 @@ int __init kvm_sys_reg_table_init(void)
> > >  	struct sys_reg_params params;
> > >  	bool valid = true;
> > >  	unsigned int i;
> > > +	int ret = 0;
> > >  
> > >  	/* Make sure tables are unique and in order. */
> > >  	valid &= check_sysreg_table(sys_reg_descs, ARRAY_SIZE(sys_reg_descs), false);
> > > @@ -3997,5 +3998,13 @@ int __init kvm_sys_reg_table_init(void)
> > >  	if (!first_idreg)
> > >  		return -EINVAL;
> > >  
> > > -	return populate_nv_trap_config();
> > > +	ret = populate_nv_trap_config();
> > > +
> > > +	for (i = 0; !ret && i < ARRAY_SIZE(sys_reg_descs); i++)
> > > +		ret = populate_sysreg_config(sys_reg_descs + i, i);
> > > +
> > > +	for (i = 0; !ret && i < ARRAY_SIZE(sys_insn_descs); i++)
> > > +		ret = populate_sysreg_config(sys_insn_descs + i, i);
> > > +
> > > +	return ret;
> > >  }
> > 
> > The choice of `msr` was a tiny bit confusing due to the conflict with the asm
> > instruction `msr`, but not enough to warrant renaming.
> 
> No, that's actually a very good point. How about SRI (Sys Reg Index)?

LGTM

Thanks,
Joey

^ permalink raw reply	[flat|nested] 114+ messages in thread
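[Editorial note: a hedged sketch of the off-by-one index storage discussed above — 0 in the 10-bit field means "no index recorded", so a descriptor index is stored as idx + 1 and the usable range shrinks by one. Names (`trap_cfg`, `sri`, the helpers) are illustrative, not the kernel's:]

```c
#include <stdint.h>

#define TC_SRI_BITS	10	/* width of the stored-index field, as in the patch */

/* Illustrative miniature of the trap_config bitfield: index field only. */
struct trap_cfg {
	uint64_t sri : TC_SRI_BITS;	/* 0 = unset, otherwise table index + 1 */
};

/* Store a descriptor index; returns -1 when it would not fit or the slot
 * is already taken. Mirrors the bound check in populate_sysreg_config():
 * the largest storable value is (1 << TC_SRI_BITS) - 1, which encodes
 * idx + 1, hence the offset'd limit on idx itself. */
static int store_index(struct trap_cfg *tc, unsigned int idx)
{
	if (idx >= (1u << TC_SRI_BITS) - 1)
		return -1;		/* out of range */
	if (tc->sri)
		return -1;		/* duplicate entry */
	tc->sri = idx + 1;
	return 0;
}

/* Recover the index; returns -1 when nothing was stored. */
static int load_index(const struct trap_cfg *tc)
{
	return tc->sri ? (int)tc->sri - 1 : -1;
}
```

With 10 bits the sentinel costs one slot: indices 0..1021 round-trip, 1022 is the last storable index (encoded as 1023), and 1023 is rejected.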

* Re: [PATCH 19/25] KVM: arm64: Move existing feature disabling over to FGU infrastructure
  2024-01-22 20:18   ` Marc Zyngier
@ 2024-01-24 17:16     ` Joey Gouly
  -1 siblings, 0 replies; 114+ messages in thread
From: Joey Gouly @ 2024-01-24 17:16 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
	Mark Brown

Greetings,

On Mon, Jan 22, 2024 at 08:18:46PM +0000, Marc Zyngier wrote:
> We already trap a bunch of existing features for the purpose of
> disabling them (MAIR2, POR, ACCDATA, SME...).
> 
> Let's move them over to our brand new FGU infrastructure.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_host.h       |  4 ++++
>  arch/arm64/kvm/arm.c                    |  6 ++++++
>  arch/arm64/kvm/hyp/include/hyp/switch.h | 17 +++--------------
>  arch/arm64/kvm/sys_regs.c               | 23 +++++++++++++++++++++++
>  4 files changed, 36 insertions(+), 14 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 4e0ac507ca01..fe5ed4bcded0 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -297,6 +297,8 @@ struct kvm_arch {
>  #define KVM_ARCH_FLAG_TIMER_PPIS_IMMUTABLE		6
>  	/* Initial ID reg values loaded */
>  #define KVM_ARCH_FLAG_ID_REGS_INITIALIZED		7
> +	/* Fine-Grained UNDEF initialised */
> +#define KVM_ARCH_FLAG_FGU_INITIALIZED			8
>  	unsigned long flags;
>  
>  	/* VM-wide vCPU feature set */
> @@ -1112,6 +1114,8 @@ int __init populate_nv_trap_config(void);
>  bool lock_all_vcpus(struct kvm *kvm);
>  void unlock_all_vcpus(struct kvm *kvm);
>  
> +void kvm_init_sysreg(struct kvm_vcpu *);
> +
>  /* MMIO helpers */
>  void kvm_mmio_write_buf(void *buf, unsigned int len, unsigned long data);
>  unsigned long kvm_mmio_read_buf(const void *buf, unsigned int len);
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index c063e84fc72c..9f806c9b7d5d 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -675,6 +675,12 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
>  			return ret;
>  	}
>  
> +	/*
> +	 * This needs to happen after NV has imposed its own restrictions on
> +	 * the feature set
> +	 */
> +	kvm_init_sysreg(vcpu);
> +
>  	ret = kvm_timer_enable(vcpu);
>  	if (ret)
>  		return ret;
> diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> index a09149fd91ed..245f9c1ca666 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> @@ -157,7 +157,7 @@ static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
>  {
>  	struct kvm_cpu_context *hctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
>  	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
> -	u64 r_clr = 0, w_clr = 0, r_set = 0, w_set = 0, tmp;
> +	u64 r_clr = 0, w_clr = 0, r_set = 0, w_set = 0;
>  	u64 r_val, w_val;
>  
>  	CHECK_FGT_MASKS(HFGRTR_EL2);
> @@ -174,13 +174,6 @@ static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
>  	ctxt_sys_reg(hctxt, HFGRTR_EL2) = read_sysreg_s(SYS_HFGRTR_EL2);
>  	ctxt_sys_reg(hctxt, HFGWTR_EL2) = read_sysreg_s(SYS_HFGWTR_EL2);
>  
> -	if (cpus_have_final_cap(ARM64_SME)) {
> -		tmp = HFGxTR_EL2_nSMPRI_EL1_MASK | HFGxTR_EL2_nTPIDR2_EL0_MASK;
> -
> -		r_clr |= tmp;
> -		w_clr |= tmp;
> -	}
> -
>  	/*
>  	 * Trap guest writes to TCR_EL1 to prevent it from enabling HA or HD.
>  	 */
> @@ -195,15 +188,11 @@ static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
>  	compute_undef_clr_set(vcpu, kvm, HFGRTR_EL2, r_clr, r_set);
>  	compute_undef_clr_set(vcpu, kvm, HFGWTR_EL2, w_clr, w_set);
>  
> -	/* The default to trap everything not handled or supported in KVM. */
> -	tmp = HFGxTR_EL2_nAMAIR2_EL1 | HFGxTR_EL2_nMAIR2_EL1 | HFGxTR_EL2_nS2POR_EL1 |
> -	      HFGxTR_EL2_nPOR_EL1 | HFGxTR_EL2_nPOR_EL0 | HFGxTR_EL2_nACCDATA_EL1;
> -
> -	r_val = __HFGRTR_EL2_nMASK & ~tmp;
> +	r_val = __HFGRTR_EL2_nMASK;
>  	r_val |= r_set;
>  	r_val &= ~r_clr;
>  
> -	w_val = __HFGWTR_EL2_nMASK & ~tmp;
> +	w_val = __HFGWTR_EL2_nMASK;
>  	w_val |= w_set;
>  	w_val &= ~w_clr;
>  
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index c48bc2577162..a62efd8a2959 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -3943,6 +3943,29 @@ int kvm_vm_ioctl_get_reg_writable_masks(struct kvm *kvm, struct reg_mask_range *
>  	return 0;
>  }
>  
> +void kvm_init_sysreg(struct kvm_vcpu *vcpu)
> +{
> +	struct kvm *kvm = vcpu->kvm;
> +
> +	mutex_lock(&kvm->arch.config_lock);
> +
> +	if (test_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags))
> +		goto out;
> +
> +	kvm->arch.fgu[HFGxTR_GROUP] = (HFGxTR_EL2_nAMAIR2_EL1		|
> +				       HFGxTR_EL2_nMAIR2_EL1		|
> +				       HFGxTR_EL2_nS2POR_EL1		|
> +				       HFGxTR_EL2_nPOR_EL1		|
> +				       HFGxTR_EL2_nPOR_EL0		|
> +				       HFGxTR_EL2_nACCDATA_EL1		|
> +				       HFGxTR_EL2_nSMPRI_EL1_MASK	|
> +				       HFGxTR_EL2_nTPIDR2_EL0_MASK);
> +
> +	set_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags);
> +out:
> +	mutex_unlock(&kvm->arch.config_lock);
> +}
> +
>  int __init kvm_sys_reg_table_init(void)
>  {
>  	struct sys_reg_params params;

Reviewed-by: Joey Gouly <joey.gouly@arm.com>

Thanks,
Joey

^ permalink raw reply	[flat|nested] 114+ messages in thread

* Re: [PATCH 20/25] KVM: arm64: Streamline save/restore of HFG[RW]TR_EL2
  2024-01-22 20:18   ` Marc Zyngier
@ 2024-01-25 11:30     ` Joey Gouly
  -1 siblings, 0 replies; 114+ messages in thread
From: Joey Gouly @ 2024-01-25 11:30 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
	Mark Brown

On Mon, Jan 22, 2024 at 08:18:47PM +0000, Marc Zyngier wrote:
> The way we save/restore HFG[RW]TR_EL2 can now be simplified, and
> the Ampere erratum hack is the only thing that still stands out.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/hyp/include/hyp/switch.h | 42 ++++++-------------------
>  1 file changed, 9 insertions(+), 33 deletions(-)
> 
> diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> index 245f9c1ca666..2d5891518006 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> @@ -157,8 +157,6 @@ static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
>  {
>  	struct kvm_cpu_context *hctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
>  	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
> -	u64 r_clr = 0, w_clr = 0, r_set = 0, w_set = 0;
> -	u64 r_val, w_val;
>  
>  	CHECK_FGT_MASKS(HFGRTR_EL2);
>  	CHECK_FGT_MASKS(HFGWTR_EL2);
> @@ -171,34 +169,10 @@ static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
>  	if (!cpus_have_final_cap(ARM64_HAS_FGT))
>  		return;
>  
> -	ctxt_sys_reg(hctxt, HFGRTR_EL2) = read_sysreg_s(SYS_HFGRTR_EL2);
> -	ctxt_sys_reg(hctxt, HFGWTR_EL2) = read_sysreg_s(SYS_HFGWTR_EL2);
> -
> -	/*
> -	 * Trap guest writes to TCR_EL1 to prevent it from enabling HA or HD.
> -	 */
> -	if (cpus_have_final_cap(ARM64_WORKAROUND_AMPERE_AC03_CPU_38))
> -		w_set |= HFGxTR_EL2_TCR_EL1_MASK;
> -
> -	if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)) {
> -		compute_clr_set(vcpu, HFGRTR_EL2, r_clr, r_set);
> -		compute_clr_set(vcpu, HFGWTR_EL2, w_clr, w_set);
> -	}
> -
> -	compute_undef_clr_set(vcpu, kvm, HFGRTR_EL2, r_clr, r_set);
> -	compute_undef_clr_set(vcpu, kvm, HFGWTR_EL2, w_clr, w_set);
> -
> -	r_val = __HFGRTR_EL2_nMASK;
> -	r_val |= r_set;
> -	r_val &= ~r_clr;
> -
> -	w_val = __HFGWTR_EL2_nMASK;
> -	w_val |= w_set;
> -	w_val &= ~w_clr;
> -
> -	write_sysreg_s(r_val, SYS_HFGRTR_EL2);
> -	write_sysreg_s(w_val, SYS_HFGWTR_EL2);
> -
> +	update_fgt_traps(hctxt, vcpu, kvm, HFGRTR_EL2);
> +	update_fgt_traps_cs(hctxt, vcpu, kvm, HFGWTR_EL2, 0,
> +			    cpus_have_final_cap(ARM64_WORKAROUND_AMPERE_AC03_CPU_38) ?
> +			    HFGxTR_EL2_TCR_EL1_MASK : 0);
>  	update_fgt_traps(hctxt, vcpu, kvm, HFGITR_EL2);
>  	update_fgt_traps(hctxt, vcpu, kvm, HDFGRTR_EL2);
>  	update_fgt_traps(hctxt, vcpu, kvm, HDFGWTR_EL2);
> @@ -223,9 +197,11 @@ static inline void __deactivate_traps_hfgxtr(struct kvm_vcpu *vcpu)
>  	if (!cpus_have_final_cap(ARM64_HAS_FGT))
>  		return;
>  
> -	write_sysreg_s(ctxt_sys_reg(hctxt, HFGRTR_EL2), SYS_HFGRTR_EL2);
> -	write_sysreg_s(ctxt_sys_reg(hctxt, HFGWTR_EL2), SYS_HFGWTR_EL2);
> -
> +	__deactivate_fgt(hctxt, vcpu, kvm, HFGRTR_EL2);
> +	if (cpus_have_final_cap(ARM64_WORKAROUND_AMPERE_AC03_CPU_38))
> +		write_sysreg_s(ctxt_sys_reg(hctxt, HFGWTR_EL2), SYS_HFGWTR_EL2);
> +	else
> +		__deactivate_fgt(hctxt, vcpu, kvm, HFGWTR_EL2);
>  	__deactivate_fgt(hctxt, vcpu, kvm, HFGITR_EL2);
>  	__deactivate_fgt(hctxt, vcpu, kvm, HDFGRTR_EL2);
>  	__deactivate_fgt(hctxt, vcpu, kvm, HDFGWTR_EL2);

Reviewed-by: Joey Gouly <joey.gouly@arm.com>

Thanks,
Joey

^ permalink raw reply	[flat|nested] 114+ messages in thread

* Re: [PATCH 21/25] KVM: arm64: Make TLBI OS/Range UNDEF if not advertised to the guest
  2024-01-22 20:18   ` Marc Zyngier
@ 2024-01-25 13:30     ` Joey Gouly
  -1 siblings, 0 replies; 114+ messages in thread
From: Joey Gouly @ 2024-01-25 13:30 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
	Mark Brown

Question,

On Mon, Jan 22, 2024 at 08:18:48PM +0000, Marc Zyngier wrote:
> Outer Shareable and Range TLBI instructions shouldn't be made available
> to the guest if they are not advertised. Use FGU to disable those,
> and set HCR_EL2.TTLBOS when the host doesn't have FGT.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/sys_regs.c | 29 +++++++++++++++++++++++++++++
>  1 file changed, 29 insertions(+)
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index a62efd8a2959..3c939ea4a28f 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -3949,6 +3949,9 @@ void kvm_init_sysreg(struct kvm_vcpu *vcpu)
>  
>  	mutex_lock(&kvm->arch.config_lock);
>  
> +	if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, OS))
> +		vcpu->arch.hcr_el2 |= HCR_TTLBOS;
> +
>  	if (test_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags))
>  		goto out;
>  
> @@ -3961,6 +3964,32 @@ void kvm_init_sysreg(struct kvm_vcpu *vcpu)
>  				       HFGxTR_EL2_nSMPRI_EL1_MASK	|
>  				       HFGxTR_EL2_nTPIDR2_EL0_MASK);
>  
> +	if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, OS))
> +		kvm->arch.fgu[HFGITR_GROUP] |= (HFGITR_EL2_TLBIRVAALE1OS|
> +						HFGITR_EL2_TLBIRVALE1OS	|
> +						HFGITR_EL2_TLBIRVAAE1OS	|
> +						HFGITR_EL2_TLBIRVAE1OS	|
> +						HFGITR_EL2_TLBIVAALE1OS	|
> +						HFGITR_EL2_TLBIVALE1OS	|
> +						HFGITR_EL2_TLBIVAAE1OS	|
> +						HFGITR_EL2_TLBIASIDE1OS	|
> +						HFGITR_EL2_TLBIVAE1OS	|
> +						HFGITR_EL2_TLBIVMALLE1OS);
> +
> +	if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, RANGE))
> +		kvm->arch.fgu[HFGITR_GROUP] |= (HFGITR_EL2_TLBIRVAALE1	|
> +						HFGITR_EL2_TLBIRVALE1	|
> +						HFGITR_EL2_TLBIRVAAE1	|
> +						HFGITR_EL2_TLBIRVAE1	|
> +						HFGITR_EL2_TLBIRVAALE1IS|
> +						HFGITR_EL2_TLBIRVALE1IS	|
> +						HFGITR_EL2_TLBIRVAAE1IS	|
> +						HFGITR_EL2_TLBIRVAE1IS	|
> +						HFGITR_EL2_TLBIRVAALE1OS|
> +						HFGITR_EL2_TLBIRVALE1OS	|
> +						HFGITR_EL2_TLBIRVAAE1OS	|
> +						HFGITR_EL2_TLBIRVAE1OS);
> +
>  	set_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags);
>  out:
>  	mutex_unlock(&kvm->arch.config_lock);

I think I'm right in saying...

If the VM is running on a platform with ID_AA64ISAR0_EL1.TLB=0b010 (Outer
Shareable and TLB range maintenance instructions are implemented.) but without
support for FEAT_FGT, and the VMM sets the ID reg to ID_AA64ISAR0_EL1.TLB=0,
this change will trap the TLBI *OS instructions but not the following: 

    TLBI RVAAE1
    TLBI RVAAE1IS
    TLBI RVAALE1
    TLBI RVAALE1IS
    TLBI RVAE1
    TLBI RVAE1IS
    TLBI RVALE1
    TLBI RVALE1IS

These TLB range instructions only trap with HCR_EL2.TTLB; however, that traps
all TLBI instructions. You may have left this out intentionally; if so, can you
add something to the commit message?

Thanks,
Joey

^ permalink raw reply	[flat|nested] 114+ messages in thread

* Re: [PATCH 23/25] KVM: arm64: Make AMU sysreg UNDEF if FEAT_AMU is not advertised to the guest
  2024-01-22 20:18   ` Marc Zyngier
@ 2024-01-25 13:42     ` Joey Gouly
  -1 siblings, 0 replies; 114+ messages in thread
From: Joey Gouly @ 2024-01-25 13:42 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
	Mark Brown

On Mon, Jan 22, 2024 at 08:18:50PM +0000, Marc Zyngier wrote:
> No AMU? No AMU! If we see an AMU-related trap, let's turn it into
> an UNDEF!
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/sys_regs.c | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index bcde43b81755..afe6975fcf5c 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -3994,6 +3994,10 @@ void kvm_init_sysreg(struct kvm_vcpu *vcpu)
>  		kvm->arch.fgu[HFGxTR_GROUP] |= (HFGxTR_EL2_nPIRE0_EL1 |
>  						HFGxTR_EL2_nPIR_EL1);
>  
> +	if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, AMU, IMP))
> +		kvm->arch.fgu[HAFGRTR_GROUP] |= ~(HAFGRTR_EL2_RES0 |
> +						  HAFGRTR_EL2_RES1);
> +
>  	set_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags);
>  out:
>  	mutex_unlock(&kvm->arch.config_lock);

Reviewed-by: Joey Gouly <joey.gouly@arm.com>

Thanks,
Joey

^ permalink raw reply	[flat|nested] 114+ messages in thread

* Re: [PATCH 24/25] KVM: arm64: Make FEAT_MOPS UNDEF if not advertised to the guest
  2024-01-22 20:18   ` Marc Zyngier
@ 2024-01-25 16:25     ` Joey Gouly
  -1 siblings, 0 replies; 114+ messages in thread
From: Joey Gouly @ 2024-01-25 16:25 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
	Mark Brown

On Mon, Jan 22, 2024 at 08:18:51PM +0000, Marc Zyngier wrote:
> We unconditionally enable FEAT_MOPS, which is obviously wrong.
> 
> So let's only do that when it is advertised to the guest.
> Which means we need to rely on a per-vcpu HCRX_EL2 shadow register.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_arm.h        | 4 +---
>  arch/arm64/include/asm/kvm_host.h       | 1 +
>  arch/arm64/kvm/hyp/include/hyp/switch.h | 2 +-
>  arch/arm64/kvm/sys_regs.c               | 8 ++++++++
>  4 files changed, 11 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
> index 3c6f8ba1e479..a1769e415d72 100644
> --- a/arch/arm64/include/asm/kvm_arm.h
> +++ b/arch/arm64/include/asm/kvm_arm.h
> @@ -102,9 +102,7 @@
>  #define HCR_HOST_NVHE_PROTECTED_FLAGS (HCR_HOST_NVHE_FLAGS | HCR_TSC)
>  #define HCR_HOST_VHE_FLAGS (HCR_RW | HCR_TGE | HCR_E2H)
>  
> -#define HCRX_GUEST_FLAGS \
> -	(HCRX_EL2_SMPME | HCRX_EL2_TCR2En | \
> -	 (cpus_have_final_cap(ARM64_HAS_MOPS) ? (HCRX_EL2_MSCEn | HCRX_EL2_MCE2) : 0))
> +#define HCRX_GUEST_FLAGS (HCRX_EL2_SMPME | HCRX_EL2_TCR2En)
>  #define HCRX_HOST_FLAGS (HCRX_EL2_MSCEn | HCRX_EL2_TCR2En)
>  
>  /* TCR_EL2 Registers bits */
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index fe5ed4bcded0..22343354db3e 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -584,6 +584,7 @@ struct kvm_vcpu_arch {
>  
>  	/* Values of trap registers for the guest. */
>  	u64 hcr_el2;
> +	u64 hcrx_el2;
>  	u64 mdcr_el2;
>  	u64 cptr_el2;
>  
> diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> index 2d5891518006..e3fcf8c4d5b4 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> @@ -236,7 +236,7 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
>  	write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
>  
>  	if (cpus_have_final_cap(ARM64_HAS_HCX)) {
> -		u64 hcrx = HCRX_GUEST_FLAGS;
> +		u64 hcrx = vcpu->arch.hcrx_el2;
>  		if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)) {
>  			u64 clr = 0, set = 0;
>  
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index afe6975fcf5c..b7977e08e4ef 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -3952,6 +3952,14 @@ void kvm_init_sysreg(struct kvm_vcpu *vcpu)
>  	if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, OS))
>  		vcpu->arch.hcr_el2 |= HCR_TTLBOS;
>  
> +	if (cpus_have_final_cap(ARM64_HAS_HCX)) {
> +		vcpu->arch.hcrx_el2 = HCRX_GUEST_FLAGS;
> +
> +		if (kvm_has_feat(kern_hyp_va(vcpu->kvm),

Not sure if the use of kern_hyp_va is intentional; it seems out of place since we
use the bare `kvm` variable everywhere else.

> +				 ID_AA64ISAR2_EL1, MOPS, IMP))
> +			vcpu->arch.hcrx_el2 |= (HCRX_EL2_MSCEn | HCRX_EL2_MCE2);
> +	}
> +
>  	if (test_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags))
>  		goto out;
>  

Thanks,
Joey

^ permalink raw reply	[flat|nested] 114+ messages in thread

* Re: [PATCH 24/25] KVM: arm64: Make FEAT_MOPS UNDEF if not advertised to the guest
  2024-01-25 16:25     ` Joey Gouly
@ 2024-01-25 17:35       ` Joey Gouly
  -1 siblings, 0 replies; 114+ messages in thread
From: Joey Gouly @ 2024-01-25 17:35 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
	Mark Brown

It's me again!

On Thu, Jan 25, 2024 at 04:25:38PM +0000, Joey Gouly wrote:
> On Mon, Jan 22, 2024 at 08:18:51PM +0000, Marc Zyngier wrote:
> > We unconditionally enable FEAT_MOPS, which is obviously wrong.
> > 
> > So let's only do that when it is advertised to the guest.
> > Which means we need to rely on a per-vcpu HCRX_EL2 shadow register.
> > 
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/arm64/include/asm/kvm_arm.h        | 4 +---
> >  arch/arm64/include/asm/kvm_host.h       | 1 +
> >  arch/arm64/kvm/hyp/include/hyp/switch.h | 2 +-
> >  arch/arm64/kvm/sys_regs.c               | 8 ++++++++
> >  4 files changed, 11 insertions(+), 4 deletions(-)
> > 
> > diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
> > index 3c6f8ba1e479..a1769e415d72 100644
> > --- a/arch/arm64/include/asm/kvm_arm.h
> > +++ b/arch/arm64/include/asm/kvm_arm.h
> > @@ -102,9 +102,7 @@
> >  #define HCR_HOST_NVHE_PROTECTED_FLAGS (HCR_HOST_NVHE_FLAGS | HCR_TSC)
> >  #define HCR_HOST_VHE_FLAGS (HCR_RW | HCR_TGE | HCR_E2H)
> >  
> > -#define HCRX_GUEST_FLAGS \
> > -	(HCRX_EL2_SMPME | HCRX_EL2_TCR2En | \
> > -	 (cpus_have_final_cap(ARM64_HAS_MOPS) ? (HCRX_EL2_MSCEn | HCRX_EL2_MCE2) : 0))
> > +#define HCRX_GUEST_FLAGS (HCRX_EL2_SMPME | HCRX_EL2_TCR2En)
> >  #define HCRX_HOST_FLAGS (HCRX_EL2_MSCEn | HCRX_EL2_TCR2En)
> >  
> >  /* TCR_EL2 Registers bits */
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index fe5ed4bcded0..22343354db3e 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -584,6 +584,7 @@ struct kvm_vcpu_arch {
> >  
> >  	/* Values of trap registers for the guest. */
> >  	u64 hcr_el2;
> > +	u64 hcrx_el2;
> >  	u64 mdcr_el2;
> >  	u64 cptr_el2;
> >  
> > diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> > index 2d5891518006..e3fcf8c4d5b4 100644
> > --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> > +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> > @@ -236,7 +236,7 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
> >  	write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
> >  
> >  	if (cpus_have_final_cap(ARM64_HAS_HCX)) {
> > -		u64 hcrx = HCRX_GUEST_FLAGS;
> > +		u64 hcrx = vcpu->arch.hcrx_el2;
> >  		if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)) {
> >  			u64 clr = 0, set = 0;
> >  
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index afe6975fcf5c..b7977e08e4ef 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -3952,6 +3952,14 @@ void kvm_init_sysreg(struct kvm_vcpu *vcpu)
> >  	if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, OS))
> >  		vcpu->arch.hcr_el2 |= HCR_TTLBOS;
> >  
> > +	if (cpus_have_final_cap(ARM64_HAS_HCX)) {
> > +		vcpu->arch.hcrx_el2 = HCRX_GUEST_FLAGS;
> > +
> > +		if (kvm_has_feat(kern_hyp_va(vcpu->kvm),
> 
> Not sure if the use of kern_hyp_va is intentional, seems out of place since we
> use the bare `kvm` variable everywhere else.

My conclusion is that it's a mistake (kvm-arm.mode=nvhe):

[ 2707.523935] Unable to handle kernel paging request at virtual address 0000d34801b3fdb0
[ 2707.523945] Mem abort info:
[ 2707.523951]   ESR = 0x0000000096000004
[ 2707.523957]   EC = 0x25: DABT (current EL), IL = 32 bits
[ 2707.523966]   SET = 0, FnV = 0
[ 2707.523973]   EA = 0, S1PTW = 0
[ 2707.523980]   FSC = 0x04: level 0 translation fault
[ 2707.523988] Data abort info:
[ 2707.523993]   ISV = 0, ISS = 0x00000004, ISS2 = 0x00000000
[ 2707.524001]   CM = 0, WnR = 0, TnD = 0, TagAccess = 0
[ 2707.524010]   GCS = 0, Overlay = 0, DirtyBit = 0, Xs = 0
[ 2707.524019] user pgtable: 4k pages, 48-bit VAs, pgdp=000000088341d000
[ 2707.524029] [0000d34801b3fdb0] pgd=0000000000000000, p4d=0000000000000000
[ 2707.524043] Internal error: Oops: 0000000096000004 [#1] PREEMPT SMP
[ 2707.524053] Modules linked in:
[ 2707.524060] CPU: 0 PID: 95 Comm: kvm-vcpu-0 Tainted: G                T  6.8.0-rc1-asahi+ #4542 a70fa90dc88a9bc3f39943d7335081d8cc583f45
[ 2707.524076] Hardware name: FVP Base RevC (DT)
[ 2707.524083] pstate: 141402005 (nZcv daif +PAN -UAO -TCO +DIT -SSBS BTYPE=--)
[ 2707.524096] pc : kvm_init_sysreg+0x100/0x398
[ 2707.524106] lr : kvm_init_sysreg+0xd8/0x398
[ 2707.524116] sp : ffff8000808e3a60
[ 2707.524123] x29: ffff8000808e3a60 x28: ffff000802f0d640 x27: 0000000000000001
[ 2707.524141] x26: 0000000000000000 x25: ffff000802f30000 x24: 0000000000000000
[ 2707.524159] x23: ffff000800189100 x22: ffff000802f0d640 x21: ffff000801b3fbc8
[ 2707.524177] x20: ffff000802f30000 x19: ffff000801b3f000 x18: ffffffffffffffff
[ 2707.524195] x17: 0000000000000000 x16: 0000000000000000 x15: ffff8001008e36e7
[ 2707.524213] x14: 0000000000000000 x13: ffffd7c64c841608 x12: 0000000000000537
[ 2707.524230] x11: 00000000000001bd x10: ffffd7c64c899608 x9 : ffffd7c64c841608
[ 2707.524248] x8 : 00000000ffffefff x7 : ffffd7c64c899608 x6 : 0000000000000000
[ 2707.524266] x5 : 000000000000bff4 x4 : 0000000000000000 x3 : 0000000000000000
[ 2707.524283] x2 : ffff000802f0d640 x1 : 0000000000004020 x0 : 0000d34801b3f000
[ 2707.524301] Call trace:
[ 2707.524306]  kvm_init_sysreg+0x100/0x398
[ 2707.524316]  kvm_arch_vcpu_run_pid_change+0xe8/0x3f4
[ 2707.524330]  kvm_vcpu_ioctl+0x878/0x944
[ 2707.524341]  __arm64_sys_ioctl+0x404/0xc68

Thanks,
Joey

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 114+ messages in thread

* Re: [PATCH 24/25] KVM: arm64: Make FEAT_MOPS UNDEF if not advertised to the guest
  2024-01-25 16:25     ` Joey Gouly
@ 2024-01-26  9:17       ` Marc Zyngier
  -1 siblings, 0 replies; 114+ messages in thread
From: Marc Zyngier @ 2024-01-26  9:17 UTC (permalink / raw)
  To: Joey Gouly
  Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
	Mark Brown

On Thu, 25 Jan 2024 16:25:38 +0000,
Joey Gouly <joey.gouly@arm.com> wrote:
> 
> On Mon, Jan 22, 2024 at 08:18:51PM +0000, Marc Zyngier wrote:
> > We unconditionally enable FEAT_MOPS, which is obviously wrong.
> > 
> > So let's only do that when it is advertised to the guest.
> > Which means we need to rely on a per-vcpu HCRX_EL2 shadow register.
> > 
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/arm64/include/asm/kvm_arm.h        | 4 +---
> >  arch/arm64/include/asm/kvm_host.h       | 1 +
> >  arch/arm64/kvm/hyp/include/hyp/switch.h | 2 +-
> >  arch/arm64/kvm/sys_regs.c               | 8 ++++++++
> >  4 files changed, 11 insertions(+), 4 deletions(-)
> > 
> > diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
> > index 3c6f8ba1e479..a1769e415d72 100644
> > --- a/arch/arm64/include/asm/kvm_arm.h
> > +++ b/arch/arm64/include/asm/kvm_arm.h
> > @@ -102,9 +102,7 @@
> >  #define HCR_HOST_NVHE_PROTECTED_FLAGS (HCR_HOST_NVHE_FLAGS | HCR_TSC)
> >  #define HCR_HOST_VHE_FLAGS (HCR_RW | HCR_TGE | HCR_E2H)
> >  
> > -#define HCRX_GUEST_FLAGS \
> > -	(HCRX_EL2_SMPME | HCRX_EL2_TCR2En | \
> > -	 (cpus_have_final_cap(ARM64_HAS_MOPS) ? (HCRX_EL2_MSCEn | HCRX_EL2_MCE2) : 0))
> > +#define HCRX_GUEST_FLAGS (HCRX_EL2_SMPME | HCRX_EL2_TCR2En)
> >  #define HCRX_HOST_FLAGS (HCRX_EL2_MSCEn | HCRX_EL2_TCR2En)
> >  
> >  /* TCR_EL2 Registers bits */
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index fe5ed4bcded0..22343354db3e 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -584,6 +584,7 @@ struct kvm_vcpu_arch {
> >  
> >  	/* Values of trap registers for the guest. */
> >  	u64 hcr_el2;
> > +	u64 hcrx_el2;
> >  	u64 mdcr_el2;
> >  	u64 cptr_el2;
> >  
> > diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> > index 2d5891518006..e3fcf8c4d5b4 100644
> > --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> > +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> > @@ -236,7 +236,7 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
> >  	write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
> >  
> >  	if (cpus_have_final_cap(ARM64_HAS_HCX)) {
> > -		u64 hcrx = HCRX_GUEST_FLAGS;
> > +		u64 hcrx = vcpu->arch.hcrx_el2;
> >  		if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)) {
> >  			u64 clr = 0, set = 0;
> >  
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index afe6975fcf5c..b7977e08e4ef 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -3952,6 +3952,14 @@ void kvm_init_sysreg(struct kvm_vcpu *vcpu)
> >  	if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, OS))
> >  		vcpu->arch.hcr_el2 |= HCR_TTLBOS;
> >  
> > +	if (cpus_have_final_cap(ARM64_HAS_HCX)) {
> > +		vcpu->arch.hcrx_el2 = HCRX_GUEST_FLAGS;
> > +
> > +		if (kvm_has_feat(kern_hyp_va(vcpu->kvm),
> 
> Not sure if the use of kern_hyp_va is intentional, seems out of place since we
> use the bare `kvm` variable everywhere else.

That's totally wrong. No idea where that came from...

Thanks for spotting it!

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 114+ messages in thread

* Re: [PATCH 02/25] KVM: arm64: Add feature checking helpers
  2024-01-22 20:18   ` Marc Zyngier
@ 2024-01-26 19:05     ` Oliver Upton
  -1 siblings, 0 replies; 114+ messages in thread
From: Oliver Upton @ 2024-01-26 19:05 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Zenghui Yu, Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown

On Mon, Jan 22, 2024 at 08:18:29PM +0000, Marc Zyngier wrote:
> In order to make it easier to check whether a particular feature
> is exposed to a guest, add a new set of helpers, with kvm_has_feat()
> being the most useful.
> 
> Follow-up work will make heavy use of these.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>

I very much like the way these helpers appear to work. However, I
noticed there are still a few places where we are doing explicit feature
checks against register values instead of using the macros; did you want
to address these?

Using kvm_has_feat() consistently in KVM will hopefully drive the point
home that this is the way we want to see things done going forward.

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 3d9467ff73bc..925522470b2b 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -64,12 +64,11 @@ u64 kvm_pmu_evtyper_mask(struct kvm *kvm)
 {
 	u64 mask = ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMU_EXCLUDE_EL0 |
 		   kvm_pmu_event_mask(kvm);
-	u64 pfr0 = IDREG(kvm, SYS_ID_AA64PFR0_EL1);
 
-	if (SYS_FIELD_GET(ID_AA64PFR0_EL1, EL2, pfr0))
+	if (kvm_has_feat(kvm, ID_AA64PFR0_EL1, EL2, IMP))
 		mask |= ARMV8_PMU_INCLUDE_EL2;
 
-	if (SYS_FIELD_GET(ID_AA64PFR0_EL1, EL3, pfr0))
+	if (kvm_has_feat(kvm, ID_AA64PFR0_EL1, EL3, IMP))
 		mask |= ARMV8_PMU_EXCLUDE_NS_EL0 |
 			ARMV8_PMU_EXCLUDE_NS_EL1 |
 			ARMV8_PMU_EXCLUDE_EL3;
@@ -83,8 +82,10 @@ u64 kvm_pmu_evtyper_mask(struct kvm *kvm)
  */
 static bool kvm_pmc_is_64bit(struct kvm_pmc *pmc)
 {
+	struct kvm_vcpu *vcpu = kvm_pmc_to_vcpu(pmc);
+
 	return (pmc->idx == ARMV8_PMU_CYCLE_IDX ||
-		kvm_pmu_is_3p5(kvm_pmc_to_vcpu(pmc)));
+		kvm_has_feat(vcpu->kvm, ID_AA64DFR0_EL1, PMUVer, V3P5));
 }
 
 static bool kvm_pmc_has_64bit_overflow(struct kvm_pmc *pmc)
@@ -556,7 +557,7 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
 		return;
 
 	/* Fixup PMCR_EL0 to reconcile the PMU version and the LP bit */
-	if (!kvm_pmu_is_3p5(vcpu))
+	if (!kvm_has_feat(vcpu->kvm, ID_AA64DFR0_EL1, PMUVer, V3P5))
 		val &= ~ARMV8_PMU_PMCR_LP;
 
 	/* The reset bits don't indicate any state, and shouldn't be saved. */
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 30253bd19917..955eb06f821d 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -505,10 +505,9 @@ static bool trap_loregion(struct kvm_vcpu *vcpu,
 			  struct sys_reg_params *p,
 			  const struct sys_reg_desc *r)
 {
-	u64 val = IDREG(vcpu->kvm, SYS_ID_AA64MMFR1_EL1);
 	u32 sr = reg_to_encoding(r);
 
-	if (!(val & (0xfUL << ID_AA64MMFR1_EL1_LO_SHIFT))) {
+	if (!kvm_has_feat(vcpu->kvm, ID_AA64MMFR1_EL1, LO, IMP)) {
 		kvm_inject_undefined(vcpu);
 		return false;
 	}
@@ -2737,8 +2736,7 @@ static bool trap_dbgdidr(struct kvm_vcpu *vcpu,
 		return ignore_write(vcpu, p);
 	} else {
 		u64 dfr = IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1);
-		u64 pfr = IDREG(vcpu->kvm, SYS_ID_AA64PFR0_EL1);
-		u32 el3 = !!SYS_FIELD_GET(ID_AA64PFR0_EL1, EL3, pfr);
+		u32 el3 = kvm_has_feat(vcpu->kvm, ID_AA64PFR0_EL1, EL3, IMP);
 
 		p->regval = ((SYS_FIELD_GET(ID_AA64DFR0_EL1, WRPs, dfr) << 28) |
 			     (SYS_FIELD_GET(ID_AA64DFR0_EL1, BRPs, dfr) << 24) |
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 4b9d8fb393a8..eb4c369a79eb 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -90,16 +90,6 @@ void kvm_vcpu_pmu_resync_el0(void);
 			vcpu->arch.pmu.events = *kvm_get_pmu_events();	\
 	} while (0)
 
-/*
- * Evaluates as true when emulating PMUv3p5, and false otherwise.
- */
-#define kvm_pmu_is_3p5(vcpu) ({						\
-	u64 val = IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1);		\
-	u8 pmuver = SYS_FIELD_GET(ID_AA64DFR0_EL1, PMUVer, val);	\
-									\
-	pmuver >= ID_AA64DFR0_EL1_PMUVer_V3P5;				\
-})
-
 u8 kvm_arm_pmu_get_pmuver_limit(void);
 u64 kvm_pmu_evtyper_mask(struct kvm *kvm);
 int kvm_arm_set_default_pmu(struct kvm *kvm);
@@ -168,7 +158,6 @@ static inline u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1)
 }
 
 #define kvm_vcpu_has_pmu(vcpu)		({ false; })
-#define kvm_pmu_is_3p5(vcpu)		({ false; })
 static inline void kvm_pmu_update_vcpu_events(struct kvm_vcpu *vcpu) {}
 static inline void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu) {}
 static inline void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu) {}

-- 
Thanks,
Oliver

^ permalink raw reply related	[flat|nested] 114+ messages in thread

* Re: [PATCH 02/25] KVM: arm64: Add feature checking helpers
  2024-01-26 19:05     ` Oliver Upton
@ 2024-01-30 12:12       ` Marc Zyngier
  -1 siblings, 0 replies; 114+ messages in thread
From: Marc Zyngier @ 2024-01-30 12:12 UTC (permalink / raw)
  To: Oliver Upton
  Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Zenghui Yu, Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown

On Fri, 26 Jan 2024 19:05:47 +0000,
Oliver Upton <oliver.upton@linux.dev> wrote:
> 
> On Mon, Jan 22, 2024 at 08:18:29PM +0000, Marc Zyngier wrote:
> > In order to make it easier to check whether a particular feature
> > is exposed to a guest, add a new set of helpers, with kvm_has_feat()
> > being the most useful.
> > 
> > Follow-up work will make heavy use of these.
> > 
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> 
> I very much like the way these helpers appear to work. However, I
> noticed there are still a few places where we are doing explicit feature
> checks against register values instead of using the macros, did you want
> to address these?

Eventually, yes. It is just that there is a lot to do and I wanted to
focus on the VM runtime configuration.

> 
> Using kvm_has_feat() consistently in KVM will hopefully drive the point
> home that this is the way we want to see things done going forward.

Absolutely. I'll add these to the series.

Thanks!

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 114+ messages in thread

* Re: [PATCH 16/25] KVM: arm64: Rename __check_nv_sr_forward() to triage_sysreg_trap()
  2024-01-24 16:57     ` Joey Gouly
@ 2024-01-30 12:43       ` Marc Zyngier
  -1 siblings, 0 replies; 114+ messages in thread
From: Marc Zyngier @ 2024-01-30 12:43 UTC (permalink / raw)
  To: Joey Gouly
  Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
	Mark Brown

On Wed, 24 Jan 2024 16:57:20 +0000,
Joey Gouly <joey.gouly@arm.com> wrote:
> 
> > +bool triage_sysreg_trap(struct kvm_vcpu *vcpu, int *sr_index);
> > +
> >  #define AA32(_x)	.aarch32_map = AA32_##_x
> >  #define Op0(_x) 	.Op0 = _x
> >  #define Op1(_x) 	.Op1 = _x
> 
> It's strange having triage_sysreg_trap() in emulate-nested.c, but moving that
> would be churn for little benefit. Maybe once NV is all in.

My plan is to rename this to something like 'exception-routing.c',
because that's what it is all about. Thoughts?

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 114+ messages in thread

* Re: [PATCH 21/25] KVM: arm64: Make TLBI OS/Range UNDEF if not advertised to the guest
  2024-01-25 13:30     ` Joey Gouly
@ 2024-01-30 12:46       ` Marc Zyngier
  -1 siblings, 0 replies; 114+ messages in thread
From: Marc Zyngier @ 2024-01-30 12:46 UTC (permalink / raw)
  To: Joey Gouly
  Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
	Mark Brown

On Thu, 25 Jan 2024 13:30:47 +0000,
Joey Gouly <joey.gouly@arm.com> wrote:
> 
> Question,
> 
> On Mon, Jan 22, 2024 at 08:18:48PM +0000, Marc Zyngier wrote:
> > Outer Shareable and Range TLBI instructions shouldn't be made available
> > to the guest if they are not advertised. Use FGU to disable those,
> > and set HCR_EL2.TTLBOS in case the host doesn't have FGT.
> > 
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/arm64/kvm/sys_regs.c | 29 +++++++++++++++++++++++++++++
> >  1 file changed, 29 insertions(+)
> > 
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index a62efd8a2959..3c939ea4a28f 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -3949,6 +3949,9 @@ void kvm_init_sysreg(struct kvm_vcpu *vcpu)
> >  
> >  	mutex_lock(&kvm->arch.config_lock);
> >  
> > +	if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, OS))
> > +		vcpu->arch.hcr_el2 |= HCR_TTLBOS;
> > +
> >  	if (test_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags))
> >  		goto out;
> >  
> > @@ -3961,6 +3964,32 @@ void kvm_init_sysreg(struct kvm_vcpu *vcpu)
> >  				       HFGxTR_EL2_nSMPRI_EL1_MASK	|
> >  				       HFGxTR_EL2_nTPIDR2_EL0_MASK);
> >  
> > +	if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, OS))
> > +		kvm->arch.fgu[HFGITR_GROUP] |= (HFGITR_EL2_TLBIRVAALE1OS|
> > +						HFGITR_EL2_TLBIRVALE1OS	|
> > +						HFGITR_EL2_TLBIRVAAE1OS	|
> > +						HFGITR_EL2_TLBIRVAE1OS	|
> > +						HFGITR_EL2_TLBIVAALE1OS	|
> > +						HFGITR_EL2_TLBIVALE1OS	|
> > +						HFGITR_EL2_TLBIVAAE1OS	|
> > +						HFGITR_EL2_TLBIASIDE1OS	|
> > +						HFGITR_EL2_TLBIVAE1OS	|
> > +						HFGITR_EL2_TLBIVMALLE1OS);
> > +
> > +	if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, RANGE))
> > +		kvm->arch.fgu[HFGITR_GROUP] |= (HFGITR_EL2_TLBIRVAALE1	|
> > +						HFGITR_EL2_TLBIRVALE1	|
> > +						HFGITR_EL2_TLBIRVAAE1	|
> > +						HFGITR_EL2_TLBIRVAE1	|
> > +						HFGITR_EL2_TLBIRVAALE1IS|
> > +						HFGITR_EL2_TLBIRVALE1IS	|
> > +						HFGITR_EL2_TLBIRVAAE1IS	|
> > +						HFGITR_EL2_TLBIRVAE1IS	|
> > +						HFGITR_EL2_TLBIRVAALE1OS|
> > +						HFGITR_EL2_TLBIRVALE1OS	|
> > +						HFGITR_EL2_TLBIRVAAE1OS	|
> > +						HFGITR_EL2_TLBIRVAE1OS);
> > +
> >  	set_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags);
> >  out:
> >  	mutex_unlock(&kvm->arch.config_lock);
> 
> I think I'm right in saying..
> 
> If the VM is running on a platform with ID_AA64ISAR0_EL1.TLB=0b010 (Outer
> Shareable and TLB range maintenance instructions are implemented.) but without
> support for FEAT_FGT, and the VMM sets the ID reg to ID_AA64ISAR0_EL1.TLB=0,
> this change will trap the TLBI *OS instructions but not the following: 
> 
>     TLBI RVAAE1
>     TLBI RVAAE1IS
>     TLBI RVAALE1
>     TLBI RVAALE1IS
>     TLBI RVAE1
>     TLBI RVAE1IS
>     TLBI RVALE1
>     TLBI RVALE1IS
> 
> These TLB range instructions only trap with HCR_EL2.TTLB, however that traps
> all TLB instructions. You may have left this off intentionally, if so can you
> add something to the commit message.

You got it exactly right. Setting TTLB would be overkill, and is one
of the cases where we have to strike a balance between efficiency and
enforcement. I'll leave a comment to that effect to capture this.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 114+ messages in thread

end of thread, other threads:[~2024-01-30 12:46 UTC | newest]

Thread overview: 114+ messages
-- links below jump to the message on this page --
2024-01-22 20:18 [PATCH 00/25] KVM/arm64: VM configuration enforcement Marc Zyngier
2024-01-22 20:18 ` [PATCH 01/25] arm64: sysreg: Add missing ID_AA64ISAR[13]_EL1 fields and variants Marc Zyngier
2024-01-22 21:29   ` Mark Brown
2024-01-22 20:18 ` [PATCH 02/25] KVM: arm64: Add feature checking helpers Marc Zyngier
2024-01-26 19:05   ` Oliver Upton
2024-01-30 12:12     ` Marc Zyngier
2024-01-22 20:18 ` [PATCH 03/25] KVM: arm64: nv: Add sanitising to VNCR-backed sysregs Marc Zyngier
2024-01-23 13:48   ` Joey Gouly
2024-01-23 17:33     ` Marc Zyngier
2024-01-22 20:18 ` [PATCH 04/25] KVM: arm64: nv: Add sanitising to EL2 configuration registers Marc Zyngier
2024-01-22 20:18 ` [PATCH 05/25] KVM: arm64: nv: Add sanitising to VNCR-backed FGT sysregs Marc Zyngier
2024-01-22 20:18 ` [PATCH 06/25] KVM: arm64: nv: Add sanitising to VNCR-backed HCRX_EL2 Marc Zyngier
2024-01-22 20:18 ` [PATCH 07/25] KVM: arm64: nv: Drop sanitised_sys_reg() helper Marc Zyngier
2024-01-23 14:01   ` Joey Gouly
2024-01-22 20:18 ` [PATCH 08/25] KVM: arm64: Unify HDFG[WR]TR_GROUP FGT identifiers Marc Zyngier
2024-01-23 14:14   ` Joey Gouly
2024-01-23 15:03     ` Marc Zyngier
2024-01-23 17:42       ` Mark Brown
2024-01-22 20:18 ` [PATCH 09/25] KVM: arm64: nv: Correctly handle negative polarity FGTs Marc Zyngier
2024-01-22 20:18 ` [PATCH 10/25] KVM: arm64: nv: Turn encoding ranges into discrete XArray stores Marc Zyngier
2024-01-23 16:37   ` Joey Gouly
2024-01-23 17:45     ` Marc Zyngier
2024-01-22 20:18 ` [PATCH 11/25] KVM: arm64: Drop the requirement for XARRAY_MULTI Marc Zyngier
2024-01-24 15:57   ` Joey Gouly
2024-01-22 20:18 ` [PATCH 12/25] KVM: arm64: nv: Move system instructions to their own sys_reg_desc array Marc Zyngier
2024-01-24 16:23   ` Joey Gouly
2024-01-22 20:18 ` [PATCH 13/25] KVM: arm64: Always populate the trap configuration xarray Marc Zyngier
2024-01-24 16:25   ` Joey Gouly
2024-01-22 20:18 ` [PATCH 14/25] KVM: arm64: Register AArch64 system register entries with the sysreg xarray Marc Zyngier
2024-01-24 16:34   ` Joey Gouly
2024-01-24 16:37     ` Marc Zyngier
2024-01-24 17:02       ` Joey Gouly
2024-01-22 20:18 ` [PATCH 15/25] KVM: arm64: Use the xarray as the primary sysreg/sysinsn walker Marc Zyngier
2024-01-24 16:48   ` Joey Gouly
2024-01-22 20:18 ` [PATCH 16/25] KVM: arm64: Rename __check_nv_sr_forward() to triage_sysreg_trap() Marc Zyngier
2024-01-24 16:57   ` Joey Gouly
2024-01-30 12:43     ` Marc Zyngier
2024-01-22 20:18 ` [PATCH 17/25] KVM: arm64: Add Fine-Grained UNDEF tracking information Marc Zyngier
2024-01-22 20:18 ` [PATCH 18/25] KVM: arm64: Propagate and handle Fine-Grained UNDEF bits Marc Zyngier
2024-01-24 15:53   ` Joey Gouly
2024-01-22 20:18 ` [PATCH 19/25] KVM: arm64: Move existing feature disabling over to FGU infrastructure Marc Zyngier
2024-01-24 17:16   ` Joey Gouly
2024-01-22 20:18 ` [PATCH 20/25] KVM: arm64: Streamline save/restore of HFG[RW]TR_EL2 Marc Zyngier
2024-01-25 11:30   ` Joey Gouly
2024-01-22 20:18 ` [PATCH 21/25] KVM: arm64: Make TLBI OS/Range UNDEF if not advertised to the guest Marc Zyngier
2024-01-25 13:30   ` Joey Gouly
2024-01-30 12:46     ` Marc Zyngier
2024-01-22 20:18 ` [PATCH 22/25] KVM: arm64: Make PIR{,E0}_EL1 UNDEF if S1PIE is " Marc Zyngier
2024-01-23 11:48   ` Joey Gouly
2024-01-23 17:51     ` Marc Zyngier
2024-01-22 20:18 ` [PATCH 23/25] KVM: arm64: Make AMU sysreg UNDEF if FEAT_AMU " Marc Zyngier
2024-01-25 13:42   ` Joey Gouly
2024-01-22 20:18 ` [PATCH 24/25] KVM: arm64: Make FEAT_MOPS UNDEF if " Marc Zyngier
2024-01-25 16:25   ` Joey Gouly
2024-01-25 17:35     ` Joey Gouly
2024-01-26  9:17     ` Marc Zyngier
2024-01-22 20:18 ` [PATCH 25/25] KVM: arm64: Add debugfs file for guest's ID registers Marc Zyngier
