* [PATCH v4 00/15] KVM: arm64: Fixed features for protected VMs
@ 2021-08-17  8:11 Fuad Tabba
  2021-08-17  8:11 ` [PATCH v4 01/15] KVM: arm64: placeholder to check if VM is protected Fuad Tabba
                   ` (15 more replies)
  0 siblings, 16 replies; 30+ messages in thread
From: Fuad Tabba @ 2021-08-17  8:11 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, oupton,
	qperret, kvm, linux-arm-kernel, kernel-team, tabba

Hi,

Changes since v3 [1]:
- Reworked the calculation of restricted values of feature register fields,
  ensuring that the code distinguishes between unsigned and (potentially, in
  the future) signed fields (Will)
- Refactoring and fixes (Drew, Will)
- More documentation and comments (Oliver, Will)
- Dropped patch "Restrict protected VM capabilities", since it should come with
  or after the user ABI series for pKVM (Will)
- Carried Will's acks

Changes since v2 [2]:
- Both the trapping and the setting of feature ID registers are toggled by a
  bitmap of allowed features for each feature ID register (Will)
- Documentation explaining the rationale behind allowed/blocked features (Drew)
- Restrict protected VM features by checking and restricting VM capabilities
- Misc small fixes and tidying up (mostly Will)
- Remove dependency on Will's protected VM user ABI series [3]
- Rebase on 5.14-rc2
- Carried Will's acks

Changes since v1 [4]:
- Restrict protected VM features based on a list of allowed features rather
  than rejected ones (Drew)
- Add more background describing protected KVM to the cover letter (Alex)

This patch series adds support for restricting CPU features for protected VMs
in KVM (pKVM) [5].

Various VM feature configurations are allowed in KVM/arm64, each requiring
specific handling logic to deal with traps, context switching and potentially
emulation. Achieving feature parity in pKVM therefore requires either
elevating this logic to EL2 (substantially increasing the TCB) or continuing
to trust the host handlers at EL1. Since neither of these options is
especially appealing, pKVM instead limits the CPU features exposed to a guest
to a fixed configuration based on the underlying hardware, one that EL2 can
mostly provide straightforwardly.

This series approaches that by restricting the CPU features exposed to
protected guests. The features advertised through the feature ID registers
are limited, and pKVM enforces those limits by trapping register accesses and
the instructions associated with blocked features.
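
To illustrate the idea (a conceptual sketch, not code lifted from this
series): feature ID register reads can be trapped to EL2, e.g. via
HCR_EL2.TID3 for the ID group 3 registers, so that the hypervisor can
present restricted values instead of the raw hardware ones. The helper
name below is hypothetical:

static void pvm_enable_id_reg_traps(struct kvm_vcpu *vcpu)
{
	/* Trap guest reads of the ID group 3 registers to EL2. */
	vcpu->arch.hcr_el2 |= HCR_TID3;
}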

This series is based on 5.14-rc2. You can find the applied series here [6].

Cheers,
/fuad

[1] https://lore.kernel.org/kvmarm/20210719160346.609914-1-tabba@google.com/

[2] https://lore.kernel.org/kvmarm/20210615133950.693489-1-tabba@google.com/

[3] https://lore.kernel.org/kvmarm/20210603183347.1695-1-will@kernel.org/

[4] https://lore.kernel.org/kvmarm/20210608141141.997398-1-tabba@google.com/

[5] Once complete, protected KVM adds the ability to create protected VMs.
These VMs are protected from the host Linux kernel (and from other VMs): the
host has no access to guest memory, even if the host itself is compromised.
Normal (nVHE) guests can still be created and run in parallel with protected
VMs, and their functionality should not be affected.

For protected VMs, the host should not even have access to a protected
guest's state, or to anything that would enable it to manipulate that state
(e.g., vcpu register context and EL2 system registers); only hyp has that
access. If the host could access that state, it might be able to circumvent
the protection provided. Therefore, anything sensitive that requires such
access needs to happen at hyp, which is why the nVHE code runs only at hyp.

For more details about pKVM, please refer to Will's talk at KVM Forum 2020:
https://mirrors.edge.kernel.org/pub/linux/kernel/people/will/slides/kvmforum-2020-edited.pdf
https://www.youtube.com/watch?v=edqJSzsDRxk

[6] https://android-kvm.googlesource.com/linux/+/refs/heads/tabba/el2_fixed_feature_v4

Fuad Tabba (15):
  KVM: arm64: placeholder to check if VM is protected
  KVM: arm64: Remove trailing whitespace in comment
  KVM: arm64: MDCR_EL2 is a 64-bit register
  KVM: arm64: Fix names of config register fields
  KVM: arm64: Refactor sys_regs.h,c for nVHE reuse
  KVM: arm64: Restore mdcr_el2 from vcpu
  KVM: arm64: Keep mdcr_el2's value as set by __init_el2_debug
  KVM: arm64: Track value of cptr_el2 in struct kvm_vcpu_arch
  KVM: arm64: Add feature register flag definitions
  KVM: arm64: Add config register bit definitions
  KVM: arm64: Guest exit handlers for nVHE hyp
  KVM: arm64: Add trap handlers for protected VMs
  KVM: arm64: Move sanitized copies of CPU features
  KVM: arm64: Trap access to pVM restricted features
  KVM: arm64: Handle protected guests at 32 bits

 arch/arm64/include/asm/cpufeature.h       |   4 +-
 arch/arm64/include/asm/kvm_arm.h          |  54 ++-
 arch/arm64/include/asm/kvm_asm.h          |   2 +-
 arch/arm64/include/asm/kvm_fixed_config.h | 170 +++++++++
 arch/arm64/include/asm/kvm_host.h         |  15 +-
 arch/arm64/include/asm/kvm_hyp.h          |   5 +-
 arch/arm64/include/asm/sysreg.h           |  17 +-
 arch/arm64/kernel/cpufeature.c            |   8 +-
 arch/arm64/kvm/Makefile                   |   2 +-
 arch/arm64/kvm/arm.c                      |  12 +
 arch/arm64/kvm/debug.c                    |   2 +-
 arch/arm64/kvm/hyp/include/hyp/switch.h   |  52 ++-
 arch/arm64/kvm/hyp/nvhe/Makefile          |   2 +-
 arch/arm64/kvm/hyp/nvhe/debug-sr.c        |   2 +-
 arch/arm64/kvm/hyp/nvhe/mem_protect.c     |   6 -
 arch/arm64/kvm/hyp/nvhe/switch.c          |  87 ++++-
 arch/arm64/kvm/hyp/nvhe/sys_regs.c        | 432 ++++++++++++++++++++++
 arch/arm64/kvm/hyp/vhe/debug-sr.c         |   2 +-
 arch/arm64/kvm/hyp/vhe/switch.c           |  12 +-
 arch/arm64/kvm/hyp/vhe/sysreg-sr.c        |   2 +-
 arch/arm64/kvm/pkvm.c                     | 185 +++++++++
 arch/arm64/kvm/sys_regs.c                 |  64 +---
 arch/arm64/kvm/sys_regs.h                 |  31 ++
 23 files changed, 1059 insertions(+), 109 deletions(-)
 create mode 100644 arch/arm64/include/asm/kvm_fixed_config.h
 create mode 100644 arch/arm64/kvm/hyp/nvhe/sys_regs.c
 create mode 100644 arch/arm64/kvm/pkvm.c


base-commit: c500bee1c5b2f1d59b1081ac879d73268ab0ff17
-- 
2.33.0.rc1.237.g0d66db33f3-goog



* [PATCH v4 01/15] KVM: arm64: placeholder to check if VM is protected
  2021-08-17  8:11 [PATCH v4 00/15] KVM: arm64: Fixed features for protected VMs Fuad Tabba
@ 2021-08-17  8:11 ` Fuad Tabba
  2021-08-17  8:11 ` [PATCH v4 02/15] KVM: arm64: Remove trailing whitespace in comment Fuad Tabba
                   ` (14 subsequent siblings)
  15 siblings, 0 replies; 30+ messages in thread
From: Fuad Tabba @ 2021-08-17  8:11 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, oupton,
	qperret, kvm, linux-arm-kernel, kernel-team, tabba

Add a function to check whether a VM is protected (under pKVM).
Since the creation of protected VMs isn't enabled yet, this is a
placeholder that always returns false. The intention is for this
to become a check for protected VMs in the future (see Will's RFC).

No functional change intended.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>

Link: https://lore.kernel.org/kvmarm/20210603183347.1695-1-will@kernel.org/
---
 arch/arm64/include/asm/kvm_host.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 41911585ae0c..347781f99b6a 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -771,6 +771,11 @@ void kvm_arch_free_vm(struct kvm *kvm);
 
 int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type);
 
+static inline bool kvm_vm_is_protected(struct kvm *kvm)
+{
+	return false;
+}
+
 int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature);
 bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
 
-- 
2.33.0.rc1.237.g0d66db33f3-goog



* [PATCH v4 02/15] KVM: arm64: Remove trailing whitespace in comment
  2021-08-17  8:11 [PATCH v4 00/15] KVM: arm64: Fixed features for protected VMs Fuad Tabba
  2021-08-17  8:11 ` [PATCH v4 01/15] KVM: arm64: placeholder to check if VM is protected Fuad Tabba
@ 2021-08-17  8:11 ` Fuad Tabba
  2021-08-17  8:11 ` [PATCH v4 03/15] KVM: arm64: MDCR_EL2 is a 64-bit register Fuad Tabba
                   ` (13 subsequent siblings)
  15 siblings, 0 replies; 30+ messages in thread
From: Fuad Tabba @ 2021-08-17  8:11 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, oupton,
	qperret, kvm, linux-arm-kernel, kernel-team, tabba

Remove trailing whitespace from comment in trap_dbgauthstatus_el1().

No functional change intended.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/sys_regs.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index f6f126eb6ac1..80a6e41cadad 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -318,14 +318,14 @@ static bool trap_dbgauthstatus_el1(struct kvm_vcpu *vcpu,
 /*
  * We want to avoid world-switching all the DBG registers all the
  * time:
- * 
+ *
  * - If we've touched any debug register, it is likely that we're
  *   going to touch more of them. It then makes sense to disable the
  *   traps and start doing the save/restore dance
  * - If debug is active (DBG_MDSCR_KDE or DBG_MDSCR_MDE set), it is
  *   then mandatory to save/restore the registers, as the guest
  *   depends on them.
- * 
+ *
  * For this, we use a DIRTY bit, indicating the guest has modified the
  * debug registers, used as follow:
  *
-- 
2.33.0.rc1.237.g0d66db33f3-goog



* [PATCH v4 03/15] KVM: arm64: MDCR_EL2 is a 64-bit register
  2021-08-17  8:11 [PATCH v4 00/15] KVM: arm64: Fixed features for protected VMs Fuad Tabba
  2021-08-17  8:11 ` [PATCH v4 01/15] KVM: arm64: placeholder to check if VM is protected Fuad Tabba
  2021-08-17  8:11 ` [PATCH v4 02/15] KVM: arm64: Remove trailing whitespace in comment Fuad Tabba
@ 2021-08-17  8:11 ` Fuad Tabba
  2021-08-18 14:32   ` Marc Zyngier
  2021-08-17  8:11 ` [PATCH v4 04/15] KVM: arm64: Fix names of config register fields Fuad Tabba
                   ` (12 subsequent siblings)
  15 siblings, 1 reply; 30+ messages in thread
From: Fuad Tabba @ 2021-08-17  8:11 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, oupton,
	qperret, kvm, linux-arm-kernel, kernel-team, tabba

Fix the places in KVM that treat MDCR_EL2 as a 32-bit register.
More recent features (e.g., FEAT_SPEv1p2) use bits above 31.
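
As an illustration (not part of the patch): with a 32-bit type, setting
a bit above 31 is silently truncated, losing e.g. a bit such as
MDCR_EL2.HPMFZS (bit 36):

	u32 lost = (u32)(UL(1) << 36);	/* truncated to 0 */
	u64 kept = UL(1) << 36;		/* bit 36 preserved */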

No functional change intended.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_arm.h   | 20 ++++++++++----------
 arch/arm64/include/asm/kvm_asm.h   |  2 +-
 arch/arm64/include/asm/kvm_host.h  |  2 +-
 arch/arm64/kvm/debug.c             |  2 +-
 arch/arm64/kvm/hyp/nvhe/debug-sr.c |  2 +-
 arch/arm64/kvm/hyp/vhe/debug-sr.c  |  2 +-
 6 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index d436831dd706..6a523ec83415 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -281,18 +281,18 @@
 /* Hyp Debug Configuration Register bits */
 #define MDCR_EL2_E2TB_MASK	(UL(0x3))
 #define MDCR_EL2_E2TB_SHIFT	(UL(24))
-#define MDCR_EL2_TTRF		(1 << 19)
-#define MDCR_EL2_TPMS		(1 << 14)
+#define MDCR_EL2_TTRF		(UL(1) << 19)
+#define MDCR_EL2_TPMS		(UL(1) << 14)
 #define MDCR_EL2_E2PB_MASK	(UL(0x3))
 #define MDCR_EL2_E2PB_SHIFT	(UL(12))
-#define MDCR_EL2_TDRA		(1 << 11)
-#define MDCR_EL2_TDOSA		(1 << 10)
-#define MDCR_EL2_TDA		(1 << 9)
-#define MDCR_EL2_TDE		(1 << 8)
-#define MDCR_EL2_HPME		(1 << 7)
-#define MDCR_EL2_TPM		(1 << 6)
-#define MDCR_EL2_TPMCR		(1 << 5)
-#define MDCR_EL2_HPMN_MASK	(0x1F)
+#define MDCR_EL2_TDRA		(UL(1) << 11)
+#define MDCR_EL2_TDOSA		(UL(1) << 10)
+#define MDCR_EL2_TDA		(UL(1) << 9)
+#define MDCR_EL2_TDE		(UL(1) << 8)
+#define MDCR_EL2_HPME		(UL(1) << 7)
+#define MDCR_EL2_TPM		(UL(1) << 6)
+#define MDCR_EL2_TPMCR		(UL(1) << 5)
+#define MDCR_EL2_HPMN_MASK	(UL(0x1F))
 
 /* For compatibility with fault code shared with 32-bit */
 #define FSC_FAULT	ESR_ELx_FSC_FAULT
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 9f0bf2109be7..63ead9060ab5 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -210,7 +210,7 @@ extern u64 __vgic_v3_read_vmcr(void);
 extern void __vgic_v3_write_vmcr(u32 vmcr);
 extern void __vgic_v3_init_lrs(void);
 
-extern u32 __kvm_get_mdcr_el2(void);
+extern u64 __kvm_get_mdcr_el2(void);
 
 #define __KVM_EXTABLE(from, to)						\
 	"	.pushsection	__kvm_ex_table, \"a\"\n"		\
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 347781f99b6a..4d2d974c1522 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -289,7 +289,7 @@ struct kvm_vcpu_arch {
 
 	/* HYP configuration */
 	u64 hcr_el2;
-	u32 mdcr_el2;
+	u64 mdcr_el2;
 
 	/* Exception Information */
 	struct kvm_vcpu_fault_info fault;
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index d5e79d7ee6e9..db9361338b2a 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -21,7 +21,7 @@
 				DBG_MDSCR_KDE | \
 				DBG_MDSCR_MDE)
 
-static DEFINE_PER_CPU(u32, mdcr_el2);
+static DEFINE_PER_CPU(u64, mdcr_el2);
 
 /**
  * save/restore_guest_debug_regs
diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
index 7d3f25868cae..df361d839902 100644
--- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c
+++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
@@ -109,7 +109,7 @@ void __debug_switch_to_host(struct kvm_vcpu *vcpu)
 	__debug_switch_to_host_common(vcpu);
 }
 
-u32 __kvm_get_mdcr_el2(void)
+u64 __kvm_get_mdcr_el2(void)
 {
 	return read_sysreg(mdcr_el2);
 }
diff --git a/arch/arm64/kvm/hyp/vhe/debug-sr.c b/arch/arm64/kvm/hyp/vhe/debug-sr.c
index f1e2e5a00933..289689b2682d 100644
--- a/arch/arm64/kvm/hyp/vhe/debug-sr.c
+++ b/arch/arm64/kvm/hyp/vhe/debug-sr.c
@@ -20,7 +20,7 @@ void __debug_switch_to_host(struct kvm_vcpu *vcpu)
 	__debug_switch_to_host_common(vcpu);
 }
 
-u32 __kvm_get_mdcr_el2(void)
+u64 __kvm_get_mdcr_el2(void)
 {
 	return read_sysreg(mdcr_el2);
 }
-- 
2.33.0.rc1.237.g0d66db33f3-goog



* [PATCH v4 04/15] KVM: arm64: Fix names of config register fields
  2021-08-17  8:11 [PATCH v4 00/15] KVM: arm64: Fixed features for protected VMs Fuad Tabba
                   ` (2 preceding siblings ...)
  2021-08-17  8:11 ` [PATCH v4 03/15] KVM: arm64: MDCR_EL2 is a 64-bit register Fuad Tabba
@ 2021-08-17  8:11 ` Fuad Tabba
  2021-08-17  8:11 ` [PATCH v4 05/15] KVM: arm64: Refactor sys_regs.h,c for nVHE reuse Fuad Tabba
                   ` (11 subsequent siblings)
  15 siblings, 0 replies; 30+ messages in thread
From: Fuad Tabba @ 2021-08-17  8:11 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, oupton,
	qperret, kvm, linux-arm-kernel, kernel-team, tabba

Change the names of hcr_el2 register fields to match the Arm
Architecture Reference Manual, which makes cross-referencing and
grepping easier.

Also, change the name of CPTR_EL2_RES1 to CPTR_NVHE_EL2_RES1,
because the RES1 bits are different for VHE.

No functional change intended.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_arm.h | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 6a523ec83415..a928b2dc0b0f 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -32,9 +32,9 @@
 #define HCR_TVM		(UL(1) << 26)
 #define HCR_TTLB	(UL(1) << 25)
 #define HCR_TPU		(UL(1) << 24)
-#define HCR_TPC		(UL(1) << 23)
+#define HCR_TPC		(UL(1) << 23) /* HCR_TPCP if FEAT_DPB */
 #define HCR_TSW		(UL(1) << 22)
-#define HCR_TAC		(UL(1) << 21)
+#define HCR_TACR	(UL(1) << 21)
 #define HCR_TIDCP	(UL(1) << 20)
 #define HCR_TSC		(UL(1) << 19)
 #define HCR_TID3	(UL(1) << 18)
@@ -61,7 +61,7 @@
  * The bits we set in HCR:
  * TLOR:	Trap LORegion register accesses
  * RW:		64bit by default, can be overridden for 32bit VMs
- * TAC:		Trap ACTLR
+ * TACR:	Trap ACTLR
  * TSC:		Trap SMC
  * TSW:		Trap cache operations by set/way
  * TWE:		Trap WFE
@@ -76,7 +76,7 @@
  * PTW:		Take a stage2 fault if a stage1 walk steps in device memory
  */
 #define HCR_GUEST_FLAGS (HCR_TSC | HCR_TSW | HCR_TWE | HCR_TWI | HCR_VM | \
-			 HCR_BSU_IS | HCR_FB | HCR_TAC | \
+			 HCR_BSU_IS | HCR_FB | HCR_TACR | \
 			 HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW | HCR_TLOR | \
 			 HCR_FMO | HCR_IMO | HCR_PTW )
 #define HCR_VIRT_EXCP_MASK (HCR_VSE | HCR_VI | HCR_VF)
@@ -275,8 +275,8 @@
 #define CPTR_EL2_TTA	(1 << 20)
 #define CPTR_EL2_TFP	(1 << CPTR_EL2_TFP_SHIFT)
 #define CPTR_EL2_TZ	(1 << 8)
-#define CPTR_EL2_RES1	0x000032ff /* known RES1 bits in CPTR_EL2 */
-#define CPTR_EL2_DEFAULT	CPTR_EL2_RES1
+#define CPTR_NVHE_EL2_RES1	0x000032ff /* known RES1 bits in CPTR_EL2 (nVHE) */
+#define CPTR_EL2_DEFAULT	CPTR_NVHE_EL2_RES1
 
 /* Hyp Debug Configuration Register bits */
 #define MDCR_EL2_E2TB_MASK	(UL(0x3))
-- 
2.33.0.rc1.237.g0d66db33f3-goog



* [PATCH v4 05/15] KVM: arm64: Refactor sys_regs.h,c for nVHE reuse
  2021-08-17  8:11 [PATCH v4 00/15] KVM: arm64: Fixed features for protected VMs Fuad Tabba
                   ` (3 preceding siblings ...)
  2021-08-17  8:11 ` [PATCH v4 04/15] KVM: arm64: Fix names of config register fields Fuad Tabba
@ 2021-08-17  8:11 ` Fuad Tabba
  2021-08-17  8:11 ` [PATCH v4 06/15] KVM: arm64: Restore mdcr_el2 from vcpu Fuad Tabba
                   ` (10 subsequent siblings)
  15 siblings, 0 replies; 30+ messages in thread
From: Fuad Tabba @ 2021-08-17  8:11 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, oupton,
	qperret, kvm, linux-arm-kernel, kernel-team, tabba

Refactor sys_regs.h and sys_regs.c to make it easier to reuse their
common code, which a later patch will use in nVHE.

Note that the refactored code uses __inline_bsearch for find_reg
instead of bsearch to avoid copying the bsearch code for nVHE.
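
For illustration, a later nVHE user of the now-shared helpers might
look like this (hypothetical table and wrapper, not part of this
patch):

static const struct sys_reg_desc pvm_sys_reg_descs[] = {
	/* entries sorted by encoding, as find_reg() requires */
};

static const struct sys_reg_desc *pvm_find_reg(const struct sys_reg_params *params)
{
	return find_reg(params, pvm_sys_reg_descs, ARRAY_SIZE(pvm_sys_reg_descs));
}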

No functional change intended.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/sysreg.h |  5 +++
 arch/arm64/kvm/sys_regs.c       | 60 +++++++++------------------------
 arch/arm64/kvm/sys_regs.h       | 31 +++++++++++++++++
 3 files changed, 52 insertions(+), 44 deletions(-)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 7b9c3acba684..53a93a9c5253 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -1153,6 +1153,11 @@
 #define ICH_VTR_A3V_SHIFT	21
 #define ICH_VTR_A3V_MASK	(1 << ICH_VTR_A3V_SHIFT)
 
+#define ARM64_FEATURE_FIELD_BITS	4
+
+/* Create a mask for the feature bits of the specified feature. */
+#define ARM64_FEATURE_MASK(x)	(GENMASK_ULL(x##_SHIFT + ARM64_FEATURE_FIELD_BITS - 1, x##_SHIFT))
+
 #ifdef __ASSEMBLY__
 
 	.irp	num,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 80a6e41cadad..b6a2f8e890db 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -44,10 +44,6 @@
  * 64bit interface.
  */
 
-#define reg_to_encoding(x)						\
-	sys_reg((u32)(x)->Op0, (u32)(x)->Op1,				\
-		(u32)(x)->CRn, (u32)(x)->CRm, (u32)(x)->Op2)
-
 static bool read_from_write_only(struct kvm_vcpu *vcpu,
 				 struct sys_reg_params *params,
 				 const struct sys_reg_desc *r)
@@ -1026,8 +1022,6 @@ static bool access_arch_timer(struct kvm_vcpu *vcpu,
 	return true;
 }
 
-#define FEATURE(x)	(GENMASK_ULL(x##_SHIFT + 3, x##_SHIFT))
-
 /* Read a sanitised cpufeature ID register by sys_reg_desc */
 static u64 read_id_reg(const struct kvm_vcpu *vcpu,
 		struct sys_reg_desc const *r, bool raz)
@@ -1038,40 +1032,40 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu,
 	switch (id) {
 	case SYS_ID_AA64PFR0_EL1:
 		if (!vcpu_has_sve(vcpu))
-			val &= ~FEATURE(ID_AA64PFR0_SVE);
-		val &= ~FEATURE(ID_AA64PFR0_AMU);
-		val &= ~FEATURE(ID_AA64PFR0_CSV2);
-		val |= FIELD_PREP(FEATURE(ID_AA64PFR0_CSV2), (u64)vcpu->kvm->arch.pfr0_csv2);
-		val &= ~FEATURE(ID_AA64PFR0_CSV3);
-		val |= FIELD_PREP(FEATURE(ID_AA64PFR0_CSV3), (u64)vcpu->kvm->arch.pfr0_csv3);
+			val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_SVE);
+		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_AMU);
+		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_CSV2);
+		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_CSV2), (u64)vcpu->kvm->arch.pfr0_csv2);
+		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_CSV3);
+		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_CSV3), (u64)vcpu->kvm->arch.pfr0_csv3);
 		break;
 	case SYS_ID_AA64PFR1_EL1:
-		val &= ~FEATURE(ID_AA64PFR1_MTE);
+		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_MTE);
 		if (kvm_has_mte(vcpu->kvm)) {
 			u64 pfr, mte;
 
 			pfr = read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1);
 			mte = cpuid_feature_extract_unsigned_field(pfr, ID_AA64PFR1_MTE_SHIFT);
-			val |= FIELD_PREP(FEATURE(ID_AA64PFR1_MTE), mte);
+			val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR1_MTE), mte);
 		}
 		break;
 	case SYS_ID_AA64ISAR1_EL1:
 		if (!vcpu_has_ptrauth(vcpu))
-			val &= ~(FEATURE(ID_AA64ISAR1_APA) |
-				 FEATURE(ID_AA64ISAR1_API) |
-				 FEATURE(ID_AA64ISAR1_GPA) |
-				 FEATURE(ID_AA64ISAR1_GPI));
+			val &= ~(ARM64_FEATURE_MASK(ID_AA64ISAR1_APA) |
+				 ARM64_FEATURE_MASK(ID_AA64ISAR1_API) |
+				 ARM64_FEATURE_MASK(ID_AA64ISAR1_GPA) |
+				 ARM64_FEATURE_MASK(ID_AA64ISAR1_GPI));
 		break;
 	case SYS_ID_AA64DFR0_EL1:
 		/* Limit debug to ARMv8.0 */
-		val &= ~FEATURE(ID_AA64DFR0_DEBUGVER);
-		val |= FIELD_PREP(FEATURE(ID_AA64DFR0_DEBUGVER), 6);
+		val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_DEBUGVER);
+		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_DEBUGVER), 6);
 		/* Limit guests to PMUv3 for ARMv8.4 */
 		val = cpuid_feature_cap_perfmon_field(val,
 						      ID_AA64DFR0_PMUVER_SHIFT,
 						      kvm_vcpu_has_pmu(vcpu) ? ID_AA64DFR0_PMUVER_8_4 : 0);
 		/* Hide SPE from guests */
-		val &= ~FEATURE(ID_AA64DFR0_PMSVER);
+		val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_PMSVER);
 		break;
 	case SYS_ID_DFR0_EL1:
 		/* Limit guests to PMUv3 for ARMv8.4 */
@@ -2106,23 +2100,6 @@ static int check_sysreg_table(const struct sys_reg_desc *table, unsigned int n,
 	return 0;
 }
 
-static int match_sys_reg(const void *key, const void *elt)
-{
-	const unsigned long pval = (unsigned long)key;
-	const struct sys_reg_desc *r = elt;
-
-	return pval - reg_to_encoding(r);
-}
-
-static const struct sys_reg_desc *find_reg(const struct sys_reg_params *params,
-					 const struct sys_reg_desc table[],
-					 unsigned int num)
-{
-	unsigned long pval = reg_to_encoding(params);
-
-	return bsearch((void *)pval, table, num, sizeof(table[0]), match_sys_reg);
-}
-
 int kvm_handle_cp14_load_store(struct kvm_vcpu *vcpu)
 {
 	kvm_inject_undefined(vcpu);
@@ -2365,13 +2342,8 @@ int kvm_handle_sys_reg(struct kvm_vcpu *vcpu)
 
 	trace_kvm_handle_sys_reg(esr);
 
-	params.Op0 = (esr >> 20) & 3;
-	params.Op1 = (esr >> 14) & 0x7;
-	params.CRn = (esr >> 10) & 0xf;
-	params.CRm = (esr >> 1) & 0xf;
-	params.Op2 = (esr >> 17) & 0x7;
+	params = esr_sys64_to_params(esr);
 	params.regval = vcpu_get_reg(vcpu, Rt);
-	params.is_write = !(esr & 1);
 
 	ret = emulate_sys_reg(vcpu, &params);
 
diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
index 9d0621417c2a..cc0cc95a0280 100644
--- a/arch/arm64/kvm/sys_regs.h
+++ b/arch/arm64/kvm/sys_regs.h
@@ -11,6 +11,12 @@
 #ifndef __ARM64_KVM_SYS_REGS_LOCAL_H__
 #define __ARM64_KVM_SYS_REGS_LOCAL_H__
 
+#include <linux/bsearch.h>
+
+#define reg_to_encoding(x)						\
+	sys_reg((u32)(x)->Op0, (u32)(x)->Op1,				\
+		(u32)(x)->CRn, (u32)(x)->CRm, (u32)(x)->Op2)
+
 struct sys_reg_params {
 	u8	Op0;
 	u8	Op1;
@@ -21,6 +27,14 @@ struct sys_reg_params {
 	bool	is_write;
 };
 
+#define esr_sys64_to_params(esr)                                               \
+	((struct sys_reg_params){ .Op0 = ((esr) >> 20) & 3,                    \
+				  .Op1 = ((esr) >> 14) & 0x7,                  \
+				  .CRn = ((esr) >> 10) & 0xf,                  \
+				  .CRm = ((esr) >> 1) & 0xf,                   \
+				  .Op2 = ((esr) >> 17) & 0x7,                  \
+				  .is_write = !((esr) & 1) })
+
 struct sys_reg_desc {
 	/* Sysreg string for debug */
 	const char *name;
@@ -152,6 +166,23 @@ static inline int cmp_sys_reg(const struct sys_reg_desc *i1,
 	return i1->Op2 - i2->Op2;
 }
 
+static inline int match_sys_reg(const void *key, const void *elt)
+{
+	const unsigned long pval = (unsigned long)key;
+	const struct sys_reg_desc *r = elt;
+
+	return pval - reg_to_encoding(r);
+}
+
+static inline const struct sys_reg_desc *
+find_reg(const struct sys_reg_params *params, const struct sys_reg_desc table[],
+	 unsigned int num)
+{
+	unsigned long pval = reg_to_encoding(params);
+
+	return __inline_bsearch((void *)pval, table, num, sizeof(table[0]), match_sys_reg);
+}
+
 const struct sys_reg_desc *find_reg_by_id(u64 id,
 					  struct sys_reg_params *params,
 					  const struct sys_reg_desc table[],
-- 
2.33.0.rc1.237.g0d66db33f3-goog



* [PATCH v4 06/15] KVM: arm64: Restore mdcr_el2 from vcpu
  2021-08-17  8:11 [PATCH v4 00/15] KVM: arm64: Fixed features for protected VMs Fuad Tabba
                   ` (4 preceding siblings ...)
  2021-08-17  8:11 ` [PATCH v4 05/15] KVM: arm64: Refactor sys_regs.h,c for nVHE reuse Fuad Tabba
@ 2021-08-17  8:11 ` Fuad Tabba
  2021-08-18 13:13   ` Will Deacon
  2021-08-18 14:42   ` Marc Zyngier
  2021-08-17  8:11 ` [PATCH v4 07/15] KVM: arm64: Keep mdcr_el2's value as set by __init_el2_debug Fuad Tabba
                   ` (9 subsequent siblings)
  15 siblings, 2 replies; 30+ messages in thread
From: Fuad Tabba @ 2021-08-17  8:11 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, oupton,
	qperret, kvm, linux-arm-kernel, kernel-team, tabba

On deactivating traps, restore the value of mdcr_el2 from the host
value newly preserved in the vcpu context, rather than reading the
hardware register directly.

Up to and including this patch, the two values (the hardware
register and the vcpu copy) are the same. A future patch will change
the value of mdcr_el2 on activating traps, and this ensures that the
host value will be restored.
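
A minimal sketch of the pairing this patch establishes, using the
names from the diff below (simplified from the actual hunks):

	/* __activate_traps_common(): stash the host value, load the guest's */
	vcpu->arch.mdcr_el2_host = read_sysreg(mdcr_el2);
	write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);

	/* __deactivate_traps_common(): put the host value back */
	write_sysreg(vcpu->arch.mdcr_el2_host, mdcr_el2);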

No functional change intended.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_host.h       |  5 ++++-
 arch/arm64/include/asm/kvm_hyp.h        |  2 +-
 arch/arm64/kvm/hyp/include/hyp/switch.h |  6 +++++-
 arch/arm64/kvm/hyp/nvhe/switch.c        | 13 +++++--------
 arch/arm64/kvm/hyp/vhe/switch.c         | 14 +++++---------
 arch/arm64/kvm/hyp/vhe/sysreg-sr.c      |  2 +-
 6 files changed, 21 insertions(+), 21 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 4d2d974c1522..76462c6a91ee 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -287,10 +287,13 @@ struct kvm_vcpu_arch {
 	/* Stage 2 paging state used by the hardware on next switch */
 	struct kvm_s2_mmu *hw_mmu;
 
-	/* HYP configuration */
+	/* Values of trap registers for the guest. */
 	u64 hcr_el2;
 	u64 mdcr_el2;
 
+	/* Values of trap registers for the host before guest entry. */
+	u64 mdcr_el2_host;
+
 	/* Exception Information */
 	struct kvm_vcpu_fault_info fault;
 
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 9d60b3006efc..657d0c94cf82 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -95,7 +95,7 @@ void __sve_restore_state(void *sve_pffr, u32 *fpsr);
 
 #ifndef __KVM_NVHE_HYPERVISOR__
 void activate_traps_vhe_load(struct kvm_vcpu *vcpu);
-void deactivate_traps_vhe_put(void);
+void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu);
 #endif
 
 u64 __guest_enter(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index e4a2f295a394..a0e78a6027be 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -92,11 +92,15 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
 		write_sysreg(0, pmselr_el0);
 		write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0);
 	}
+
+	vcpu->arch.mdcr_el2_host = read_sysreg(mdcr_el2);
 	write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
 }
 
-static inline void __deactivate_traps_common(void)
+static inline void __deactivate_traps_common(struct kvm_vcpu *vcpu)
 {
+	write_sysreg(vcpu->arch.mdcr_el2_host, mdcr_el2);
+
 	write_sysreg(0, hstr_el2);
 	if (kvm_arm_support_pmu_v3())
 		write_sysreg(0, pmuserenr_el0);
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index f7af9688c1f7..2ea764a48958 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -69,12 +69,10 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
 static void __deactivate_traps(struct kvm_vcpu *vcpu)
 {
 	extern char __kvm_hyp_host_vector[];
-	u64 mdcr_el2, cptr;
+	u64 cptr;
 
 	___deactivate_traps(vcpu);
 
-	mdcr_el2 = read_sysreg(mdcr_el2);
-
 	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
 		u64 val;
 
@@ -92,13 +90,12 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu)
 		isb();
 	}
 
-	__deactivate_traps_common();
+	vcpu->arch.mdcr_el2_host &= MDCR_EL2_HPMN_MASK |
+				    MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT |
+				    MDCR_EL2_E2TB_MASK << MDCR_EL2_E2TB_SHIFT;
 
-	mdcr_el2 &= MDCR_EL2_HPMN_MASK;
-	mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
-	mdcr_el2 |= MDCR_EL2_E2TB_MASK << MDCR_EL2_E2TB_SHIFT;
+	__deactivate_traps_common(vcpu);
 
-	write_sysreg(mdcr_el2, mdcr_el2);
 	write_sysreg(this_cpu_ptr(&kvm_init_params)->hcr_el2, hcr_el2);
 
 	cptr = CPTR_EL2_DEFAULT;
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index b3229924d243..ec158fa41ae6 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -91,17 +91,13 @@ void activate_traps_vhe_load(struct kvm_vcpu *vcpu)
 	__activate_traps_common(vcpu);
 }
 
-void deactivate_traps_vhe_put(void)
+void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu)
 {
-	u64 mdcr_el2 = read_sysreg(mdcr_el2);
+	vcpu->arch.mdcr_el2_host &= MDCR_EL2_HPMN_MASK |
+				    MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT |
+				    MDCR_EL2_TPMS;
 
-	mdcr_el2 &= MDCR_EL2_HPMN_MASK |
-		    MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT |
-		    MDCR_EL2_TPMS;
-
-	write_sysreg(mdcr_el2, mdcr_el2);
-
-	__deactivate_traps_common();
+	__deactivate_traps_common(vcpu);
 }
 
 /* Switch to the guest for VHE systems running in EL2 */
diff --git a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
index 2a0b8c88d74f..007a12dd4351 100644
--- a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
@@ -101,7 +101,7 @@ void kvm_vcpu_put_sysregs_vhe(struct kvm_vcpu *vcpu)
 	struct kvm_cpu_context *host_ctxt;
 
 	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
-	deactivate_traps_vhe_put();
+	deactivate_traps_vhe_put(vcpu);
 
 	__sysreg_save_el1_state(guest_ctxt);
 	__sysreg_save_user_state(guest_ctxt);
-- 
2.33.0.rc1.237.g0d66db33f3-goog



* [PATCH v4 07/15] KVM: arm64: Keep mdcr_el2's value as set by __init_el2_debug
  2021-08-17  8:11 [PATCH v4 00/15] KVM: arm64: Fixed features for protected VMs Fuad Tabba
                   ` (5 preceding siblings ...)
  2021-08-17  8:11 ` [PATCH v4 06/15] KVM: arm64: Restore mdcr_el2 from vcpu Fuad Tabba
@ 2021-08-17  8:11 ` Fuad Tabba
  2021-08-18 13:17   ` Will Deacon
  2021-08-17  8:11 ` [PATCH v4 08/15] KVM: arm64: Track value of cptr_el2 in struct kvm_vcpu_arch Fuad Tabba
                   ` (8 subsequent siblings)
  15 siblings, 1 reply; 30+ messages in thread
From: Fuad Tabba @ 2021-08-17  8:11 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, oupton,
	qperret, kvm, linux-arm-kernel, kernel-team, tabba

__init_el2_debug configures mdcr_el2 at initialization based on,
among other things, available hardware support. Trap deactivation
doesn't perform those checks, so keep the value that was set at
initialization.

No functional change intended. That said, the value of mdcr_el2
after deactivating traps might differ from what it was before this
patch.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/hyp/nvhe/switch.c | 4 ----
 arch/arm64/kvm/hyp/vhe/switch.c  | 4 ----
 2 files changed, 8 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 2ea764a48958..1778593a08a9 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -90,10 +90,6 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu)
 		isb();
 	}
 
-	vcpu->arch.mdcr_el2_host &= MDCR_EL2_HPMN_MASK |
-				    MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT |
-				    MDCR_EL2_E2TB_MASK << MDCR_EL2_E2TB_SHIFT;
-
 	__deactivate_traps_common(vcpu);
 
 	write_sysreg(this_cpu_ptr(&kvm_init_params)->hcr_el2, hcr_el2);
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index ec158fa41ae6..0d0c9550fb08 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -93,10 +93,6 @@ void activate_traps_vhe_load(struct kvm_vcpu *vcpu)
 
 void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.mdcr_el2_host &= MDCR_EL2_HPMN_MASK |
-				    MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT |
-				    MDCR_EL2_TPMS;
-
 	__deactivate_traps_common(vcpu);
 }
 
-- 
2.33.0.rc1.237.g0d66db33f3-goog



* [PATCH v4 08/15] KVM: arm64: Track value of cptr_el2 in struct kvm_vcpu_arch
  2021-08-17  8:11 [PATCH v4 00/15] KVM: arm64: Fixed features for protected VMs Fuad Tabba
                   ` (6 preceding siblings ...)
  2021-08-17  8:11 ` [PATCH v4 07/15] KVM: arm64: Keep mdcr_el2's value as set by __init_el2_debug Fuad Tabba
@ 2021-08-17  8:11 ` Fuad Tabba
  2021-08-17  8:11 ` [PATCH v4 09/15] KVM: arm64: Add feature register flag definitions Fuad Tabba
                   ` (7 subsequent siblings)
  15 siblings, 0 replies; 30+ messages in thread
From: Fuad Tabba @ 2021-08-17  8:11 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, oupton,
	qperret, kvm, linux-arm-kernel, kernel-team, tabba

Track the baseline guest value for cptr_el2 in struct
kvm_vcpu_arch, similar to the other registers that control traps.
Use this value when setting cptr_el2 for the guest.

Currently this value is unchanged (CPTR_EL2_DEFAULT), but future
patches will set trapping bits based on features supported for
the guest.
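
As a hypothetical example of what this enables (not in this patch), a
trap bit could be added to the tracked baseline rather than to the
hardcoded default:

	vcpu->arch.cptr_el2 = CPTR_EL2_DEFAULT;
	if (!vcpu_has_sve(vcpu))
		vcpu->arch.cptr_el2 |= CPTR_EL2_TZ;	/* trap SVE accesses */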

No functional change intended.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_host.h | 1 +
 arch/arm64/kvm/arm.c              | 1 +
 arch/arm64/kvm/hyp/nvhe/switch.c  | 2 +-
 3 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 76462c6a91ee..ac67d5699c68 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -290,6 +290,7 @@ struct kvm_vcpu_arch {
 	/* Values of trap registers for the guest. */
 	u64 hcr_el2;
 	u64 mdcr_el2;
+	u64 cptr_el2;
 
 	/* Values of trap registers for the host before guest entry. */
 	u64 mdcr_el2_host;
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index e9a2b8f27792..14b12f2c08c0 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1104,6 +1104,7 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
 	}
 
 	vcpu_reset_hcr(vcpu);
+	vcpu->arch.cptr_el2 = CPTR_EL2_DEFAULT;
 
 	/*
 	 * Handle the "start in power-off" case.
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 1778593a08a9..86f3d6482935 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -41,7 +41,7 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
 	___activate_traps(vcpu);
 	__activate_traps_common(vcpu);
 
-	val = CPTR_EL2_DEFAULT;
+	val = vcpu->arch.cptr_el2;
 	val |= CPTR_EL2_TTA | CPTR_EL2_TAM;
 	if (!update_fp_enabled(vcpu)) {
 		val |= CPTR_EL2_TFP | CPTR_EL2_TZ;
-- 
2.33.0.rc1.237.g0d66db33f3-goog



* [PATCH v4 09/15] KVM: arm64: Add feature register flag definitions
  2021-08-17  8:11 [PATCH v4 00/15] KVM: arm64: Fixed features for protected VMs Fuad Tabba
                   ` (7 preceding siblings ...)
  2021-08-17  8:11 ` [PATCH v4 08/15] KVM: arm64: Track value of cptr_el2 in struct kvm_vcpu_arch Fuad Tabba
@ 2021-08-17  8:11 ` Fuad Tabba
  2021-08-18 13:21   ` Will Deacon
  2021-08-17  8:11 ` [PATCH v4 10/15] KVM: arm64: Add config register bit definitions Fuad Tabba
                   ` (6 subsequent siblings)
  15 siblings, 1 reply; 30+ messages in thread
From: Fuad Tabba @ 2021-08-17  8:11 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, oupton,
	qperret, kvm, linux-arm-kernel, kernel-team, tabba

Add feature register flag definitions to clarify which features
might be supported.

Consolidate the various ID_AA64PFR0_ELx flags for all ELs.

No functional change intended.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/cpufeature.h |  4 ++--
 arch/arm64/include/asm/sysreg.h     | 12 ++++++++----
 arch/arm64/kernel/cpufeature.c      |  8 ++++----
 3 files changed, 14 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 9bb9d11750d7..b7d9bb17908d 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -602,14 +602,14 @@ static inline bool id_aa64pfr0_32bit_el1(u64 pfr0)
 {
 	u32 val = cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_EL1_SHIFT);
 
-	return val == ID_AA64PFR0_EL1_32BIT_64BIT;
+	return val == ID_AA64PFR0_ELx_32BIT_64BIT;
 }
 
 static inline bool id_aa64pfr0_32bit_el0(u64 pfr0)
 {
 	u32 val = cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_EL0_SHIFT);
 
-	return val == ID_AA64PFR0_EL0_32BIT_64BIT;
+	return val == ID_AA64PFR0_ELx_32BIT_64BIT;
 }
 
 static inline bool id_aa64pfr0_sve(u64 pfr0)
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 53a93a9c5253..f84a00f5874d 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -784,14 +784,13 @@
 #define ID_AA64PFR0_AMU			0x1
 #define ID_AA64PFR0_SVE			0x1
 #define ID_AA64PFR0_RAS_V1		0x1
+#define ID_AA64PFR0_RAS_V1P1		0x2
 #define ID_AA64PFR0_FP_NI		0xf
 #define ID_AA64PFR0_FP_SUPPORTED	0x0
 #define ID_AA64PFR0_ASIMD_NI		0xf
 #define ID_AA64PFR0_ASIMD_SUPPORTED	0x0
-#define ID_AA64PFR0_EL1_64BIT_ONLY	0x1
-#define ID_AA64PFR0_EL1_32BIT_64BIT	0x2
-#define ID_AA64PFR0_EL0_64BIT_ONLY	0x1
-#define ID_AA64PFR0_EL0_32BIT_64BIT	0x2
+#define ID_AA64PFR0_ELx_64BIT_ONLY	0x1
+#define ID_AA64PFR0_ELx_32BIT_64BIT	0x2
 
 /* id_aa64pfr1 */
 #define ID_AA64PFR1_MPAMFRAC_SHIFT	16
@@ -847,12 +846,16 @@
 #define ID_AA64MMFR0_ASID_SHIFT		4
 #define ID_AA64MMFR0_PARANGE_SHIFT	0
 
+#define ID_AA64MMFR0_ASID_8		0x0
+#define ID_AA64MMFR0_ASID_16		0x2
+
 #define ID_AA64MMFR0_TGRAN4_NI		0xf
 #define ID_AA64MMFR0_TGRAN4_SUPPORTED	0x0
 #define ID_AA64MMFR0_TGRAN64_NI		0xf
 #define ID_AA64MMFR0_TGRAN64_SUPPORTED	0x0
 #define ID_AA64MMFR0_TGRAN16_NI		0x0
 #define ID_AA64MMFR0_TGRAN16_SUPPORTED	0x1
+#define ID_AA64MMFR0_PARANGE_40		0x2
 #define ID_AA64MMFR0_PARANGE_48		0x5
 #define ID_AA64MMFR0_PARANGE_52		0x6
 
@@ -900,6 +903,7 @@
 #define ID_AA64MMFR2_CNP_SHIFT		0
 
 /* id_aa64dfr0 */
+#define ID_AA64DFR0_MTPMU_SHIFT		48
 #define ID_AA64DFR0_TRBE_SHIFT		44
 #define ID_AA64DFR0_TRACE_FILT_SHIFT	40
 #define ID_AA64DFR0_DOUBLELOCK_SHIFT	36
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 0ead8bfedf20..5b59fe5e26e4 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -239,8 +239,8 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
 	S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_FP_SHIFT, 4, ID_AA64PFR0_FP_NI),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL3_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL2_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_SHIFT, 4, ID_AA64PFR0_EL1_64BIT_ONLY),
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL0_SHIFT, 4, ID_AA64PFR0_EL0_64BIT_ONLY),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_SHIFT, 4, ID_AA64PFR0_ELx_64BIT_ONLY),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL0_SHIFT, 4, ID_AA64PFR0_ELx_64BIT_ONLY),
 	ARM64_FTR_END,
 };
 
@@ -1956,7 +1956,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.sys_reg = SYS_ID_AA64PFR0_EL1,
 		.sign = FTR_UNSIGNED,
 		.field_pos = ID_AA64PFR0_EL0_SHIFT,
-		.min_field_value = ID_AA64PFR0_EL0_32BIT_64BIT,
+		.min_field_value = ID_AA64PFR0_ELx_32BIT_64BIT,
 	},
 #ifdef CONFIG_KVM
 	{
@@ -1967,7 +1967,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.sys_reg = SYS_ID_AA64PFR0_EL1,
 		.sign = FTR_UNSIGNED,
 		.field_pos = ID_AA64PFR0_EL1_SHIFT,
-		.min_field_value = ID_AA64PFR0_EL1_32BIT_64BIT,
+		.min_field_value = ID_AA64PFR0_ELx_32BIT_64BIT,
 	},
 	{
 		.desc = "Protected KVM",
-- 
2.33.0.rc1.237.g0d66db33f3-goog



* [PATCH v4 10/15] KVM: arm64: Add config register bit definitions
  2021-08-17  8:11 [PATCH v4 00/15] KVM: arm64: Fixed features for protected VMs Fuad Tabba
                   ` (8 preceding siblings ...)
  2021-08-17  8:11 ` [PATCH v4 09/15] KVM: arm64: Add feature register flag definitions Fuad Tabba
@ 2021-08-17  8:11 ` Fuad Tabba
  2021-08-18 15:16   ` Marc Zyngier
  2021-08-17  8:11 ` [PATCH v4 11/15] KVM: arm64: Guest exit handlers for nVHE hyp Fuad Tabba
                   ` (5 subsequent siblings)
  15 siblings, 1 reply; 30+ messages in thread
From: Fuad Tabba @ 2021-08-17  8:11 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, oupton,
	qperret, kvm, linux-arm-kernel, kernel-team, tabba

Add hardware configuration register bit definitions for HCR_EL2
and MDCR_EL2. Future patches toggle these hyp configuration
register bits to trap on certain accesses.

No functional change intended.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_arm.h | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index a928b2dc0b0f..327120c0089f 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -12,8 +12,13 @@
 #include <asm/types.h>
 
 /* Hyp Configuration Register (HCR) bits */
+
+#define HCR_TID5	(UL(1) << 58)
+#define HCR_DCT		(UL(1) << 57)
 #define HCR_ATA_SHIFT	56
 #define HCR_ATA		(UL(1) << HCR_ATA_SHIFT)
+#define HCR_AMVOFFEN	(UL(1) << 51)
+#define HCR_FIEN	(UL(1) << 47)
 #define HCR_FWB		(UL(1) << 46)
 #define HCR_API		(UL(1) << 41)
 #define HCR_APK		(UL(1) << 40)
@@ -56,6 +61,7 @@
 #define HCR_PTW		(UL(1) << 2)
 #define HCR_SWIO	(UL(1) << 1)
 #define HCR_VM		(UL(1) << 0)
+#define HCR_RES0	((UL(1) << 48) | (UL(1) << 39))
 
 /*
  * The bits we set in HCR:
@@ -277,11 +283,21 @@
 #define CPTR_EL2_TZ	(1 << 8)
 #define CPTR_NVHE_EL2_RES1	0x000032ff /* known RES1 bits in CPTR_EL2 (nVHE) */
 #define CPTR_EL2_DEFAULT	CPTR_NVHE_EL2_RES1
+#define CPTR_NVHE_EL2_RES0	(GENMASK(63, 32) |	\
+				 GENMASK(29, 21) |	\
+				 GENMASK(19, 14) |	\
+				 BIT(11))
 
 /* Hyp Debug Configuration Register bits */
 #define MDCR_EL2_E2TB_MASK	(UL(0x3))
 #define MDCR_EL2_E2TB_SHIFT	(UL(24))
+#define MDCR_EL2_HPMFZS		(UL(1) << 36)
+#define MDCR_EL2_HPMFZO		(UL(1) << 29)
+#define MDCR_EL2_MTPME		(UL(1) << 28)
+#define MDCR_EL2_TDCC		(UL(1) << 27)
+#define MDCR_EL2_HCCD		(UL(1) << 23)
 #define MDCR_EL2_TTRF		(UL(1) << 19)
+#define MDCR_EL2_HPMD		(UL(1) << 17)
 #define MDCR_EL2_TPMS		(UL(1) << 14)
 #define MDCR_EL2_E2PB_MASK	(UL(0x3))
 #define MDCR_EL2_E2PB_SHIFT	(UL(12))
@@ -293,6 +309,12 @@
 #define MDCR_EL2_TPM		(UL(1) << 6)
 #define MDCR_EL2_TPMCR		(UL(1) << 5)
 #define MDCR_EL2_HPMN_MASK	(UL(0x1F))
+#define MDCR_EL2_RES0		(GENMASK(63, 37) |	\
+				 GENMASK(35, 30) |	\
+				 GENMASK(25, 24) |	\
+				 GENMASK(22, 20) |	\
+				 BIT(18) |		\
+				 GENMASK(16, 15))
 
 /* For compatibility with fault code shared with 32-bit */
 #define FSC_FAULT	ESR_ELx_FSC_FAULT
-- 
2.33.0.rc1.237.g0d66db33f3-goog



* [PATCH v4 11/15] KVM: arm64: Guest exit handlers for nVHE hyp
  2021-08-17  8:11 [PATCH v4 00/15] KVM: arm64: Fixed features for protected VMs Fuad Tabba
                   ` (9 preceding siblings ...)
  2021-08-17  8:11 ` [PATCH v4 10/15] KVM: arm64: Add config register bit definitions Fuad Tabba
@ 2021-08-17  8:11 ` Fuad Tabba
  2021-08-18 16:45   ` Marc Zyngier
  2021-08-17  8:11 ` [PATCH v4 12/15] KVM: arm64: Add trap handlers for protected VMs Fuad Tabba
                   ` (4 subsequent siblings)
  15 siblings, 1 reply; 30+ messages in thread
From: Fuad Tabba @ 2021-08-17  8:11 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, oupton,
	qperret, kvm, linux-arm-kernel, kernel-team, tabba

Add an array of pointers to handlers for various trap reasons in
nVHE code.

The current code selects how to fix up a guest on exit based on a
series of if/else statements. Future patches will also require
different handling for guest exits. Create an array of handlers to
consolidate them.

No functional change intended as the array isn't populated yet.
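
For illustration, a populated entry might eventually look like this
(hypothetical handler, not part of this patch):

static int handle_pvm_sys64(struct kvm_vcpu *vcpu)
{
	/* Emulate the access here, or return 0 to defer to the host. */
	return 1;	/* non-zero: handled, resume the guest */
}

with the corresponding array entry:

	[ESR_ELx_EC_SYS64]	= handle_pvm_sys64,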

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/hyp/include/hyp/switch.h | 43 +++++++++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/switch.c        | 33 +++++++++++++++++++
 2 files changed, 76 insertions(+)

diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index a0e78a6027be..5a2b89b96c67 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -409,6 +409,46 @@ static inline bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu)
 	return true;
 }
 
+typedef int (*exit_handle_fn)(struct kvm_vcpu *);
+
+exit_handle_fn kvm_get_nvhe_exit_handler(struct kvm_vcpu *vcpu);
+
+static exit_handle_fn kvm_get_hyp_exit_handler(struct kvm_vcpu *vcpu)
+{
+	return is_nvhe_hyp_code() ? kvm_get_nvhe_exit_handler(vcpu) : NULL;
+}
+
+/*
+ * Allow the hypervisor to handle the exit with an exit handler if it has one.
+ *
+ * Returns true if the hypervisor handled the exit, and control should go back
+ * to the guest, or false if it hasn't.
+ */
+static bool kvm_hyp_handle_exit(struct kvm_vcpu *vcpu)
+{
+	bool is_handled = false;
+	exit_handle_fn exit_handler = kvm_get_hyp_exit_handler(vcpu);
+
+	if (exit_handler) {
+		/*
+		 * There's limited vcpu context here since it's not synced yet.
+		 * Ensure that relevant vcpu context that might be used by the
+		 * exit_handler is in sync before it's called and if handled.
+		 */
+		*vcpu_pc(vcpu) = read_sysreg_el2(SYS_ELR);
+		*vcpu_cpsr(vcpu) = read_sysreg_el2(SYS_SPSR);
+
+		is_handled = exit_handler(vcpu);
+
+		if (is_handled) {
+			write_sysreg_el2(*vcpu_pc(vcpu), SYS_ELR);
+			write_sysreg_el2(*vcpu_cpsr(vcpu), SYS_SPSR);
+		}
+	}
+
+	return is_handled;
+}
+
 /*
  * Return true when we were able to fixup the guest exit and should return to
  * the guest, false when we should restore the host state and return to the
@@ -496,6 +536,9 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 			goto guest;
 	}
 
+	/* Check if there's an exit handler and allow it to handle the exit. */
+	if (kvm_hyp_handle_exit(vcpu))
+		goto guest;
 exit:
 	/* Return to the host kernel and handle the exit */
 	return false;
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 86f3d6482935..b7f25307a7b9 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -158,6 +158,39 @@ static void __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt)
 		write_sysreg(pmu->events_host, pmcntenset_el0);
 }
 
+static exit_handle_fn hyp_exit_handlers[] = {
+	[0 ... ESR_ELx_EC_MAX]		= NULL,
+	[ESR_ELx_EC_WFx]		= NULL,
+	[ESR_ELx_EC_CP15_32]		= NULL,
+	[ESR_ELx_EC_CP15_64]		= NULL,
+	[ESR_ELx_EC_CP14_MR]		= NULL,
+	[ESR_ELx_EC_CP14_LS]		= NULL,
+	[ESR_ELx_EC_CP14_64]		= NULL,
+	[ESR_ELx_EC_HVC32]		= NULL,
+	[ESR_ELx_EC_SMC32]		= NULL,
+	[ESR_ELx_EC_HVC64]		= NULL,
+	[ESR_ELx_EC_SMC64]		= NULL,
+	[ESR_ELx_EC_SYS64]		= NULL,
+	[ESR_ELx_EC_SVE]		= NULL,
+	[ESR_ELx_EC_IABT_LOW]		= NULL,
+	[ESR_ELx_EC_DABT_LOW]		= NULL,
+	[ESR_ELx_EC_SOFTSTP_LOW]	= NULL,
+	[ESR_ELx_EC_WATCHPT_LOW]	= NULL,
+	[ESR_ELx_EC_BREAKPT_LOW]	= NULL,
+	[ESR_ELx_EC_BKPT32]		= NULL,
+	[ESR_ELx_EC_BRK64]		= NULL,
+	[ESR_ELx_EC_FP_ASIMD]		= NULL,
+	[ESR_ELx_EC_PAC]		= NULL,
+};
+
+exit_handle_fn kvm_get_nvhe_exit_handler(struct kvm_vcpu *vcpu)
+{
+	u32 esr = kvm_vcpu_get_esr(vcpu);
+	u8 esr_ec = ESR_ELx_EC(esr);
+
+	return hyp_exit_handlers[esr_ec];
+}
+
 /* Switch to the guest for legacy non-VHE systems */
 int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 {
-- 
2.33.0.rc1.237.g0d66db33f3-goog



* [PATCH v4 12/15] KVM: arm64: Add trap handlers for protected VMs
  2021-08-17  8:11 [PATCH v4 00/15] KVM: arm64: Fixed features for protected VMs Fuad Tabba
                   ` (10 preceding siblings ...)
  2021-08-17  8:11 ` [PATCH v4 11/15] KVM: arm64: Guest exit handlers for nVHE hyp Fuad Tabba
@ 2021-08-17  8:11 ` Fuad Tabba
  2021-08-17  8:11 ` [PATCH v4 13/15] KVM: arm64: Move sanitized copies of CPU features Fuad Tabba
                   ` (3 subsequent siblings)
  15 siblings, 0 replies; 30+ messages in thread
From: Fuad Tabba @ 2021-08-17  8:11 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, oupton,
	qperret, kvm, linux-arm-kernel, kernel-team, tabba

Add trap handlers for protected VMs. These are mainly for Sys64
and debug traps.

No functional change intended, as these are not yet hooked into the
guest exit handlers introduced earlier. So even when trapping is
triggered, the exit handlers let the host handle it, as before.
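
The rough shape of such a handler (a hypothetical sketch; the real
handlers, which also apply the RESTRICT_UNSIGNED capping, live in the
new hyp/nvhe/sys_regs.c below):

static bool pvm_read_id_aa64pfr0(struct kvm_vcpu *vcpu,
				 struct sys_reg_params *p)
{
	if (p->is_write)
		return false;	/* the ID registers are read-only */

	/* Expose only the allowed feature fields to the protected guest. */
	p->regval = id_aa64pfr0_el1_sys_val & PVM_ID_AA64PFR0_ALLOW;
	return true;
}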

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_fixed_config.h | 170 +++++++++
 arch/arm64/include/asm/kvm_host.h         |   2 +
 arch/arm64/include/asm/kvm_hyp.h          |   3 +
 arch/arm64/kvm/Makefile                   |   2 +-
 arch/arm64/kvm/arm.c                      |  11 +
 arch/arm64/kvm/hyp/nvhe/Makefile          |   2 +-
 arch/arm64/kvm/hyp/nvhe/sys_regs.c        | 430 ++++++++++++++++++++++
 arch/arm64/kvm/pkvm.c                     | 185 ++++++++++
 8 files changed, 803 insertions(+), 2 deletions(-)
 create mode 100644 arch/arm64/include/asm/kvm_fixed_config.h
 create mode 100644 arch/arm64/kvm/hyp/nvhe/sys_regs.c
 create mode 100644 arch/arm64/kvm/pkvm.c

diff --git a/arch/arm64/include/asm/kvm_fixed_config.h b/arch/arm64/include/asm/kvm_fixed_config.h
new file mode 100644
index 000000000000..40c18380156d
--- /dev/null
+++ b/arch/arm64/include/asm/kvm_fixed_config.h
@@ -0,0 +1,170 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2021 Google LLC
+ * Author: Fuad Tabba <tabba@google.com>
+ */
+
+#ifndef __ARM64_KVM_FIXED_CONFIG_H__
+#define __ARM64_KVM_FIXED_CONFIG_H__
+
+#include <asm/sysreg.h>
+
+/*
+ * This file contains definitions for features to be allowed or restricted for
+ * guest virtual machines, depending on the mode KVM is running in and on the
+ * type of guest that is running.
+ *
+ * The ALLOW masks represent a bitmask of feature fields that are allowed
+ * without any restrictions as long as they are supported by the system.
+ *
+ * The RESTRICT_UNSIGNED masks, if present, represent unsigned fields for
+ * features that are restricted to support at most the specified feature.
+ *
+ * If a feature field is not present in either, then it is not supported.
+ *
+ * The approach taken for protected VMs is to allow features that are:
+ * - Needed by common Linux distributions (e.g., floating point)
+ * - Trivial to support, e.g., supporting the feature does not introduce or
+ * require tracking of additional state in KVM
+ * - Cannot be trapped, so the guest cannot be prevented from using them anyway
+ */
+
+/*
+ * Allow for protected VMs:
+ * - Floating-point and Advanced SIMD
+ * - Data Independent Timing
+ */
+#define PVM_ID_AA64PFR0_ALLOW (\
+	ARM64_FEATURE_MASK(ID_AA64PFR0_FP) | \
+	ARM64_FEATURE_MASK(ID_AA64PFR0_ASIMD) | \
+	ARM64_FEATURE_MASK(ID_AA64PFR0_DIT) \
+	)
+
+/*
+ * Restrict to the following *unsigned* features for protected VMs:
+ * - AArch64 guests only (no support for AArch32 guests):
+ *	AArch32 adds complexity in trap handling, emulation, condition codes,
+ *	etc...
+ * - RAS (v1)
+ *	Supported by KVM
+ */
+#define PVM_ID_AA64PFR0_RESTRICT_UNSIGNED (\
+	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL0), ID_AA64PFR0_ELx_64BIT_ONLY) | \
+	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1), ID_AA64PFR0_ELx_64BIT_ONLY) | \
+	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL2), ID_AA64PFR0_ELx_64BIT_ONLY) | \
+	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL3), ID_AA64PFR0_ELx_64BIT_ONLY) | \
+	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_RAS), ID_AA64PFR0_RAS_V1) \
+	)
+
+/*
+ * Allow for protected VMs:
+ * - Branch Target Identification
+ * - Speculative Store Bypassing
+ */
+#define PVM_ID_AA64PFR1_ALLOW (\
+	ARM64_FEATURE_MASK(ID_AA64PFR1_BT) | \
+	ARM64_FEATURE_MASK(ID_AA64PFR1_SSBS) \
+	)
+
+/*
+ * No support for Scalable Vectors for protected VMs:
+ *	Requires additional support from KVM, e.g., context-switching and
+ *	trapping at EL2
+ */
+#define PVM_ID_AA64ZFR0_ALLOW (0ULL)
+
+/*
+ * No support for debug, including breakpoints, and watchpoints for protected
+ * VMs:
+ *	The Arm architecture mandates support for at least the Armv8 debug
+ *	architecture, which would include at least 2 hardware breakpoints and
+ *	watchpoints. Providing that support to protected guests adds
+ *	considerable state and complexity. Therefore, the reserved value of 0 is
+ *	used for debug-related fields.
+ */
+#define PVM_ID_AA64DFR0_ALLOW (0ULL)
+
+/*
+ * Allow for protected VMs:
+ * - Mixed-endian
+ * - Distinction between Secure and Non-secure Memory
+ * - Mixed-endian at EL0 only
+ * - Non-context synchronizing exception entry and exit
+ */
+#define PVM_ID_AA64MMFR0_ALLOW (\
+	ARM64_FEATURE_MASK(ID_AA64MMFR0_BIGENDEL) | \
+	ARM64_FEATURE_MASK(ID_AA64MMFR0_SNSMEM) | \
+	ARM64_FEATURE_MASK(ID_AA64MMFR0_BIGENDEL0) | \
+	ARM64_FEATURE_MASK(ID_AA64MMFR0_EXS) \
+	)
+
+/*
+ * Restrict to the following *unsigned* features for protected VMs:
+ * - 40-bit IPA
+ * - 16-bit ASID
+ */
+#define PVM_ID_AA64MMFR0_RESTRICT_UNSIGNED (\
+	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64MMFR0_PARANGE), ID_AA64MMFR0_PARANGE_40) | \
+	FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64MMFR0_ASID), ID_AA64MMFR0_ASID_16) \
+	)
+
+/*
+ * Allow for protected VMs:
+ * - Hardware translation table updates to Access flag and Dirty state
+ * - Number of VMID bits from CPU
+ * - Hierarchical Permission Disables
+ * - Privileged Access Never
+ * - SError interrupt exceptions from speculative reads
+ * - Enhanced Translation Synchronization
+ */
+#define PVM_ID_AA64MMFR1_ALLOW (\
+	ARM64_FEATURE_MASK(ID_AA64MMFR1_HADBS) | \
+	ARM64_FEATURE_MASK(ID_AA64MMFR1_VMIDBITS) | \
+	ARM64_FEATURE_MASK(ID_AA64MMFR1_HPD) | \
+	ARM64_FEATURE_MASK(ID_AA64MMFR1_PAN) | \
+	ARM64_FEATURE_MASK(ID_AA64MMFR1_SPECSEI) | \
+	ARM64_FEATURE_MASK(ID_AA64MMFR1_ETS) \
+	)
+
+/*
+ * Allow for protected VMs:
+ * - Common not Private translations
+ * - User Access Override
+ * - IESB bit in the SCTLR_ELx registers
+ * - Unaligned single-copy atomicity and atomic functions
+ * - ESR_ELx.EC value on an exception by read access to feature ID space
+ * - TTL field in address operations.
+ * - Break-before-make sequences when changing translation block size
+ * - E0PDx mechanism
+ */
+#define PVM_ID_AA64MMFR2_ALLOW (\
+	ARM64_FEATURE_MASK(ID_AA64MMFR2_CNP) | \
+	ARM64_FEATURE_MASK(ID_AA64MMFR2_UAO) | \
+	ARM64_FEATURE_MASK(ID_AA64MMFR2_IESB) | \
+	ARM64_FEATURE_MASK(ID_AA64MMFR2_AT) | \
+	ARM64_FEATURE_MASK(ID_AA64MMFR2_IDS) | \
+	ARM64_FEATURE_MASK(ID_AA64MMFR2_TTL) | \
+	ARM64_FEATURE_MASK(ID_AA64MMFR2_BBM) | \
+	ARM64_FEATURE_MASK(ID_AA64MMFR2_E0PD) \
+	)
+
+/*
+ * Allow all features in this register for protected VMs:
+ * - LS64
+ * - XS
+ * - I8MM
+ * - DGH
+ * - BF16
+ * - SPECRES
+ * - SB
+ * - FRINTTS
+ * - PAuth
+ * - FPAC
+ * - LRCPC
+ * - FCMA
+ * - JSCVT
+ * - DPB
+ */
+#define PVM_ID_AA64ISAR1_ALLOW (~0ULL)
+
+#endif /* __ARM64_KVM_FIXED_CONFIG_H__ */
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index ac67d5699c68..e1ceadd69575 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -780,6 +780,8 @@ static inline bool kvm_vm_is_protected(struct kvm *kvm)
 	return false;
 }
 
+void kvm_init_protected_traps(struct kvm_vcpu *vcpu);
+
 int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature);
 bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
 
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 657d0c94cf82..3f4866322f85 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -115,7 +115,10 @@ int __pkvm_init(phys_addr_t phys, unsigned long size, unsigned long nr_cpus,
 void __noreturn __host_enter(struct kvm_cpu_context *host_ctxt);
 #endif
 
+extern u64 kvm_nvhe_sym(id_aa64pfr0_el1_sys_val);
+extern u64 kvm_nvhe_sym(id_aa64pfr1_el1_sys_val);
 extern u64 kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val);
 extern u64 kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val);
+extern u64 kvm_nvhe_sym(id_aa64mmfr2_el1_sys_val);
 
 #endif /* __ARM64_KVM_HYP_H__ */
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 989bb5dad2c8..0be63f5c495f 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -14,7 +14,7 @@ kvm-y := $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o $(KVM)/eventfd.o \
 	 $(KVM)/vfio.o $(KVM)/irqchip.o $(KVM)/binary_stats.o \
 	 arm.o mmu.o mmio.o psci.o perf.o hypercalls.o pvtime.o \
 	 inject_fault.o va_layout.o handle_exit.o \
-	 guest.o debug.o reset.o sys_regs.o \
+	 guest.o debug.o pkvm.o reset.o sys_regs.o \
 	 vgic-sys-reg-v3.o fpsimd.o pmu.o \
 	 arch_timer.o trng.o\
 	 vgic/vgic.o vgic/vgic-init.o \
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 14b12f2c08c0..75fff5cd6a6c 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -618,6 +618,14 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
 
 	ret = kvm_arm_pmu_v3_enable(vcpu);
 
+	/*
+	 * Initialize traps for protected VMs.
+	 * NOTE: Move trap initialization to EL2 once the code is in place for
+	 * maintaining protected VM state at EL2 instead of the host.
+	 */
+	if (kvm_vm_is_protected(kvm))
+		kvm_init_protected_traps(vcpu);
+
 	return ret;
 }
 
@@ -1781,8 +1789,11 @@ static int kvm_hyp_init_protection(u32 hyp_va_bits)
 	void *addr = phys_to_virt(hyp_mem_base);
 	int ret;
 
+	kvm_nvhe_sym(id_aa64pfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
+	kvm_nvhe_sym(id_aa64pfr1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1);
 	kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
 	kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
+	kvm_nvhe_sym(id_aa64mmfr2_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR2_EL1);
 
 	ret = create_hyp_mappings(addr, addr + hyp_mem_size, PAGE_HYP);
 	if (ret)
diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index 5df6193fc430..a23f417a0c20 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -14,7 +14,7 @@ lib-objs := $(addprefix ../../../lib/, $(lib-objs))
 
 obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \
 	 hyp-main.o hyp-smp.o psci-relay.o early_alloc.o stub.o page_alloc.o \
-	 cache.o setup.o mm.o mem_protect.o
+	 cache.o setup.o mm.o mem_protect.o sys_regs.o
 obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
 	 ../fpsimd.o ../hyp-entry.o ../exception.o ../pgtable.o
 obj-y += $(lib-objs)
diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
new file mode 100644
index 000000000000..cd126d45cbcc
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
@@ -0,0 +1,430 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2021 Google LLC
+ * Author: Fuad Tabba <tabba@google.com>
+ */
+
+#include <linux/kvm_host.h>
+
+#include <asm/kvm_asm.h>
+#include <asm/kvm_emulate.h>
+#include <asm/kvm_fixed_config.h>
+#include <asm/kvm_mmu.h>
+
+#include <hyp/adjust_pc.h>
+
+#include "../../sys_regs.h"
+
+/*
+ * Copies of the host's CPU feature registers holding sanitized values.
+ */
+u64 id_aa64pfr0_el1_sys_val;
+u64 id_aa64pfr1_el1_sys_val;
+u64 id_aa64mmfr2_el1_sys_val;
+
+/*
+ * Inject an unknown/undefined exception to the guest.
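+ *
+ * The exception class is set to "unknown reason" (EC = 0x0), which the
+ * architecture uses for unallocated or undefined system registers and
+ * instructions.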
+ */
+static void inject_undef(struct kvm_vcpu *vcpu)
+{
+	u32 esr = (ESR_ELx_EC_UNKNOWN << ESR_ELx_EC_SHIFT);
+
+	vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA64_EL1 |
+			     KVM_ARM64_EXCEPT_AA64_ELx_SYNC |
+			     KVM_ARM64_PENDING_EXCEPTION);
+
+	__kvm_adjust_pc(vcpu);
+
+	write_sysreg_el1(esr, SYS_ESR);
+	write_sysreg_el1(read_sysreg_el2(SYS_ELR), SYS_ELR);
+}
+
+/*
+ * Accessor for undefined accesses.
+ */
+static bool undef_access(struct kvm_vcpu *vcpu,
+			 struct sys_reg_params *p,
+			 const struct sys_reg_desc *r)
+{
+	inject_undef(vcpu);
+	return false;
+}
+
+/*
+ * Accessors for feature registers.
+ *
+ * If access is allowed, set the regval to the protected VM's view of the
+ * register and return true.
+ * Otherwise, inject an undefined exception and return false.
+ */
+
+/*
+ * Returns the restricted feature values of the feature register based on the
+ * limitations in restrict_fields.
+ * Note: Use only for unsigned feature field values.
+ */
+static u64 get_restricted_features_unsigned(u64 sys_reg_val,
+					    u64 restrict_fields)
+{
+	u64 value = 0UL;
+	u64 mask = GENMASK_ULL(ARM64_FEATURE_FIELD_BITS - 1, 0);
+
+	/*
+	 * According to the Arm Architecture Reference Manual, feature fields
+	 * use increasing values to indicate increases in functionality.
+	 * Iterate over the restricted feature fields and calculate the minimum
+	 * unsigned value between the one supported by the system, and what the
+	 * value is being restricted to.
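+	 *
+	 * For example, with 4-bit feature fields: if the system reports 0x5
+	 * for a field and the restriction specifies 0x2, the guest is shown
+	 * min(0x5, 0x2) = 0x2.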
+	 */
+	while (sys_reg_val && restrict_fields) {
+		value |= min(sys_reg_val & mask, restrict_fields & mask);
+		sys_reg_val &= ~mask;
+		restrict_fields &= ~mask;
+		mask <<= ARM64_FEATURE_FIELD_BITS;
+	}
+
+	return value;
+}
+
+/* Accessor for ID_AA64PFR0_EL1. */
+static bool pvm_access_id_aa64pfr0(struct kvm_vcpu *vcpu,
+				   struct sys_reg_params *p,
+				   const struct sys_reg_desc *r)
+{
+	const struct kvm *kvm = (const struct kvm *) kern_hyp_va(vcpu->kvm);
+	u64 set_mask = 0;
+
+	if (p->is_write)
+		return undef_access(vcpu, p, r);
+
+	set_mask |= get_restricted_features_unsigned(id_aa64pfr0_el1_sys_val,
+		PVM_ID_AA64PFR0_RESTRICT_UNSIGNED);
+
+	/* Spectre and Meltdown mitigation in KVM */
+	set_mask |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_CSV2),
+			       (u64)kvm->arch.pfr0_csv2);
+	set_mask |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_CSV3),
+			       (u64)kvm->arch.pfr0_csv3);
+
+	p->regval = (id_aa64pfr0_el1_sys_val & PVM_ID_AA64PFR0_ALLOW) |
+		    set_mask;
+	return true;
+}
+
+/* Accessor for ID_AA64PFR1_EL1. */
+static bool pvm_access_id_aa64pfr1(struct kvm_vcpu *vcpu,
+				   struct sys_reg_params *p,
+				   const struct sys_reg_desc *r)
+{
+	if (p->is_write)
+		return undef_access(vcpu, p, r);
+
+	p->regval = id_aa64pfr1_el1_sys_val & PVM_ID_AA64PFR1_ALLOW;
+	return true;
+}
+
+/* Accessor for ID_AA64ZFR0_EL1. */
+static bool pvm_access_id_aa64zfr0(struct kvm_vcpu *vcpu,
+				   struct sys_reg_params *p,
+				   const struct sys_reg_desc *r)
+{
+	if (p->is_write)
+		return undef_access(vcpu, p, r);
+
+	/*
+	 * No support for Scalable Vectors; therefore, pKVM has no sanitized
+	 * copy of the feature id register.
+	 */
+	BUILD_BUG_ON(PVM_ID_AA64ZFR0_ALLOW != 0ULL);
+
+	p->regval = 0;
+	return true;
+}
+
+/* Accessor for ID_AA64DFR0_EL1. */
+static bool pvm_access_id_aa64dfr0(struct kvm_vcpu *vcpu,
+				   struct sys_reg_params *p,
+				   const struct sys_reg_desc *r)
+{
+	if (p->is_write)
+		return undef_access(vcpu, p, r);
+
+	/*
+	 * No support for debug, including breakpoints and watchpoints;
+	 * therefore, pKVM has no sanitized copy of the feature id register.
+	 */
+	BUILD_BUG_ON(PVM_ID_AA64DFR0_ALLOW != 0ULL);
+
+	p->regval = 0;
+	return true;
+}
+
+/*
+ * No restrictions on ID_AA64ISAR1_EL1 features; therefore, pKVM has no
+ * sanitized copy of the feature id register and it is handled by the host.
+ */
+static_assert(PVM_ID_AA64ISAR1_ALLOW == ~0ULL);
+
+/* Accessor for ID_AA64MMFR0_EL1. */
+static bool pvm_access_id_aa64mmfr0(struct kvm_vcpu *vcpu,
+				    struct sys_reg_params *p,
+				    const struct sys_reg_desc *r)
+{
+	u64 set_mask = 0;
+
+	if (p->is_write)
+		return undef_access(vcpu, p, r);
+
+	set_mask |= get_restricted_features_unsigned(id_aa64mmfr0_el1_sys_val,
+		PVM_ID_AA64MMFR0_RESTRICT_UNSIGNED);
+
+	p->regval = (id_aa64mmfr0_el1_sys_val & PVM_ID_AA64MMFR0_ALLOW) |
+		     set_mask;
+	return true;
+}
+
+/* Accessor for ID_AA64MMFR1_EL1. */
+static bool pvm_access_id_aa64mmfr1(struct kvm_vcpu *vcpu,
+				    struct sys_reg_params *p,
+				    const struct sys_reg_desc *r)
+{
+	if (p->is_write)
+		return undef_access(vcpu, p, r);
+
+	p->regval = id_aa64mmfr1_el1_sys_val & PVM_ID_AA64MMFR1_ALLOW;
+	return true;
+}
+
+/* Accessor for ID_AA64MMFR2_EL1. */
+static bool pvm_access_id_aa64mmfr2(struct kvm_vcpu *vcpu,
+				    struct sys_reg_params *p,
+				    const struct sys_reg_desc *r)
+{
+	if (p->is_write)
+		return undef_access(vcpu, p, r);
+
+	p->regval = id_aa64mmfr2_el1_sys_val & PVM_ID_AA64MMFR2_ALLOW;
+	return true;
+}
+
+/*
+ * Accessor for AArch32 Processor Feature Registers.
+ *
+ * The value of these registers is "unknown" according to the spec if AArch32
+ * isn't supported.
+ */
+static bool pvm_access_id_aarch32(struct kvm_vcpu *vcpu,
+				  struct sys_reg_params *p,
+				  const struct sys_reg_desc *r)
+{
+	if (p->is_write)
+		return undef_access(vcpu, p, r);
+
+	/*
+	 * No support for AArch32 guests; therefore, pKVM has no sanitized copy
+	 * of AArch32 feature id registers.
+	 */
+	BUILD_BUG_ON(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1),
+		     PVM_ID_AA64PFR0_RESTRICT_UNSIGNED) >
+			ID_AA64PFR0_ELx_64BIT_ONLY);
+
+	/* Use 0 for architecturally "unknown" values. */
+	p->regval = 0;
+	return true;
+}
+
+/* Mark the specified system register as an AArch32 feature register. */
+#define AARCH32(REG) { SYS_DESC(REG), .access = pvm_access_id_aarch32 }
+
+/* Mark the specified system register as not being handled in hyp. */
+#define HOST_HANDLED(REG) { SYS_DESC(REG), .access = NULL }
+
+/*
+ * Architected system registers.
+ * Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2
+ *
+ * NOTE: Anything not explicitly listed here will be *restricted by default*,
+ * i.e., it will lead to injecting an exception into the guest.
+ */
+static const struct sys_reg_desc pvm_sys_reg_descs[] = {
+	/* Cache maintenance by set/way operations are restricted. */
+
+	/* Debug and Trace Registers are all restricted */
+
+	/* AArch64 mappings of the AArch32 ID registers */
+	/* CRm=1 */
+	AARCH32(SYS_ID_PFR0_EL1),
+	AARCH32(SYS_ID_PFR1_EL1),
+	AARCH32(SYS_ID_DFR0_EL1),
+	AARCH32(SYS_ID_AFR0_EL1),
+	AARCH32(SYS_ID_MMFR0_EL1),
+	AARCH32(SYS_ID_MMFR1_EL1),
+	AARCH32(SYS_ID_MMFR2_EL1),
+	AARCH32(SYS_ID_MMFR3_EL1),
+
+	/* CRm=2 */
+	AARCH32(SYS_ID_ISAR0_EL1),
+	AARCH32(SYS_ID_ISAR1_EL1),
+	AARCH32(SYS_ID_ISAR2_EL1),
+	AARCH32(SYS_ID_ISAR3_EL1),
+	AARCH32(SYS_ID_ISAR4_EL1),
+	AARCH32(SYS_ID_ISAR5_EL1),
+	AARCH32(SYS_ID_MMFR4_EL1),
+	AARCH32(SYS_ID_ISAR6_EL1),
+
+	/* CRm=3 */
+	AARCH32(SYS_MVFR0_EL1),
+	AARCH32(SYS_MVFR1_EL1),
+	AARCH32(SYS_MVFR2_EL1),
+	AARCH32(SYS_ID_PFR2_EL1),
+	AARCH32(SYS_ID_DFR1_EL1),
+	AARCH32(SYS_ID_MMFR5_EL1),
+
+	/* AArch64 ID registers */
+	/* CRm=4 */
+	{ SYS_DESC(SYS_ID_AA64PFR0_EL1), .access = pvm_access_id_aa64pfr0 },
+	{ SYS_DESC(SYS_ID_AA64PFR1_EL1), .access = pvm_access_id_aa64pfr1 },
+	{ SYS_DESC(SYS_ID_AA64ZFR0_EL1), .access = pvm_access_id_aa64zfr0 },
+	{ SYS_DESC(SYS_ID_AA64DFR0_EL1), .access = pvm_access_id_aa64dfr0 },
+	HOST_HANDLED(SYS_ID_AA64DFR1_EL1),
+	HOST_HANDLED(SYS_ID_AA64AFR0_EL1),
+	HOST_HANDLED(SYS_ID_AA64AFR1_EL1),
+	HOST_HANDLED(SYS_ID_AA64ISAR0_EL1),
+	HOST_HANDLED(SYS_ID_AA64ISAR1_EL1),
+	{ SYS_DESC(SYS_ID_AA64MMFR0_EL1), .access = pvm_access_id_aa64mmfr0 },
+	{ SYS_DESC(SYS_ID_AA64MMFR1_EL1), .access = pvm_access_id_aa64mmfr1 },
+	{ SYS_DESC(SYS_ID_AA64MMFR2_EL1), .access = pvm_access_id_aa64mmfr2 },
+
+	HOST_HANDLED(SYS_SCTLR_EL1),
+	HOST_HANDLED(SYS_ACTLR_EL1),
+	HOST_HANDLED(SYS_CPACR_EL1),
+
+	HOST_HANDLED(SYS_RGSR_EL1),
+	HOST_HANDLED(SYS_GCR_EL1),
+
+	/* Scalable Vector Registers are restricted. */
+
+	HOST_HANDLED(SYS_TTBR0_EL1),
+	HOST_HANDLED(SYS_TTBR1_EL1),
+	HOST_HANDLED(SYS_TCR_EL1),
+
+	HOST_HANDLED(SYS_APIAKEYLO_EL1),
+	HOST_HANDLED(SYS_APIAKEYHI_EL1),
+	HOST_HANDLED(SYS_APIBKEYLO_EL1),
+	HOST_HANDLED(SYS_APIBKEYHI_EL1),
+	HOST_HANDLED(SYS_APDAKEYLO_EL1),
+	HOST_HANDLED(SYS_APDAKEYHI_EL1),
+	HOST_HANDLED(SYS_APDBKEYLO_EL1),
+	HOST_HANDLED(SYS_APDBKEYHI_EL1),
+	HOST_HANDLED(SYS_APGAKEYLO_EL1),
+	HOST_HANDLED(SYS_APGAKEYHI_EL1),
+
+	HOST_HANDLED(SYS_AFSR0_EL1),
+	HOST_HANDLED(SYS_AFSR1_EL1),
+	HOST_HANDLED(SYS_ESR_EL1),
+
+	HOST_HANDLED(SYS_ERRIDR_EL1),
+	HOST_HANDLED(SYS_ERRSELR_EL1),
+	HOST_HANDLED(SYS_ERXFR_EL1),
+	HOST_HANDLED(SYS_ERXCTLR_EL1),
+	HOST_HANDLED(SYS_ERXSTATUS_EL1),
+	HOST_HANDLED(SYS_ERXADDR_EL1),
+	HOST_HANDLED(SYS_ERXMISC0_EL1),
+	HOST_HANDLED(SYS_ERXMISC1_EL1),
+
+	HOST_HANDLED(SYS_TFSR_EL1),
+	HOST_HANDLED(SYS_TFSRE0_EL1),
+
+	HOST_HANDLED(SYS_FAR_EL1),
+	HOST_HANDLED(SYS_PAR_EL1),
+
+	/* Performance Monitoring Registers are restricted. */
+
+	HOST_HANDLED(SYS_MAIR_EL1),
+	HOST_HANDLED(SYS_AMAIR_EL1),
+
+	/* Limited Ordering Regions Registers are restricted. */
+
+	HOST_HANDLED(SYS_VBAR_EL1),
+	HOST_HANDLED(SYS_DISR_EL1),
+
+	/* GIC CPU Interface registers are restricted. */
+
+	HOST_HANDLED(SYS_CONTEXTIDR_EL1),
+	HOST_HANDLED(SYS_TPIDR_EL1),
+
+	HOST_HANDLED(SYS_SCXTNUM_EL1),
+
+	HOST_HANDLED(SYS_CNTKCTL_EL1),
+
+	HOST_HANDLED(SYS_CCSIDR_EL1),
+	HOST_HANDLED(SYS_CLIDR_EL1),
+	HOST_HANDLED(SYS_CSSELR_EL1),
+	HOST_HANDLED(SYS_CTR_EL0),
+
+	/* Performance Monitoring Registers are restricted. */
+
+	HOST_HANDLED(SYS_TPIDR_EL0),
+	HOST_HANDLED(SYS_TPIDRRO_EL0),
+
+	HOST_HANDLED(SYS_SCXTNUM_EL0),
+
+	/* Activity Monitoring Registers are restricted. */
+
+	HOST_HANDLED(SYS_CNTP_TVAL_EL0),
+	HOST_HANDLED(SYS_CNTP_CTL_EL0),
+	HOST_HANDLED(SYS_CNTP_CVAL_EL0),
+
+	/* Performance Monitoring Registers are restricted. */
+
+	HOST_HANDLED(SYS_DACR32_EL2),
+	HOST_HANDLED(SYS_IFSR32_EL2),
+	HOST_HANDLED(SYS_FPEXC32_EL2),
+};
+
+/*
+ * Handler for protected VM MSR, MRS or System instruction execution in AArch64.
+ *
+ * Return 1 if handled, or 0 if not.
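+ * A return of 0 defers the access to the host. A return of 1 means the
+ * access was handled at hyp, either by emulating it or by injecting an
+ * undefined exception into the guest.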
+ */
+int kvm_handle_pvm_sys64(struct kvm_vcpu *vcpu)
+{
+	const struct sys_reg_desc *r;
+	struct sys_reg_params params;
+	unsigned long esr = kvm_vcpu_get_esr(vcpu);
+	int Rt = kvm_vcpu_sys_get_rt(vcpu);
+
+	params = esr_sys64_to_params(esr);
+	params.regval = vcpu_get_reg(vcpu, Rt);
+
+	r = find_reg(&params, pvm_sys_reg_descs, ARRAY_SIZE(pvm_sys_reg_descs));
+
+	/* Undefined access (RESTRICTED). */
+	if (r == NULL) {
+		inject_undef(vcpu);
+		return 1;
+	}
+
+	/* Handled by the host (HOST_HANDLED) */
+	if (r->access == NULL)
+		return 0;
+
+	/* Handled by hyp: skip instruction if instructed to do so. */
+	if (r->access(vcpu, &params, r))
+		__kvm_skip_instr(vcpu);
+
+	vcpu_set_reg(vcpu, Rt, params.regval);
+	return 1;
+}
+
+/*
+ * Handler for protected VM restricted exceptions.
+ *
+ * Inject an undefined exception into the guest and return 1 to indicate that
+ * it was handled.
+ */
+int kvm_handle_pvm_restricted(struct kvm_vcpu *vcpu)
+{
+	inject_undef(vcpu);
+	return 1;
+}
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
new file mode 100644
index 000000000000..a66a74e65989
--- /dev/null
+++ b/arch/arm64/kvm/pkvm.c
@@ -0,0 +1,185 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * KVM host (EL1) interface to Protected KVM (pkvm) code at EL2.
+ *
+ * Copyright (C) 2021 Google LLC
+ * Author: Fuad Tabba <tabba@google.com>
+ */
+
+#include <linux/kvm_host.h>
+#include <linux/mm.h>
+
+#include <asm/kvm_fixed_config.h>
+
+/*
+ * Set trap register values for features not allowed in ID_AA64PFR0.
+ */
+static void pvm_init_traps_aa64pfr0(struct kvm_vcpu *vcpu)
+{
+	const u64 feature_ids = PVM_ID_AA64PFR0_ALLOW |
+				PVM_ID_AA64PFR0_RESTRICT_UNSIGNED;
+	u64 hcr_set = 0;
+	u64 hcr_clear = 0;
+	u64 cptr_set = 0;
+
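+	/*
+	 * feature_ids covers every field that may be exposed to the guest,
+	 * whether allowed in full or capped at a lower value. Features whose
+	 * fields are absent from it are trapped below.
+	 */
+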
+	/* Trap AArch32 guests */
+	if (FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL0), feature_ids) <
+		    ID_AA64PFR0_ELx_32BIT_64BIT ||
+	    FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1), feature_ids) <
+		    ID_AA64PFR0_ELx_32BIT_64BIT)
+		hcr_set |= HCR_RW | HCR_TID0;
+
+	/* Trap RAS unless all current versions are supported */
+	if (FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_RAS), feature_ids) <
+	    ID_AA64PFR0_RAS_V1P1) {
+		hcr_set |= HCR_TERR | HCR_TEA;
+		hcr_clear |= HCR_FIEN;
+	}
+
+	/* Trap AMU */
+	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_AMU), feature_ids)) {
+		hcr_clear |= HCR_AMVOFFEN;
+		cptr_set |= CPTR_EL2_TAM;
+	}
+
+	/* Trap ASIMD */
+	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_ASIMD), feature_ids))
+		cptr_set |= CPTR_EL2_TFP;
+
+	/* Trap SVE */
+	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_SVE), feature_ids))
+		cptr_set |= CPTR_EL2_TZ;
+
+	vcpu->arch.hcr_el2 |= hcr_set;
+	vcpu->arch.hcr_el2 &= ~hcr_clear;
+	vcpu->arch.cptr_el2 |= cptr_set;
+}
+
+/*
+ * Set trap register values for features not allowed in ID_AA64PFR1.
+ */
+static void pvm_init_traps_aa64pfr1(struct kvm_vcpu *vcpu)
+{
+	const u64 feature_ids = PVM_ID_AA64PFR1_ALLOW;
+	u64 hcr_set = 0;
+	u64 hcr_clear = 0;
+
+	/* Memory Tagging: Trap and Treat as Untagged if not allowed. */
+	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR1_MTE), feature_ids)) {
+		hcr_set |= HCR_TID5;
+		hcr_clear |= HCR_DCT | HCR_ATA;
+	}
+
+	vcpu->arch.hcr_el2 |= hcr_set;
+	vcpu->arch.hcr_el2 &= ~hcr_clear;
+}
+
+/*
+ * Set trap register values for features not allowed in ID_AA64DFR0.
+ */
+static void pvm_init_traps_aa64dfr0(struct kvm_vcpu *vcpu)
+{
+	const u64 feature_ids = PVM_ID_AA64DFR0_ALLOW;
+	u64 mdcr_set = 0;
+	u64 mdcr_clear = 0;
+	u64 cptr_set = 0;
+
+	/* Trap/constrain PMU */
+	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER), feature_ids)) {
+		mdcr_set |= MDCR_EL2_TPM | MDCR_EL2_TPMCR;
+		mdcr_clear |= MDCR_EL2_HPME | MDCR_EL2_MTPME |
+			      MDCR_EL2_HPMN_MASK;
+	}
+
+	/* Trap Debug */
+	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_DEBUGVER), feature_ids))
+		mdcr_set |= MDCR_EL2_TDRA | MDCR_EL2_TDA | MDCR_EL2_TDE;
+
+	/* Trap OS Double Lock */
+	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_DOUBLELOCK), feature_ids))
+		mdcr_set |= MDCR_EL2_TDOSA;
+
+	/* Trap SPE */
+	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMSVER), feature_ids)) {
+		mdcr_set |= MDCR_EL2_TPMS;
+		mdcr_clear |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
+	}
+
+	/* Trap Trace Filter */
+	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_TRACE_FILT), feature_ids))
+		mdcr_set |= MDCR_EL2_TTRF;
+
+	/* Trap Trace */
+	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_TRACEVER), feature_ids))
+		cptr_set |= CPTR_EL2_TTA;
+
+	vcpu->arch.mdcr_el2 |= mdcr_set;
+	vcpu->arch.mdcr_el2 &= ~mdcr_clear;
+	vcpu->arch.cptr_el2 |= cptr_set;
+}
+
+/*
+ * Set trap register values for features not allowed in ID_AA64MMFR0.
+ */
+static void pvm_init_traps_aa64mmfr0(struct kvm_vcpu *vcpu)
+{
+	const u64 feature_ids = PVM_ID_AA64MMFR0_ALLOW |
+				PVM_ID_AA64MMFR0_RESTRICT_UNSIGNED;
+	u64 mdcr_set = 0;
+
+	/* Trap Debug Communications Channel registers */
+	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR0_FGT), feature_ids))
+		mdcr_set |= MDCR_EL2_TDCC;
+
+	vcpu->arch.mdcr_el2 |= mdcr_set;
+}
+
+/*
+ * Set trap register values for features not allowed in ID_AA64MMFR1.
+ */
+static void pvm_init_traps_aa64mmfr1(struct kvm_vcpu *vcpu)
+{
+	const u64 feature_ids = PVM_ID_AA64MMFR1_ALLOW;
+	u64 hcr_set = 0;
+
+	/* Trap LOR */
+	if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR1_LOR), feature_ids))
+		hcr_set |= HCR_TLOR;
+
+	vcpu->arch.hcr_el2 |= hcr_set;
+}
+
+/*
+ * Set baseline trap register values.
+ */
+static void pvm_init_trap_regs(struct kvm_vcpu *vcpu)
+{
+	const u64 hcr_trap_feat_regs = HCR_TID3;
+	const u64 hcr_trap_impdef = HCR_TACR | HCR_TIDCP | HCR_TID1;
+
+	/*
+	 * Always trap:
+	 * - Feature id registers: to control features exposed to guests
+	 * - Implementation-defined features
+	 */
+	vcpu->arch.hcr_el2 |= hcr_trap_feat_regs | hcr_trap_impdef;
+
+	/* Clear res0 and set res1 bits to trap potential new features. */
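+	/*
+	 * If a future architecture revision assigns meaning to any of these
+	 * reserved bits, keeping them at their reserved values should leave
+	 * the associated features in their default trapped/disabled state.
+	 */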
+	vcpu->arch.hcr_el2 &= ~(HCR_RES0);
+	vcpu->arch.mdcr_el2 &= ~(MDCR_EL2_RES0);
+	vcpu->arch.cptr_el2 |= CPTR_NVHE_EL2_RES1;
+	vcpu->arch.cptr_el2 &= ~(CPTR_NVHE_EL2_RES0);
+}
+
+/*
+ * Initialize trap register values for protected VMs.
+ */
+void kvm_init_protected_traps(struct kvm_vcpu *vcpu)
+{
+	pvm_init_trap_regs(vcpu);
+	pvm_init_traps_aa64pfr0(vcpu);
+	pvm_init_traps_aa64pfr1(vcpu);
+	pvm_init_traps_aa64dfr0(vcpu);
+	pvm_init_traps_aa64mmfr0(vcpu);
+	pvm_init_traps_aa64mmfr1(vcpu);
+}
-- 
2.33.0.rc1.237.g0d66db33f3-goog


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v4 13/15] KVM: arm64: Move sanitized copies of CPU features
  2021-08-17  8:11 [PATCH v4 00/15] KVM: arm64: Fixed features for protected VMs Fuad Tabba
                   ` (11 preceding siblings ...)
  2021-08-17  8:11 ` [PATCH v4 12/15] KVM: arm64: Add trap handlers for protected VMs Fuad Tabba
@ 2021-08-17  8:11 ` Fuad Tabba
  2021-08-17  8:11 ` [PATCH v4 14/15] KVM: arm64: Trap access to pVM restricted features Fuad Tabba
                   ` (2 subsequent siblings)
  15 siblings, 0 replies; 30+ messages in thread
From: Fuad Tabba @ 2021-08-17  8:11 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, oupton,
	qperret, kvm, linux-arm-kernel, kernel-team, tabba

Move the sanitized copies of the CPU feature registers to the
recently created sys_regs.c. This consolidates all copies in a
more relevant file.

No functional change intended.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/hyp/nvhe/mem_protect.c | 6 ------
 arch/arm64/kvm/hyp/nvhe/sys_regs.c    | 2 ++
 2 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index d938ce95d3bd..925c7db7fa34 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -25,12 +25,6 @@ struct host_kvm host_kvm;
 
 static struct hyp_pool host_s2_pool;
 
-/*
- * Copies of the host's CPU features registers holding sanitized values.
- */
-u64 id_aa64mmfr0_el1_sys_val;
-u64 id_aa64mmfr1_el1_sys_val;
-
 static const u8 pkvm_hyp_id = 1;
 
 static void *host_s2_zalloc_pages_exact(size_t size)
diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
index cd126d45cbcc..d641bae0467d 100644
--- a/arch/arm64/kvm/hyp/nvhe/sys_regs.c
+++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
@@ -20,6 +20,8 @@
  */
 u64 id_aa64pfr0_el1_sys_val;
 u64 id_aa64pfr1_el1_sys_val;
+u64 id_aa64mmfr0_el1_sys_val;
+u64 id_aa64mmfr1_el1_sys_val;
 u64 id_aa64mmfr2_el1_sys_val;
 
 /*
-- 
2.33.0.rc1.237.g0d66db33f3-goog


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v4 14/15] KVM: arm64: Trap access to pVM restricted features
  2021-08-17  8:11 [PATCH v4 00/15] KVM: arm64: Fixed features for protected VMs Fuad Tabba
                   ` (12 preceding siblings ...)
  2021-08-17  8:11 ` [PATCH v4 13/15] KVM: arm64: Move sanitized copies of CPU features Fuad Tabba
@ 2021-08-17  8:11 ` Fuad Tabba
  2021-08-17  8:11 ` [PATCH v4 15/15] KVM: arm64: Handle protected guests at 32 bits Fuad Tabba
  2021-08-20 10:34 ` [PATCH v4 00/15] KVM: arm64: Fixed features for protected VMs Marc Zyngier
  15 siblings, 0 replies; 30+ messages in thread
From: Fuad Tabba @ 2021-08-17  8:11 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, oupton,
	qperret, kvm, linux-arm-kernel, kernel-team, tabba

Trap accesses to restricted features for VMs running in protected
mode.

Accesses to feature registers are emulated, and only supported
features are exposed to protected VMs.

Accesses to restricted registers as well as restricted
instructions are trapped, and an undefined exception is injected
into the protected guests, i.e., with EC = 0x0 (unknown reason).
This EC is the one used, according to the Arm Architecture
Reference Manual, for unallocated or undefined system registers
or instructions.

This only affects the functionality of protected VMs; it should
not affect non-protected VMs when KVM is running in protected
mode.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/hyp/include/hyp/switch.h |  3 +++
 arch/arm64/kvm/hyp/nvhe/switch.c        | 34 ++++++++++++++-----------
 2 files changed, 22 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 5a2b89b96c67..8431f1514280 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -33,6 +33,9 @@
 extern struct exception_table_entry __start___kvm_ex_table;
 extern struct exception_table_entry __stop___kvm_ex_table;
 
+int kvm_handle_pvm_sys64(struct kvm_vcpu *vcpu);
+int kvm_handle_pvm_restricted(struct kvm_vcpu *vcpu);
+
 /* Check whether the FP regs were dirtied while in the host-side run loop: */
 static inline bool update_fp_enabled(struct kvm_vcpu *vcpu)
 {
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index b7f25307a7b9..398e62098898 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -159,27 +159,27 @@ static void __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt)
 }
 
 static exit_handle_fn hyp_exit_handlers[] = {
-	[0 ... ESR_ELx_EC_MAX]		= NULL,
+	[0 ... ESR_ELx_EC_MAX]		= kvm_handle_pvm_restricted,
 	[ESR_ELx_EC_WFx]		= NULL,
-	[ESR_ELx_EC_CP15_32]		= NULL,
-	[ESR_ELx_EC_CP15_64]		= NULL,
-	[ESR_ELx_EC_CP14_MR]		= NULL,
-	[ESR_ELx_EC_CP14_LS]		= NULL,
-	[ESR_ELx_EC_CP14_64]		= NULL,
+	[ESR_ELx_EC_CP15_32]		= kvm_handle_pvm_restricted,
+	[ESR_ELx_EC_CP15_64]		= kvm_handle_pvm_restricted,
+	[ESR_ELx_EC_CP14_MR]		= kvm_handle_pvm_restricted,
+	[ESR_ELx_EC_CP14_LS]		= kvm_handle_pvm_restricted,
+	[ESR_ELx_EC_CP14_64]		= kvm_handle_pvm_restricted,
 	[ESR_ELx_EC_HVC32]		= NULL,
 	[ESR_ELx_EC_SMC32]		= NULL,
 	[ESR_ELx_EC_HVC64]		= NULL,
 	[ESR_ELx_EC_SMC64]		= NULL,
-	[ESR_ELx_EC_SYS64]		= NULL,
-	[ESR_ELx_EC_SVE]		= NULL,
+	[ESR_ELx_EC_SYS64]		= kvm_handle_pvm_sys64,
+	[ESR_ELx_EC_SVE]		= kvm_handle_pvm_restricted,
 	[ESR_ELx_EC_IABT_LOW]		= NULL,
 	[ESR_ELx_EC_DABT_LOW]		= NULL,
-	[ESR_ELx_EC_SOFTSTP_LOW]	= NULL,
-	[ESR_ELx_EC_WATCHPT_LOW]	= NULL,
-	[ESR_ELx_EC_BREAKPT_LOW]	= NULL,
-	[ESR_ELx_EC_BKPT32]		= NULL,
-	[ESR_ELx_EC_BRK64]		= NULL,
-	[ESR_ELx_EC_FP_ASIMD]		= NULL,
+	[ESR_ELx_EC_SOFTSTP_LOW]	= kvm_handle_pvm_restricted,
+	[ESR_ELx_EC_WATCHPT_LOW]	= kvm_handle_pvm_restricted,
+	[ESR_ELx_EC_BREAKPT_LOW]	= kvm_handle_pvm_restricted,
+	[ESR_ELx_EC_BKPT32]		= kvm_handle_pvm_restricted,
+	[ESR_ELx_EC_BRK64]		= kvm_handle_pvm_restricted,
+	[ESR_ELx_EC_FP_ASIMD]		= kvm_handle_pvm_restricted,
 	[ESR_ELx_EC_PAC]		= NULL,
 };
 
@@ -188,7 +188,11 @@ exit_handle_fn kvm_get_nvhe_exit_handler(struct kvm_vcpu *vcpu)
 	u32 esr = kvm_vcpu_get_esr(vcpu);
 	u8 esr_ec = ESR_ELx_EC(esr);
 
-	return hyp_exit_handlers[esr_ec];
+	/* For now, only protected VMs have exit handlers. */
+	if (unlikely(kvm_vm_is_protected(kern_hyp_va(vcpu->kvm))))
+		return hyp_exit_handlers[esr_ec];
+	else
+		return NULL;
 }
 
 /* Switch to the guest for legacy non-VHE systems */
-- 
2.33.0.rc1.237.g0d66db33f3-goog


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v4 15/15] KVM: arm64: Handle protected guests at 32 bits
  2021-08-17  8:11 [PATCH v4 00/15] KVM: arm64: Fixed features for protected VMs Fuad Tabba
                   ` (13 preceding siblings ...)
  2021-08-17  8:11 ` [PATCH v4 14/15] KVM: arm64: Trap access to pVM restricted features Fuad Tabba
@ 2021-08-17  8:11 ` Fuad Tabba
  2021-08-19  8:10   ` Oliver Upton
  2021-08-20 10:34 ` [PATCH v4 00/15] KVM: arm64: Fixed features for protected VMs Marc Zyngier
  15 siblings, 1 reply; 30+ messages in thread
From: Fuad Tabba @ 2021-08-17  8:11 UTC (permalink / raw)
  To: kvmarm
  Cc: maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, oupton,
	qperret, kvm, linux-arm-kernel, kernel-team, tabba

Protected KVM does not support protected AArch32 guests. However,
it is possible for the guest to force itself to run in AArch32,
potentially causing problems. Add an extra check so that if the
hypervisor catches the guest doing that, it can prevent the guest
from running again by resetting vcpu->arch.target and returning
ARM_EXCEPTION_IL.

If this were to happen, the VMM can try to fix it by
re-initializing the vcpu with KVM_ARM_VCPU_INIT; however, this is
likely not possible for protected VMs.

Adapted from commit 22f553842b14 ("KVM: arm64: Handle Asymmetric
AArch32 systems")

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/hyp/nvhe/switch.c | 37 ++++++++++++++++++++++++++++++++
 1 file changed, 37 insertions(+)

diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 398e62098898..0c24b7f473bf 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -20,6 +20,7 @@
 #include <asm/kprobes.h>
 #include <asm/kvm_asm.h>
 #include <asm/kvm_emulate.h>
+#include <asm/kvm_fixed_config.h>
 #include <asm/kvm_hyp.h>
 #include <asm/kvm_mmu.h>
 #include <asm/fpsimd.h>
@@ -195,6 +196,39 @@ exit_handle_fn kvm_get_nvhe_exit_handler(struct kvm_vcpu *vcpu)
 		return NULL;
 }
 
+/*
+ * Some guests (e.g., protected VMs) might not be allowed to run in AArch32. The
+ * check below is based on the one in kvm_arch_vcpu_ioctl_run().
+ * The ARMv8 architecture does not give the hypervisor a mechanism to prevent a
+ * guest from dropping to AArch32 EL0 if implemented by the CPU. If the
+ * hypervisor spots a guest in such a state, ensure it is handled, and don't
+ * trust the host to spot or fix it.
+ *
+ * Returns true if the check passed and the guest run loop can continue, or
+ * false if the guest should exit to the host.
+ */
+static bool check_aarch32_guest(struct kvm_vcpu *vcpu, u64 *exit_code)
+{
+	if (kvm_vm_is_protected(kern_hyp_va(vcpu->kvm)) &&
+	    vcpu_mode_is_32bit(vcpu) &&
+	    FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL0),
+					 PVM_ID_AA64PFR0_RESTRICT_UNSIGNED) <
+		ID_AA64PFR0_ELx_32BIT_64BIT) {
+		/*
+		 * As we have caught the guest red-handed, decide that it isn't
+		 * fit for purpose anymore by making the vcpu invalid. The VMM
+		 * can try and fix it by re-initializing the vcpu with
+		 * KVM_ARM_VCPU_INIT, however, this is likely not possible for
+		 * protected VMs.
+		 */
+		vcpu->arch.target = -1;
+		*exit_code = ARM_EXCEPTION_IL;
+		return false;
+	}
+
+	return true;
+}
+
 /* Switch to the guest for legacy non-VHE systems */
 int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 {
@@ -255,6 +289,9 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 		/* Jump in the fire! */
 		exit_code = __guest_enter(vcpu);
 
+		if (unlikely(!check_aarch32_guest(vcpu, &exit_code)))
+			break;
+
 		/* And we're baaack! */
 	} while (fixup_guest_exit(vcpu, &exit_code));
 
-- 
2.33.0.rc1.237.g0d66db33f3-goog


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* Re: [PATCH v4 06/15] KVM: arm64: Restore mdcr_el2 from vcpu
  2021-08-17  8:11 ` [PATCH v4 06/15] KVM: arm64: Restore mdcr_el2 from vcpu Fuad Tabba
@ 2021-08-18 13:13   ` Will Deacon
  2021-08-18 14:42   ` Marc Zyngier
  1 sibling, 0 replies; 30+ messages in thread
From: Will Deacon @ 2021-08-18 13:13 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, maz, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, oupton,
	qperret, kvm, linux-arm-kernel, kernel-team

On Tue, Aug 17, 2021 at 09:11:25AM +0100, Fuad Tabba wrote:
> On deactivating traps, restore the value of mdcr_el2 from the
> newly created and preserved host value vcpu context, rather than
> directly reading the hardware register.
> 
> Up until and including this patch the two values are the same,
> i.e., the hardware register and the vcpu one. A future patch will
> be changing the value of mdcr_el2 on activating traps, and this
> ensures that its value will be restored.
> 
> No functional change intended.
> 
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>  arch/arm64/include/asm/kvm_host.h       |  5 ++++-
>  arch/arm64/include/asm/kvm_hyp.h        |  2 +-
>  arch/arm64/kvm/hyp/include/hyp/switch.h |  6 +++++-
>  arch/arm64/kvm/hyp/nvhe/switch.c        | 13 +++++--------
>  arch/arm64/kvm/hyp/vhe/switch.c         | 14 +++++---------
>  arch/arm64/kvm/hyp/vhe/sysreg-sr.c      |  2 +-
>  6 files changed, 21 insertions(+), 21 deletions(-)

Acked-by: Will Deacon <will@kernel.org>

Will

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v4 07/15] KVM: arm64: Keep mdcr_el2's value as set by __init_el2_debug
  2021-08-17  8:11 ` [PATCH v4 07/15] KVM: arm64: Keep mdcr_el2's value as set by __init_el2_debug Fuad Tabba
@ 2021-08-18 13:17   ` Will Deacon
  0 siblings, 0 replies; 30+ messages in thread
From: Will Deacon @ 2021-08-18 13:17 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, maz, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, oupton,
	qperret, kvm, linux-arm-kernel, kernel-team

On Tue, Aug 17, 2021 at 09:11:26AM +0100, Fuad Tabba wrote:
> __init_el2_debug configures mdcr_el2 at initialization based on,
> among other things, available hardware support. Trap deactivation
> doesn't check that, so keep the initial value.
> 
> No functional change intended. However, the value of mdcr_el2
> might be different after deactivating traps than it was before
> this patch.

I think this sentence is very confusing, so I'd remove it. I also don't
think it's correct, as the EL2 initialisation code only manipulates the
bits which are being masked here.

So with that sentence removed:

Acked-by: Will Deacon <will@kernel.org>

Will

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v4 09/15] KVM: arm64: Add feature register flag definitions
  2021-08-17  8:11 ` [PATCH v4 09/15] KVM: arm64: Add feature register flag definitions Fuad Tabba
@ 2021-08-18 13:21   ` Will Deacon
  0 siblings, 0 replies; 30+ messages in thread
From: Will Deacon @ 2021-08-18 13:21 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, maz, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, oupton,
	qperret, kvm, linux-arm-kernel, kernel-team

On Tue, Aug 17, 2021 at 09:11:28AM +0100, Fuad Tabba wrote:
> Add feature register flag definitions to clarify which features
> might be supported.
> 
> Consolidate the various ID_AA64PFR0_ELx flags for all ELs.
> 
> No functional change intended.
> 
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>  arch/arm64/include/asm/cpufeature.h |  4 ++--
>  arch/arm64/include/asm/sysreg.h     | 12 ++++++++----
>  arch/arm64/kernel/cpufeature.c      |  8 ++++----
>  3 files changed, 14 insertions(+), 10 deletions(-)

Thanks, looks better now:

Acked-by: Will Deacon <will@kernel.org>

Will


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v4 03/15] KVM: arm64: MDCR_EL2 is a 64-bit register
  2021-08-17  8:11 ` [PATCH v4 03/15] KVM: arm64: MDCR_EL2 is a 64-bit register Fuad Tabba
@ 2021-08-18 14:32   ` Marc Zyngier
  0 siblings, 0 replies; 30+ messages in thread
From: Marc Zyngier @ 2021-08-18 14:32 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, oupton,
	qperret, kvm, linux-arm-kernel, kernel-team

On Tue, 17 Aug 2021 09:11:22 +0100,
Fuad Tabba <tabba@google.com> wrote:
> 
> Fix the places in KVM that treat MDCR_EL2 as a 32-bit register.
> More recent features (e.g., FEAT_SPEv1p2) use bits above 31.
> 
> No functional change intended.
> 
> Acked-by: Will Deacon <will@kernel.org>
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>  arch/arm64/include/asm/kvm_arm.h   | 20 ++++++++++----------
>  arch/arm64/include/asm/kvm_asm.h   |  2 +-
>  arch/arm64/include/asm/kvm_host.h  |  2 +-
>  arch/arm64/kvm/debug.c             |  2 +-
>  arch/arm64/kvm/hyp/nvhe/debug-sr.c |  2 +-
>  arch/arm64/kvm/hyp/vhe/debug-sr.c  |  2 +-
>  6 files changed, 15 insertions(+), 15 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
> index d436831dd706..6a523ec83415 100644
> --- a/arch/arm64/include/asm/kvm_arm.h
> +++ b/arch/arm64/include/asm/kvm_arm.h
> @@ -281,18 +281,18 @@
>  /* Hyp Debug Configuration Register bits */
>  #define MDCR_EL2_E2TB_MASK	(UL(0x3))
>  #define MDCR_EL2_E2TB_SHIFT	(UL(24))
> -#define MDCR_EL2_TTRF		(1 << 19)
> -#define MDCR_EL2_TPMS		(1 << 14)
> +#define MDCR_EL2_TTRF		(UL(1) << 19)
> +#define MDCR_EL2_TPMS		(UL(1) << 14)
>  #define MDCR_EL2_E2PB_MASK	(UL(0x3))
>  #define MDCR_EL2_E2PB_SHIFT	(UL(12))
> -#define MDCR_EL2_TDRA		(1 << 11)
> -#define MDCR_EL2_TDOSA		(1 << 10)
> -#define MDCR_EL2_TDA		(1 << 9)
> -#define MDCR_EL2_TDE		(1 << 8)
> -#define MDCR_EL2_HPME		(1 << 7)
> -#define MDCR_EL2_TPM		(1 << 6)
> -#define MDCR_EL2_TPMCR		(1 << 5)
> -#define MDCR_EL2_HPMN_MASK	(0x1F)
> +#define MDCR_EL2_TDRA		(UL(1) << 11)
> +#define MDCR_EL2_TDOSA		(UL(1) << 10)
> +#define MDCR_EL2_TDA		(UL(1) << 9)
> +#define MDCR_EL2_TDE		(UL(1) << 8)
> +#define MDCR_EL2_HPME		(UL(1) << 7)
> +#define MDCR_EL2_TPM		(UL(1) << 6)
> +#define MDCR_EL2_TPMCR		(UL(1) << 5)
> +#define MDCR_EL2_HPMN_MASK	(UL(0x1F))
>  
>  /* For compatibility with fault code shared with 32-bit */
>  #define FSC_FAULT	ESR_ELx_FSC_FAULT
> diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
> index 9f0bf2109be7..63ead9060ab5 100644
> --- a/arch/arm64/include/asm/kvm_asm.h
> +++ b/arch/arm64/include/asm/kvm_asm.h
> @@ -210,7 +210,7 @@ extern u64 __vgic_v3_read_vmcr(void);
>  extern void __vgic_v3_write_vmcr(u32 vmcr);
>  extern void __vgic_v3_init_lrs(void);
>  
> -extern u32 __kvm_get_mdcr_el2(void);
> +extern u64 __kvm_get_mdcr_el2(void);
>  
>  #define __KVM_EXTABLE(from, to)						\
>  	"	.pushsection	__kvm_ex_table, \"a\"\n"		\
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 347781f99b6a..4d2d974c1522 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -289,7 +289,7 @@ struct kvm_vcpu_arch {
>  
>  	/* HYP configuration */
>  	u64 hcr_el2;
> -	u32 mdcr_el2;
> +	u64 mdcr_el2;

This breaks an existing trace in debug.c::kvm_arm_setup_mdcr_el2():

	trace_kvm_arm_set_dreg32("MDCR_EL2", vcpu->arch.mdcr_el2);

which expects a 32bit value. I guess we could add an equivalent 64bit
version, or silently upgrade the tracepoint to take a 64bit value.
None of them are good solutions, but hey, something has to give...

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v4 06/15] KVM: arm64: Restore mdcr_el2 from vcpu
  2021-08-17  8:11 ` [PATCH v4 06/15] KVM: arm64: Restore mdcr_el2 from vcpu Fuad Tabba
  2021-08-18 13:13   ` Will Deacon
@ 2021-08-18 14:42   ` Marc Zyngier
  1 sibling, 0 replies; 30+ messages in thread
From: Marc Zyngier @ 2021-08-18 14:42 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, oupton,
	qperret, kvm, linux-arm-kernel, kernel-team

On Tue, 17 Aug 2021 09:11:25 +0100,
Fuad Tabba <tabba@google.com> wrote:
> 
> On deactivating traps, restore the value of mdcr_el2 from the
> newly created and preserved host value vcpu context, rather than
> directly reading the hardware register.
> 
> Up until and including this patch the two values are the same,
> i.e., the hardware register and the vcpu one. A future patch will
> be changing the value of mdcr_el2 on activating traps, and this
> ensures that its value will be restored.
> 
> No functional change intended.
> 
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>  arch/arm64/include/asm/kvm_host.h       |  5 ++++-
>  arch/arm64/include/asm/kvm_hyp.h        |  2 +-
>  arch/arm64/kvm/hyp/include/hyp/switch.h |  6 +++++-
>  arch/arm64/kvm/hyp/nvhe/switch.c        | 13 +++++--------
>  arch/arm64/kvm/hyp/vhe/switch.c         | 14 +++++---------
>  arch/arm64/kvm/hyp/vhe/sysreg-sr.c      |  2 +-
>  6 files changed, 21 insertions(+), 21 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 4d2d974c1522..76462c6a91ee 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -287,10 +287,13 @@ struct kvm_vcpu_arch {
>  	/* Stage 2 paging state used by the hardware on next switch */
>  	struct kvm_s2_mmu *hw_mmu;
>  
> -	/* HYP configuration */
> +	/* Values of trap registers for the guest. */
>  	u64 hcr_el2;
>  	u64 mdcr_el2;
>  
> +	/* Values of trap registers for the host before guest entry. */
> +	u64 mdcr_el2_host;

This probably should then eventually replace the per-CPU copy of
mdcr_el2 that lives in debug.c, shouldn't it?

> +
>  	/* Exception Information */
>  	struct kvm_vcpu_fault_info fault;
>  
> diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
> index 9d60b3006efc..657d0c94cf82 100644
> --- a/arch/arm64/include/asm/kvm_hyp.h
> +++ b/arch/arm64/include/asm/kvm_hyp.h
> @@ -95,7 +95,7 @@ void __sve_restore_state(void *sve_pffr, u32 *fpsr);
>  
>  #ifndef __KVM_NVHE_HYPERVISOR__
>  void activate_traps_vhe_load(struct kvm_vcpu *vcpu);
> -void deactivate_traps_vhe_put(void);
> +void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu);
>  #endif
>  
>  u64 __guest_enter(struct kvm_vcpu *vcpu);
> diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> index e4a2f295a394..a0e78a6027be 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> @@ -92,11 +92,15 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
>  		write_sysreg(0, pmselr_el0);
>  		write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0);
>  	}
> +
> +	vcpu->arch.mdcr_el2_host = read_sysreg(mdcr_el2);
>  	write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
>  }
>  
> -static inline void __deactivate_traps_common(void)
> +static inline void __deactivate_traps_common(struct kvm_vcpu *vcpu)
>  {
> +	write_sysreg(vcpu->arch.mdcr_el2_host, mdcr_el2);
> +
>  	write_sysreg(0, hstr_el2);
>  	if (kvm_arm_support_pmu_v3())
>  		write_sysreg(0, pmuserenr_el0);
> diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
> index f7af9688c1f7..2ea764a48958 100644
> --- a/arch/arm64/kvm/hyp/nvhe/switch.c
> +++ b/arch/arm64/kvm/hyp/nvhe/switch.c
> @@ -69,12 +69,10 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
>  static void __deactivate_traps(struct kvm_vcpu *vcpu)
>  {
>  	extern char __kvm_hyp_host_vector[];
> -	u64 mdcr_el2, cptr;
> +	u64 cptr;
>  
>  	___deactivate_traps(vcpu);
>  
> -	mdcr_el2 = read_sysreg(mdcr_el2);
> -
>  	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
>  		u64 val;
>  
> @@ -92,13 +90,12 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu)
>  		isb();
>  	}
>  
> -	__deactivate_traps_common();
> +	vcpu->arch.mdcr_el2_host &= MDCR_EL2_HPMN_MASK |
> +				    MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT |
> +				    MDCR_EL2_E2TB_MASK << MDCR_EL2_E2TB_SHIFT;
>  
> -	mdcr_el2 &= MDCR_EL2_HPMN_MASK;
> -	mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
> -	mdcr_el2 |= MDCR_EL2_E2TB_MASK << MDCR_EL2_E2TB_SHIFT;
> +	__deactivate_traps_common(vcpu);
>  
> -	write_sysreg(mdcr_el2, mdcr_el2);

FWIW, I found this whole sequence massively confusing, and it is only
when I came to patch #7 that the various pieces did come together.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v4 10/15] KVM: arm64: Add config register bit definitions
  2021-08-17  8:11 ` [PATCH v4 10/15] KVM: arm64: Add config register bit definitions Fuad Tabba
@ 2021-08-18 15:16   ` Marc Zyngier
  0 siblings, 0 replies; 30+ messages in thread
From: Marc Zyngier @ 2021-08-18 15:16 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, oupton,
	qperret, kvm, linux-arm-kernel, kernel-team

On Tue, 17 Aug 2021 09:11:29 +0100,
Fuad Tabba <tabba@google.com> wrote:
> 
> Add hardware configuration register bit definitions for HCR_EL2
> and MDCR_EL2. Future patches toggle these hyp configuration
> register bits to trap on certain accesses.
> 
> No functional change intended.
> 
> Acked-by: Will Deacon <will@kernel.org>
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>  arch/arm64/include/asm/kvm_arm.h | 22 ++++++++++++++++++++++
>  1 file changed, 22 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
> index a928b2dc0b0f..327120c0089f 100644
> --- a/arch/arm64/include/asm/kvm_arm.h
> +++ b/arch/arm64/include/asm/kvm_arm.h
> @@ -12,8 +12,13 @@
>  #include <asm/types.h>
>  
>  /* Hyp Configuration Register (HCR) bits */
> +
> +#define HCR_TID5	(UL(1) << 58)
> +#define HCR_DCT		(UL(1) << 57)
>  #define HCR_ATA_SHIFT	56
>  #define HCR_ATA		(UL(1) << HCR_ATA_SHIFT)
> +#define HCR_AMVOFFEN	(UL(1) << 51)
> +#define HCR_FIEN	(UL(1) << 47)
>  #define HCR_FWB		(UL(1) << 46)
>  #define HCR_API		(UL(1) << 41)
>  #define HCR_APK		(UL(1) << 40)
> @@ -56,6 +61,7 @@
>  #define HCR_PTW		(UL(1) << 2)
>  #define HCR_SWIO	(UL(1) << 1)
>  #define HCR_VM		(UL(1) << 0)
> +#define HCR_RES0	((UL(1) << 48) | (UL(1) << 39))
>  
>  /*
>   * The bits we set in HCR:
> @@ -277,11 +283,21 @@
>  #define CPTR_EL2_TZ	(1 << 8)
>  #define CPTR_NVHE_EL2_RES1	0x000032ff /* known RES1 bits in CPTR_EL2 (nVHE) */
>  #define CPTR_EL2_DEFAULT	CPTR_NVHE_EL2_RES1
> +#define CPTR_NVHE_EL2_RES0	(GENMASK(63, 32) |	\
> +				 GENMASK(29, 21) |	\
> +				 GENMASK(19, 14) |	\
> +				 BIT(11))
>  
>  /* Hyp Debug Configuration Register bits */
>  #define MDCR_EL2_E2TB_MASK	(UL(0x3))
>  #define MDCR_EL2_E2TB_SHIFT	(UL(24))
> +#define MDCR_EL2_HPMFZS		(UL(1) << 36)
> +#define MDCR_EL2_HPMFZO		(UL(1) << 29)
> +#define MDCR_EL2_MTPME		(UL(1) << 28)
> +#define MDCR_EL2_TDCC		(UL(1) << 27)
> +#define MDCR_EL2_HCCD		(UL(1) << 23)

Nit: If you're aiming for completeness, you're missing MDCR_EL2.HLP
(bit 26).

>  #define MDCR_EL2_TTRF		(UL(1) << 19)
> +#define MDCR_EL2_HPMD		(UL(1) << 17)
>  #define MDCR_EL2_TPMS		(UL(1) << 14)
>  #define MDCR_EL2_E2PB_MASK	(UL(0x3))
>  #define MDCR_EL2_E2PB_SHIFT	(UL(12))
> @@ -293,6 +309,12 @@
>  #define MDCR_EL2_TPM		(UL(1) << 6)
>  #define MDCR_EL2_TPMCR		(UL(1) << 5)
>  #define MDCR_EL2_HPMN_MASK	(UL(0x1F))
> +#define MDCR_EL2_RES0		(GENMASK(63, 37) |	\
> +				 GENMASK(35, 30) |	\
> +				 GENMASK(25, 24) |	\
> +				 GENMASK(22, 20) |	\
> +				 BIT(18) |		\
> +				 GENMASK(16, 15))
>  
>  /* For compatibility with fault code shared with 32-bit */
>  #define FSC_FAULT	ESR_ELx_FSC_FAULT

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v4 11/15] KVM: arm64: Guest exit handlers for nVHE hyp
  2021-08-17  8:11 ` [PATCH v4 11/15] KVM: arm64: Guest exit handlers for nVHE hyp Fuad Tabba
@ 2021-08-18 16:45   ` Marc Zyngier
  2021-08-19 14:35     ` Marc Zyngier
  0 siblings, 1 reply; 30+ messages in thread
From: Marc Zyngier @ 2021-08-18 16:45 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, oupton,
	qperret, kvm, linux-arm-kernel, kernel-team

On Tue, 17 Aug 2021 09:11:30 +0100,
Fuad Tabba <tabba@google.com> wrote:
> 
> Add an array of pointers to handlers for various trap reasons in
> nVHE code.
> 
> The current code selects how to fixup a guest on exit based on a
> series of if/else statements. Future patches will also require
> different handling for guest exits. Create an array of handlers
> to consolidate them.
> 
> No functional change intended as the array isn't populated yet.
> 
> Acked-by: Will Deacon <will@kernel.org>
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>  arch/arm64/kvm/hyp/include/hyp/switch.h | 43 +++++++++++++++++++++++++
>  arch/arm64/kvm/hyp/nvhe/switch.c        | 33 +++++++++++++++++++
>  2 files changed, 76 insertions(+)
> 
> diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> index a0e78a6027be..5a2b89b96c67 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> @@ -409,6 +409,46 @@ static inline bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu)
>  	return true;
>  }
>  
> +typedef int (*exit_handle_fn)(struct kvm_vcpu *);

This returns an int...

> +
> +exit_handle_fn kvm_get_nvhe_exit_handler(struct kvm_vcpu *vcpu);
> +
> +static exit_handle_fn kvm_get_hyp_exit_handler(struct kvm_vcpu *vcpu)
> +{
> +	return is_nvhe_hyp_code() ? kvm_get_nvhe_exit_handler(vcpu) : NULL;
> +}
> +
> +/*
> + * Allow the hypervisor to handle the exit with an exit handler if it has one.
> + *
> + * Returns true if the hypervisor handled the exit, and control should go back
> + * to the guest, or false if it hasn't.
> + */
> +static bool kvm_hyp_handle_exit(struct kvm_vcpu *vcpu)
> +{
> +	bool is_handled = false;

... which you then implicitly cast as a bool.

> +	exit_handle_fn exit_handler = kvm_get_hyp_exit_handler(vcpu);
> +
> +	if (exit_handler) {
> +		/*
> +		 * There's limited vcpu context here since it's not synced yet.
> +		 * Ensure that relevant vcpu context that might be used by the
> +		 * Ensure that any vcpu context the exit_handler might use is
> +		 * in sync before it's called, and written back afterwards if
> +		 * it handled the exit.
> +		*vcpu_pc(vcpu) = read_sysreg_el2(SYS_ELR);
> +		*vcpu_cpsr(vcpu) = read_sysreg_el2(SYS_SPSR);
> +
> +		is_handled = exit_handler(vcpu);

What does 'is_handled' mean here? By definition, any trap *must* be
handled, one way or another. By the look of it, what you really mean
is something like "I have updated the vcpu state and you'd better
reload it". Is that what it means?

> +
> +		if (is_handled) {
> +			write_sysreg_el2(*vcpu_pc(vcpu), SYS_ELR);
> +			write_sysreg_el2(*vcpu_cpsr(vcpu), SYS_SPSR);
> +		}
> +	}
> +
> +	return is_handled;
> +}

All these functions really should be marked inline. Have you checked
how this expands on VHE? I think some compilers could be pretty
unhappy about the undefined symbol in kvm_get_hyp_exit_handler().

It is also unfortunate that we get a bunch of tests for various
flavours of traps (FP, PAuth, page faults...), only to hit yet another
decoding tree. Is there a way we could use this infrastructure for
everything?

> +
>  /*
>   * Return true when we were able to fixup the guest exit and should return to
>   * the guest, false when we should restore the host state and return to the
> @@ -496,6 +536,9 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
>  			goto guest;
>  	}
>  
> +	/* Check if there's an exit handler and allow it to handle the exit. */
> +	if (kvm_hyp_handle_exit(vcpu))
> +		goto guest;
>  exit:
>  	/* Return to the host kernel and handle the exit */
>  	return false;
> diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
> index 86f3d6482935..b7f25307a7b9 100644
> --- a/arch/arm64/kvm/hyp/nvhe/switch.c
> +++ b/arch/arm64/kvm/hyp/nvhe/switch.c
> @@ -158,6 +158,39 @@ static void __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt)
>  		write_sysreg(pmu->events_host, pmcntenset_el0);
>  }
>  
> +static exit_handle_fn hyp_exit_handlers[] = {
> +	[0 ... ESR_ELx_EC_MAX]		= NULL,
> +	[ESR_ELx_EC_WFx]		= NULL,
> +	[ESR_ELx_EC_CP15_32]		= NULL,
> +	[ESR_ELx_EC_CP15_64]		= NULL,
> +	[ESR_ELx_EC_CP14_MR]		= NULL,
> +	[ESR_ELx_EC_CP14_LS]		= NULL,
> +	[ESR_ELx_EC_CP14_64]		= NULL,
> +	[ESR_ELx_EC_HVC32]		= NULL,
> +	[ESR_ELx_EC_SMC32]		= NULL,
> +	[ESR_ELx_EC_HVC64]		= NULL,
> +	[ESR_ELx_EC_SMC64]		= NULL,
> +	[ESR_ELx_EC_SYS64]		= NULL,
> +	[ESR_ELx_EC_SVE]		= NULL,
> +	[ESR_ELx_EC_IABT_LOW]		= NULL,
> +	[ESR_ELx_EC_DABT_LOW]		= NULL,
> +	[ESR_ELx_EC_SOFTSTP_LOW]	= NULL,
> +	[ESR_ELx_EC_WATCHPT_LOW]	= NULL,
> +	[ESR_ELx_EC_BREAKPT_LOW]	= NULL,
> +	[ESR_ELx_EC_BKPT32]		= NULL,
> +	[ESR_ELx_EC_BRK64]		= NULL,
> +	[ESR_ELx_EC_FP_ASIMD]		= NULL,
> +	[ESR_ELx_EC_PAC]		= NULL,

You can safely drop all these and only keep the top one for now. This
will also keep the idiot robot at bay until the next patch... ;-)

> +};
> +
> +exit_handle_fn kvm_get_nvhe_exit_handler(struct kvm_vcpu *vcpu)
> +{
> +	u32 esr = kvm_vcpu_get_esr(vcpu);
> +	u8 esr_ec = ESR_ELx_EC(esr);
> +
> +	return hyp_exit_handlers[esr_ec];
> +}
> +
>  /* Switch to the guest for legacy non-VHE systems */
>  int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
>  {

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v4 15/15] KVM: arm64: Handle protected guests at 32 bits
  2021-08-17  8:11 ` [PATCH v4 15/15] KVM: arm64: Handle protected guests at 32 bits Fuad Tabba
@ 2021-08-19  8:10   ` Oliver Upton
  2021-08-23 10:25     ` Fuad Tabba
  0 siblings, 1 reply; 30+ messages in thread
From: Oliver Upton @ 2021-08-19  8:10 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, maz, will, james.morse, Alexandru.Elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team

Hi Fuad,

On Tue, Aug 17, 2021 at 1:12 AM Fuad Tabba <tabba@google.com> wrote:
>
> Protected KVM does not support protected AArch32 guests. However,
> it is possible for the guest to force itself to run in AArch32,
> potentially causing problems. Add an extra check so that if the
> hypervisor catches the guest doing that, it can prevent the guest
> from running again by resetting vcpu->arch.target and returning
> ARM_EXCEPTION_IL.
>
> If this were to happen, the VMM can try to fix it by
> re-initializing the vcpu with KVM_ARM_VCPU_INIT; however, this is
> likely not possible for protected VMs.
>
> Adapted from commit 22f553842b14 ("KVM: arm64: Handle Asymmetric
> AArch32 systems")
>
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>  arch/arm64/kvm/hyp/nvhe/switch.c | 37 ++++++++++++++++++++++++++++++++
>  1 file changed, 37 insertions(+)
>
> diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
> index 398e62098898..0c24b7f473bf 100644
> --- a/arch/arm64/kvm/hyp/nvhe/switch.c
> +++ b/arch/arm64/kvm/hyp/nvhe/switch.c
> @@ -20,6 +20,7 @@
>  #include <asm/kprobes.h>
>  #include <asm/kvm_asm.h>
>  #include <asm/kvm_emulate.h>
> +#include <asm/kvm_fixed_config.h>
>  #include <asm/kvm_hyp.h>
>  #include <asm/kvm_mmu.h>
>  #include <asm/fpsimd.h>
> @@ -195,6 +196,39 @@ exit_handle_fn kvm_get_nvhe_exit_handler(struct kvm_vcpu *vcpu)
>                 return NULL;
>  }
>
> +/*
> + * Some guests (e.g., protected VMs) might not be allowed to run in AArch32. The
> + * check below is based on the one in kvm_arch_vcpu_ioctl_run().
> + * The ARMv8 architecture does not give the hypervisor a mechanism to prevent a
> + * guest from dropping to AArch32 EL0 if implemented by the CPU. If the
> + * hypervisor spots a guest in such a state, ensure it is handled, and don't
> + * trust the host to spot or fix it.
> + *
> + * Returns true if the check passed and the guest run loop can continue, or
> + * false if the guest should exit to the host.
> + */
> +static bool check_aarch32_guest(struct kvm_vcpu *vcpu, u64 *exit_code)

This does a bit more than just check & return, so maybe call it
handle_aarch32_guest()?

> +{
> +       if (kvm_vm_is_protected(kern_hyp_va(vcpu->kvm)) &&

maybe initialize a local with a hyp pointer to the kvm structure.

> +           vcpu_mode_is_32bit(vcpu) &&
> +           FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL0),
> +                                        PVM_ID_AA64PFR0_RESTRICT_UNSIGNED) <
> +               ID_AA64PFR0_ELx_32BIT_64BIT) {

It may be more readable to initialize a local variable with this
feature check, i.e.:

bool aarch32_allowed = FIELD_GET(...) == ID_AA64PFR0_ELx_32BIT_64BIT;

and then:

  if (kvm_vm_is_protected(kvm) && vcpu_mode_is_32bit(vcpu) &&
!aarch32_allowed) {
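
Putting the two suggestions together, a rough and untested sketch (the
>= comparison simply inverts the < check above):

static bool handle_aarch32_guest(struct kvm_vcpu *vcpu, u64 *exit_code)
{
	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
	bool aarch32_allowed =
		FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL0),
			  PVM_ID_AA64PFR0_RESTRICT_UNSIGNED) >=
		ID_AA64PFR0_ELx_32BIT_64BIT;

	if (kvm_vm_is_protected(kvm) && vcpu_mode_is_32bit(vcpu) &&
	    !aarch32_allowed) {
		/* Invalidate the vcpu, as the patch does below. */
		vcpu->arch.target = -1;
		*exit_code = ARM_EXCEPTION_IL;
		return false;
	}

	return true;
}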

> +               /*
> +                * As we have caught the guest red-handed, decide that it isn't
> +                * fit for purpose anymore by making the vcpu invalid. The VMM
> +                * can try and fix it by re-initializing the vcpu with
> +                * KVM_ARM_VCPU_INIT; however, this is likely not possible for
> +                * protected VMs.
> +                */
> +               vcpu->arch.target = -1;
> +               *exit_code = ARM_EXCEPTION_IL;
> +               return false;
> +       }
> +
> +       return true;
> +}
> +
>  /* Switch to the guest for legacy non-VHE systems */
>  int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
>  {
> @@ -255,6 +289,9 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
>                 /* Jump in the fire! */
>                 exit_code = __guest_enter(vcpu);
>
> +               if (unlikely(!check_aarch32_guest(vcpu, &exit_code)))
> +                       break;
> +
>                 /* And we're baaack! */
>         } while (fixup_guest_exit(vcpu, &exit_code));
>
> --
> 2.33.0.rc1.237.g0d66db33f3-goog
>


* Re: [PATCH v4 11/15] KVM: arm64: Guest exit handlers for nVHE hyp
  2021-08-18 16:45   ` Marc Zyngier
@ 2021-08-19 14:35     ` Marc Zyngier
  2021-08-23 10:21       ` Fuad Tabba
  0 siblings, 1 reply; 30+ messages in thread
From: Marc Zyngier @ 2021-08-19 14:35 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, oupton,
	qperret, kvm, linux-arm-kernel, kernel-team

Hi Fuad,

On Wed, 18 Aug 2021 17:45:50 +0100,
Marc Zyngier <maz@kernel.org> wrote:
> 
> On Tue, 17 Aug 2021 09:11:30 +0100,
> Fuad Tabba <tabba@google.com> wrote:
> > 
> > Add an array of pointers to handlers for various trap reasons in
> > nVHE code.
> > 
> > The current code selects how to fix up a guest on exit based on a
> > series of if/else statements. Future patches will also require
> > different handling for guest exits. Create an array of handlers
> > to consolidate them.
> > 
> > No functional change intended as the array isn't populated yet.
> > 
> > Acked-by: Will Deacon <will@kernel.org>
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> > ---
> >  arch/arm64/kvm/hyp/include/hyp/switch.h | 43 +++++++++++++++++++++++++
> >  arch/arm64/kvm/hyp/nvhe/switch.c        | 33 +++++++++++++++++++
> >  2 files changed, 76 insertions(+)
> > 
> > diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> > index a0e78a6027be..5a2b89b96c67 100644
> > --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> > +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> > @@ -409,6 +409,46 @@ static inline bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu)
> >  	return true;
> >  }
> >  
> > +typedef int (*exit_handle_fn)(struct kvm_vcpu *);
> 
> This returns an int...
> 
> > +
> > +exit_handle_fn kvm_get_nvhe_exit_handler(struct kvm_vcpu *vcpu);
> > +
> > +static exit_handle_fn kvm_get_hyp_exit_handler(struct kvm_vcpu *vcpu)
> > +{
> > +	return is_nvhe_hyp_code() ? kvm_get_nvhe_exit_handler(vcpu) : NULL;
> > +}
> > +
> > +/*
> > + * Allow the hypervisor to handle the exit with an exit handler if it has one.
> > + *
> > + * Returns true if the hypervisor handled the exit, and control should go back
> > + * to the guest, or false if it hasn't.
> > + */
> > +static bool kvm_hyp_handle_exit(struct kvm_vcpu *vcpu)
> > +{
> > +	bool is_handled = false;
> 
> ... which you then implicitly cast as a bool.
> 
> > +	exit_handle_fn exit_handler = kvm_get_hyp_exit_handler(vcpu);
> > +
> > +	if (exit_handler) {
> > +		/*
> > +		 * There's limited vcpu context here since it's not synced yet.
> > +		 * Ensure that relevant vcpu context that might be used by the
> > +		 * exit_handler is in sync before it's called, and written back if handled.
> > +		 */
> > +		*vcpu_pc(vcpu) = read_sysreg_el2(SYS_ELR);
> > +		*vcpu_cpsr(vcpu) = read_sysreg_el2(SYS_SPSR);
> > +
> > +		is_handled = exit_handler(vcpu);
> 
> What does 'is_handled' mean here? By definition, any trap *must* be
> handled, one way or another. By the look of it, what you really mean
> is something like "I have updated the vcpu state and you'd better
> reload it". Is that what it means?
> 
> > +
> > +		if (is_handled) {
> > +			write_sysreg_el2(*vcpu_pc(vcpu), SYS_ELR);
> > +			write_sysreg_el2(*vcpu_cpsr(vcpu), SYS_SPSR);
> > +		}
> > +	}
> > +
> > +	return is_handled;
> > +}
> 
> All these functions really should be marked inline. Have you checked
> how this expands on VHE? I think some compilers could be pretty
> unhappy about the undefined symbol in kvm_get_hyp_exit_handler().
> 
> It is also unfortunate that we get a bunch of tests for various
> flavours of traps (FP, PAuth, page faults...), only to hit yet another
> decoding tree. Is there a way we could use this infrastructure for
> everything?

I realised that I wasn't very forthcoming here. I've decided to put
the code where my mouth is and pushed out a branch [1] with your first
10 patches, followed by my own take on this particular problem. It
compiles, and even managed to boot a Debian guest on a nVHE box.

As you can see, most of the early exit handling is now moved to
specific handlers, unifying the handling. For the protected mode, you
can provide your own handler array (just hack
kvm_get_exit_handler_array() to return something else), which will do
the right thing as long as you call into the existing handlers first.

When it comes to the ELR/SPSR handling, it is better left to the
individual handlers (which we already do in some cases, see how we
skip instructions, for example).
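
To illustrate the idea with a sketch (pvm_exit_handlers is a made-up
name here, not necessarily what the branch does):

static exit_handle_fn *kvm_get_exit_handler_array(struct kvm_vcpu *vcpu)
{
	if (unlikely(kvm_vm_is_protected(kern_hyp_va(vcpu->kvm))))
		return pvm_exit_handlers;	/* pKVM-specific table */

	return hyp_exit_handlers;
}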

Please let me know what you think.

Thanks,

	M.

[1] https://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git/log/?h=kvm-arm64/pkvm-fixed-features

-- 
Without deviation from the norm, progress is not possible.


* Re: [PATCH v4 00/15] KVM: arm64: Fixed features for protected VMs
  2021-08-17  8:11 [PATCH v4 00/15] KVM: arm64: Fixed features for protected VMs Fuad Tabba
                   ` (14 preceding siblings ...)
  2021-08-17  8:11 ` [PATCH v4 15/15] KVM: arm64: Handle protected guests at 32 bits Fuad Tabba
@ 2021-08-20 10:34 ` Marc Zyngier
  2021-08-23 10:23   ` Fuad Tabba
  15 siblings, 1 reply; 30+ messages in thread
From: Marc Zyngier @ 2021-08-20 10:34 UTC (permalink / raw)
  To: kvmarm, Fuad Tabba
  Cc: oupton, james.morse, drjones, mark.rutland, alexandru.elisei,
	kvm, linux-arm-kernel, kernel-team, suzuki.poulose, will,
	pbonzini, christoffer.dall, qperret

On Tue, 17 Aug 2021 09:11:19 +0100, Fuad Tabba wrote:
> Changes since v3 [1]:
> - Redid calculating restricted values of feature register fields, ensuring that
>   the code distinguishes between unsigned and (potentially in the future)
>   signed fields (Will)
> - Refactoring and fixes (Drew, Will)
> - More documentation and comments (Oliver, Will)
> - Dropped patch "Restrict protected VM capabilities", since it should come with
>   or after the user ABI series for pKVM (Will)
> - Carried Will's acks
> 
> [...]

I've taken the first 10 patches of this series in order to
progress it. I also stashed a fixlet on top to address the
tracepoint issue.

Hopefully we can resolve the rest of the issues quickly.

[01/15] KVM: arm64: placeholder to check if VM is protected
        commit: 2ea7f655800b00b109951f22539fe2025add210b
[02/15] KVM: arm64: Remove trailing whitespace in comment
        commit: e6bc555c96990046d680ff92c8e2e7b6b43b509f
[03/15] KVM: arm64: MDCR_EL2 is a 64-bit register
        commit: d6c850dd6ce9ce4b410142a600d8c34dc041d860
[04/15] KVM: arm64: Fix names of config register fields
        commit: dabb1667d8573302712a75530cccfee8f3ffff84
[05/15] KVM: arm64: Refactor sys_regs.h,c for nVHE reuse
        commit: f76f89e2f73d93720cfcad7fb7b24d022b2846bf
[06/15] KVM: arm64: Restore mdcr_el2 from vcpu
        commit: 1460b4b25fde52cbee746c11a4b1d3185f2e2847
[07/15] KVM: arm64: Keep mdcr_el2's value as set by __init_el2_debug
        commit: 12849badc6d2456f15f8f2c93037628d5176810b
[08/15] KVM: arm64: Track value of cptr_el2 in struct kvm_vcpu_arch
        commit: cd496228fd8de2e82b6636d3d89105631ea2b69c
[09/15] KVM: arm64: Add feature register flag definitions
        commit: 95b54c3e4c92b9185b15c83e8baab9ba312195f6
[10/15] KVM: arm64: Add config register bit definitions
        commit: 2d701243b9f231b5d7f9a8cb81870650d3eb32bc

Cheers,

	M.
-- 
Without deviation from the norm, progress is not possible.




* Re: [PATCH v4 11/15] KVM: arm64: Guest exit handlers for nVHE hyp
  2021-08-19 14:35     ` Marc Zyngier
@ 2021-08-23 10:21       ` Fuad Tabba
  2021-08-23 12:10         ` Marc Zyngier
  0 siblings, 1 reply; 30+ messages in thread
From: Fuad Tabba @ 2021-08-23 10:21 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, oupton,
	qperret, kvm, linux-arm-kernel, kernel-team

Hi Marc,

On Thu, Aug 19, 2021 at 3:36 PM Marc Zyngier <maz@kernel.org> wrote:
>
> Hi Fuad,
>
> On Wed, 18 Aug 2021 17:45:50 +0100,
> Marc Zyngier <maz@kernel.org> wrote:
> >
> > On Tue, 17 Aug 2021 09:11:30 +0100,
> > Fuad Tabba <tabba@google.com> wrote:
> > >
> > > Add an array of pointers to handlers for various trap reasons in
> > > nVHE code.
> > >
> > > The current code selects how to fix up a guest on exit based on a
> > > series of if/else statements. Future patches will also require
> > > different handling for guest exits. Create an array of handlers
> > > to consolidate them.
> > >
> > > No functional change intended as the array isn't populated yet.
> > >
> > > Acked-by: Will Deacon <will@kernel.org>
> > > Signed-off-by: Fuad Tabba <tabba@google.com>
> > > ---
> > >  arch/arm64/kvm/hyp/include/hyp/switch.h | 43 +++++++++++++++++++++++++
> > >  arch/arm64/kvm/hyp/nvhe/switch.c        | 33 +++++++++++++++++++
> > >  2 files changed, 76 insertions(+)
> > >
> > > diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> > > index a0e78a6027be..5a2b89b96c67 100644
> > > --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> > > +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> > > @@ -409,6 +409,46 @@ static inline bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu)
> > >     return true;
> > >  }
> > >
> > > +typedef int (*exit_handle_fn)(struct kvm_vcpu *);
> >
> > This returns an int...
> >
> > > +
> > > +exit_handle_fn kvm_get_nvhe_exit_handler(struct kvm_vcpu *vcpu);
> > > +
> > > +static exit_handle_fn kvm_get_hyp_exit_handler(struct kvm_vcpu *vcpu)
> > > +{
> > > +   return is_nvhe_hyp_code() ? kvm_get_nvhe_exit_handler(vcpu) : NULL;
> > > +}
> > > +
> > > +/*
> > > + * Allow the hypervisor to handle the exit with an exit handler if it has one.
> > > + *
> > > + * Returns true if the hypervisor handled the exit, and control should go back
> > > + * to the guest, or false if it hasn't.
> > > + */
> > > +static bool kvm_hyp_handle_exit(struct kvm_vcpu *vcpu)
> > > +{
> > > +   bool is_handled = false;
> >
> > ... which you then implicitly cast as a bool.
> >
> > > +   exit_handle_fn exit_handler = kvm_get_hyp_exit_handler(vcpu);
> > > +
> > > +   if (exit_handler) {
> > > +           /*
> > > +            * There's limited vcpu context here since it's not synced yet.
> > > +            * Ensure that relevant vcpu context that might be used by the
> > > +            * exit_handler is in sync before it's called, and written back if handled.
> > > +            */
> > > +           *vcpu_pc(vcpu) = read_sysreg_el2(SYS_ELR);
> > > +           *vcpu_cpsr(vcpu) = read_sysreg_el2(SYS_SPSR);
> > > +
> > > +           is_handled = exit_handler(vcpu);
> >
> > What does 'is_handled' mean here? By definition, any trap *must* be
> > handled, one way or another. By the look of it, what you really mean
> > is something like "I have updated the vcpu state and you'd better
> > reload it". Is that what it means?
> >
> > > +
> > > +           if (is_handled) {
> > > +                   write_sysreg_el2(*vcpu_pc(vcpu), SYS_ELR);
> > > +                   write_sysreg_el2(*vcpu_cpsr(vcpu), SYS_SPSR);
> > > +           }
> > > +   }
> > > +
> > > +   return is_handled;
> > > +}
> >
> > All these functions really should be marked inline. Have you checked
> > how this expands on VHE? I think some compilers could be pretty
> > unhappy about the undefined symbol in kvm_get_hyp_exit_handler().
> >
> > It is also unfortunate that we get a bunch of tests for various
> > flavours of traps (FP, PAuth, page faults...), only to hit yet another
> > decoding tree. Is there a way we could use this infrastructure for
> > everything?
>
> I realised that I wasn't very forthcoming here. I've decided to put
> the code where my mouth is and pushed out a branch [1] with your first
> 10 patches, followed by my own take on this particular problem. It
> compiles, and even managed to boot a Debian guest on a nVHE box.
>
> As you can see, most of the early exit handling is now moved to
> specific handlers, unifying the handling. For the protected mode, you
> can provide your own handler array (just hack
> kvm_get_exit_handler_array() to return something else), which will do
> the right thing as long as you call into the existing handlers first.
> When it comes to the ELR/SPSR handling, it is better left to the
> individual handlers (which we already do in some cases, see how we
> skip instructions, for example).
> Please let me know what you think.

Thanks a lot for this and sorry for being late to reply. I've been travelling.

I think that your proposal looks great. All handling is consolidated
now and handling for protected VMs can just be added on top. There are
some small issues with what parameters we need (e.g., passing struct
kvm to kvm_get_exit_handler_array), but I will sort them out and
submit them in the next round.
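
For example (purely illustrative, the final signature may well differ):

exit_handle_fn *kvm_get_exit_handler_array(struct kvm *kvm);

i.e., pass the hyp-translated struct kvm pointer in once instead of
re-deriving it from the vcpu on every exit.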

Cheers,
/fuad

> Thanks,
>
>         M.
>
> [1] https://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git/log/?h=kvm-arm64/pkvm-fixed-features
>
> --
> Without deviation from the norm, progress is not possible.


* Re: [PATCH v4 00/15] KVM: arm64: Fixed features for protected VMs
  2021-08-20 10:34 ` [PATCH v4 00/15] KVM: arm64: Fixed features for protected VMs Marc Zyngier
@ 2021-08-23 10:23   ` Fuad Tabba
  0 siblings, 0 replies; 30+ messages in thread
From: Fuad Tabba @ 2021-08-23 10:23 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, oupton, james.morse, drjones, mark.rutland,
	alexandru.elisei, kvm, linux-arm-kernel, kernel-team,
	suzuki.poulose, will, pbonzini, christoffer.dall, qperret

Hi Marc,

On Fri, Aug 20, 2021 at 11:34 AM Marc Zyngier <maz@kernel.org> wrote:
>
> On Tue, 17 Aug 2021 09:11:19 +0100, Fuad Tabba wrote:
> > Changes since v3 [1]:
> > - Redid calculating restricted values of feature register fields, ensuring that
> >   the code distinguishes between unsigned and (potentially in the future)
> >   signed fields (Will)
> > - Refactoring and fixes (Drew, Will)
> > - More documentation and comments (Oliver, Will)
> > - Dropped patch "Restrict protected VM capabilities", since it should come with
> >   or after the user ABI series for pKVM (Will)
> > - Carried Will's acks
> >
> > [...]
>
> I've taken the first 10 patches of this series in order to
> progress it. I also stashed a fixlet on top to address the
> tracepoint issue.
>
> Hopefully we can resolve the rest of the issues quickly.

Thanks. I am working on a patch series with the remaining patches to
address the issues. Stay tuned :)

Cheers,
/fuad

> [01/15] KVM: arm64: placeholder to check if VM is protected
>         commit: 2ea7f655800b00b109951f22539fe2025add210b
> [02/15] KVM: arm64: Remove trailing whitespace in comment
>         commit: e6bc555c96990046d680ff92c8e2e7b6b43b509f
> [03/15] KVM: arm64: MDCR_EL2 is a 64-bit register
>         commit: d6c850dd6ce9ce4b410142a600d8c34dc041d860
> [04/15] KVM: arm64: Fix names of config register fields
>         commit: dabb1667d8573302712a75530cccfee8f3ffff84
> [05/15] KVM: arm64: Refactor sys_regs.h,c for nVHE reuse
>         commit: f76f89e2f73d93720cfcad7fb7b24d022b2846bf
> [06/15] KVM: arm64: Restore mdcr_el2 from vcpu
>         commit: 1460b4b25fde52cbee746c11a4b1d3185f2e2847
> [07/15] KVM: arm64: Keep mdcr_el2's value as set by __init_el2_debug
>         commit: 12849badc6d2456f15f8f2c93037628d5176810b
> [08/15] KVM: arm64: Track value of cptr_el2 in struct kvm_vcpu_arch
>         commit: cd496228fd8de2e82b6636d3d89105631ea2b69c
> [09/15] KVM: arm64: Add feature register flag definitions
>         commit: 95b54c3e4c92b9185b15c83e8baab9ba312195f6
> [10/15] KVM: arm64: Add config register bit definitions
>         commit: 2d701243b9f231b5d7f9a8cb81870650d3eb32bc
>
> Cheers,
>
>         M.
> --
> Without deviation from the norm, progress is not possible.
>
>


* Re: [PATCH v4 15/15] KVM: arm64: Handle protected guests at 32 bits
  2021-08-19  8:10   ` Oliver Upton
@ 2021-08-23 10:25     ` Fuad Tabba
  0 siblings, 0 replies; 30+ messages in thread
From: Fuad Tabba @ 2021-08-23 10:25 UTC (permalink / raw)
  To: Oliver Upton
  Cc: kvmarm, maz, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, qperret, kvm,
	linux-arm-kernel, kernel-team

Hi Oliver,

On Thu, Aug 19, 2021 at 9:10 AM Oliver Upton <oupton@google.com> wrote:
>
> Hi Fuad,
>
> On Tue, Aug 17, 2021 at 1:12 AM Fuad Tabba <tabba@google.com> wrote:
> >
> > Protected KVM does not support protected AArch32 guests. However,
> > it is possible for the guest to force itself to run AArch32, potentially
> > causing problems. Add an extra check so that if the hypervisor
> > catches the guest doing that, it can prevent the guest from
> > running again by resetting vcpu->arch.target and returning
> > ARM_EXCEPTION_IL.
> >
> > If this were to happen, the VMM can try to fix it by re-initializing
> > the vcpu with KVM_ARM_VCPU_INIT; however, this is likely not possible
> > for protected VMs.
> >
> > Adapted from commit 22f553842b14 ("KVM: arm64: Handle Asymmetric
> > AArch32 systems")
> >
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> > ---
> >  arch/arm64/kvm/hyp/nvhe/switch.c | 37 ++++++++++++++++++++++++++++++++
> >  1 file changed, 37 insertions(+)
> >
> > diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
> > index 398e62098898..0c24b7f473bf 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/switch.c
> > +++ b/arch/arm64/kvm/hyp/nvhe/switch.c
> > @@ -20,6 +20,7 @@
> >  #include <asm/kprobes.h>
> >  #include <asm/kvm_asm.h>
> >  #include <asm/kvm_emulate.h>
> > +#include <asm/kvm_fixed_config.h>
> >  #include <asm/kvm_hyp.h>
> >  #include <asm/kvm_mmu.h>
> >  #include <asm/fpsimd.h>
> > @@ -195,6 +196,39 @@ exit_handle_fn kvm_get_nvhe_exit_handler(struct kvm_vcpu *vcpu)
> >                 return NULL;
> >  }
> >
> > +/*
> > + * Some guests (e.g., protected VMs) might not be allowed to run in AArch32. The
> > + * check below is based on the one in kvm_arch_vcpu_ioctl_run().
> > + * The ARMv8 architecture does not give the hypervisor a mechanism to prevent a
> > + * guest from dropping to AArch32 EL0 if implemented by the CPU. If the
> > + * hypervisor spots a guest in such a state, ensure it is handled, and don't
> > + * trust the host to spot or fix it.
> > + *
> > + * Returns true if the check passed and the guest run loop can continue, or
> > + * false if the guest should exit to the host.
> > + */
> > +static bool check_aarch32_guest(struct kvm_vcpu *vcpu, u64 *exit_code)
>
> This does a bit more than just check & return, so maybe call it
> handle_aarch32_guest()?
>
> > +{
> > +       if (kvm_vm_is_protected(kern_hyp_va(vcpu->kvm)) &&
>
> maybe initialize a local with a hyp pointer to the kvm structure.

Will do.

> > +           vcpu_mode_is_32bit(vcpu) &&
> > +           FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL0),
> > +                                        PVM_ID_AA64PFR0_RESTRICT_UNSIGNED) <
> > +               ID_AA64PFR0_ELx_32BIT_64BIT) {
>
> It may be more readable to initialize a local variable with this
> feature check, i.e.:
>
> bool aarch32_allowed = FIELD_GET(...) == ID_AA64PFR0_ELx_32BIT_64BIT;
>
> and then:
>
>   if (kvm_vm_is_protected(kvm) && vcpu_mode_is_32bit(vcpu) &&
> !aarch32_allowed) {

I agree.

Thanks,
/fuad

> > +               /*
> > +                * As we have caught the guest red-handed, decide that it isn't
> > +                * fit for purpose anymore by making the vcpu invalid. The VMM
> > +                * can try and fix it by re-initializing the vcpu with
> > +                * KVM_ARM_VCPU_INIT, however, this is likely not possible for
> > +                * protected VMs.
> > +                */
> > +               vcpu->arch.target = -1;
> > +               *exit_code = ARM_EXCEPTION_IL;
> > +               return false;
> > +       }
> > +
> > +       return true;
> > +}
> > +
> >  /* Switch to the guest for legacy non-VHE systems */
> >  int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
> >  {
> > @@ -255,6 +289,9 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
> >                 /* Jump in the fire! */
> >                 exit_code = __guest_enter(vcpu);
> >
> > +               if (unlikely(!check_aarch32_guest(vcpu, &exit_code)))
> > +                       break;
> > +
> >                 /* And we're baaack! */
> >         } while (fixup_guest_exit(vcpu, &exit_code));
> >
> > --
> > 2.33.0.rc1.237.g0d66db33f3-goog
> >


* Re: [PATCH v4 11/15] KVM: arm64: Guest exit handlers for nVHE hyp
  2021-08-23 10:21       ` Fuad Tabba
@ 2021-08-23 12:10         ` Marc Zyngier
  0 siblings, 0 replies; 30+ messages in thread
From: Marc Zyngier @ 2021-08-23 12:10 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, will, james.morse, alexandru.elisei, suzuki.poulose,
	mark.rutland, christoffer.dall, pbonzini, drjones, oupton,
	qperret, kvm, linux-arm-kernel, kernel-team

Hi Fuad,

On Mon, 23 Aug 2021 11:21:05 +0100,
Fuad Tabba <tabba@google.com> wrote:
> 
> Hi Marc,
> 
> On Thu, Aug 19, 2021 at 3:36 PM Marc Zyngier <maz@kernel.org> wrote:
> > I realised that I wasn't very forthcoming here. I've decided to put
> > the code where my mouth is and pushed out a branch [1] with your first
> > 10 patches, followed by my own take on this particular problem. It
> > compiles, and even managed to boot a Debian guest on a nVHE box.
> >
> > As you can see, most of the early exit handling is now moved to
> > specific handlers, unifying the handling. For the protected mode, you
> > can provide your own handler array (just hack
> > kvm_get_exit_handler_array() to return something else), which will do
> > the right thing as long as you call into the existing handlers first.
> > When it comes to the ELR/SPSR handling, it is better left to the
> > individual handlers (which we already do in some cases, see how we
> > skip instructions, for example).
> > Please let me know what you think.
> 
> Thanks a lot for this and sorry for being late to reply. I've been
> travelling.

No worries, it should be me who apologises for getting to this so late.

> I think that your proposal looks great. All handling is consolidated
> now and handling for protected VMs can just be added on top. There are
> some small issues with what parameters we need (e.g., passing struct
> kvm to kvm_get_exit_handler_array), but I will sort them out and
> submit them in the next round.

OK. Please base these changes on top of the three patches in my
branch, which I will update with actual commit messages.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.


Thread overview: 30+ messages
2021-08-17  8:11 [PATCH v4 00/15] KVM: arm64: Fixed features for protected VMs Fuad Tabba
2021-08-17  8:11 ` [PATCH v4 01/15] KVM: arm64: placeholder to check if VM is protected Fuad Tabba
2021-08-17  8:11 ` [PATCH v4 02/15] KVM: arm64: Remove trailing whitespace in comment Fuad Tabba
2021-08-17  8:11 ` [PATCH v4 03/15] KVM: arm64: MDCR_EL2 is a 64-bit register Fuad Tabba
2021-08-18 14:32   ` Marc Zyngier
2021-08-17  8:11 ` [PATCH v4 04/15] KVM: arm64: Fix names of config register fields Fuad Tabba
2021-08-17  8:11 ` [PATCH v4 05/15] KVM: arm64: Refactor sys_regs.h,c for nVHE reuse Fuad Tabba
2021-08-17  8:11 ` [PATCH v4 06/15] KVM: arm64: Restore mdcr_el2 from vcpu Fuad Tabba
2021-08-18 13:13   ` Will Deacon
2021-08-18 14:42   ` Marc Zyngier
2021-08-17  8:11 ` [PATCH v4 07/15] KVM: arm64: Keep mdcr_el2's value as set by __init_el2_debug Fuad Tabba
2021-08-18 13:17   ` Will Deacon
2021-08-17  8:11 ` [PATCH v4 08/15] KVM: arm64: Track value of cptr_el2 in struct kvm_vcpu_arch Fuad Tabba
2021-08-17  8:11 ` [PATCH v4 09/15] KVM: arm64: Add feature register flag definitions Fuad Tabba
2021-08-18 13:21   ` Will Deacon
2021-08-17  8:11 ` [PATCH v4 10/15] KVM: arm64: Add config register bit definitions Fuad Tabba
2021-08-18 15:16   ` Marc Zyngier
2021-08-17  8:11 ` [PATCH v4 11/15] KVM: arm64: Guest exit handlers for nVHE hyp Fuad Tabba
2021-08-18 16:45   ` Marc Zyngier
2021-08-19 14:35     ` Marc Zyngier
2021-08-23 10:21       ` Fuad Tabba
2021-08-23 12:10         ` Marc Zyngier
2021-08-17  8:11 ` [PATCH v4 12/15] KVM: arm64: Add trap handlers for protected VMs Fuad Tabba
2021-08-17  8:11 ` [PATCH v4 13/15] KVM: arm64: Move sanitized copies of CPU features Fuad Tabba
2021-08-17  8:11 ` [PATCH v4 14/15] KVM: arm64: Trap access to pVM restricted features Fuad Tabba
2021-08-17  8:11 ` [PATCH v4 15/15] KVM: arm64: Handle protected guests at 32 bits Fuad Tabba
2021-08-19  8:10   ` Oliver Upton
2021-08-23 10:25     ` Fuad Tabba
2021-08-20 10:34 ` [PATCH v4 00/15] KVM: arm64: Fixed features for protected VMs Marc Zyngier
2021-08-23 10:23   ` Fuad Tabba
