* [PATCH 00/37] Transform the host into a vCPU
From: Andrew Scull @ 2020-07-15 18:44 UTC
  To: kvmarm; +Cc: maz, kernel-team

As part of the effort to isolate hyp from the host on nVHE, this series
provides hyp with its very own context and views the host as a vcpu from
the point of view of context switching.

The journey begins by preparing for hyp-init to instantiate a run loop
in hyp that `__guest_enter`s back into the host. The interfaces then
migrate to SMCCC rather than the raw function pointer interface of today.

Next, the host state is fully migrated into its vcpu, leaving distinct
contexts for hyp and the host. Finally, the save and restore paths of
guests and the host are unified such that __kvm_vcpu_run can switch to
and run any vcpu it is given.
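
Conceptually, hyp's main loop ends up along these lines (a simplified
sketch of what patch 8 introduces; error handling elided):

	while (true) {
		u64 exit_code;

		/* Run the host as if it were a guest vcpu. */
		do {
			exit_code = __guest_enter(host_vcpu, hyp_ctxt);
		} while (fixup_guest_exit(host_vcpu, &exit_code));

		/* Service the exit, e.g. an HVC issued by the host. */
		if (ARM_EXCEPTION_CODE(exit_code) == ARM_EXCEPTION_TRAP)
			handle_trap(host_vcpu);
	}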

It's a long series, but that seems to be the way Marc likes things
around here, hehe. I've tried to keep the patches simple where possible
but let me know if there's ever too much in one go and I'll try and help
you out.

This has been lightly tested on qemu with both VHE and nVHE booting VMs.
More rigorous testing will be needed.

The first patch is already in arm64/for-next/misc as commit 7af928851508.
The second patch can also be seen in <20200713210505.2959828-2-ascull@google.com>

Andrew Scull (37):
  smccc: Make constants available to assembly
  KVM: arm64: Move clearing of vcpu debug dirty bit
  KVM: arm64: Track running vCPU outside of the CPU context
  KVM: arm64: nVHE: Pass pointers consistently to hyp-init
  KVM: arm64: nVHE: Break out of the hyp-init idmap
  KVM: arm64: Only check pending interrupts if it would trap
  KVM: arm64: Separate SError detection from VAXorcism
  KVM: arm64: nVHE: Introduce a hyp run loop for the host
  smccc: Cast arguments to unsigned long
  KVM: arm64: nVHE: Migrate hyp interface to SMCCC
  KVM: arm64: nVHE: Migrate hyp-init to SMCCC
  KVM: arm64: nVHE: Fix pointers during SMCCC conversion
  KVM: arm64: Rename workaround 2 helpers
  KVM: arm64: nVHE: Use __kvm_vcpu_run for the host vcpu
  KVM: arm64: Share some context save and restore macros
  KVM: arm64: nVHE: Handle stub HVCs in the host loop
  KVM: arm64: nVHE: Store host sysregs in host vcpu
  KVM: arm64: nVHE: Access pmu_events directly in kvm_host_data
  KVM: arm64: nVHE: Drop host_ctxt argument for context switching
  KVM: arm64: nVHE: Use host vcpu context for host debug state
  KVM: arm64: Move host debug state from vcpu to percpu
  KVM: arm64: nVHE: Store host's mdcr_el2 and hcr_el2 in its vcpu
  KVM: arm64: Skip __hyp_panic and go direct to hyp_panic
  KVM: arm64: Break apart kvm_host_data
  KVM: arm64: nVHE: Unify sysreg state saving paths
  KVM: arm64: nVHE: Unify 32-bit sysreg saving paths
  KVM: arm64: nVHE: Unify vgic save and restore
  KVM: arm64: nVHE: Unify fpexc32 saving paths
  KVM: arm64: nVHE: Separate the save and restore of debug state
  KVM: arm64: nVHE: Remove MMU assumption in speculative AT workaround
  KVM: arm64: Move speculative AT ISBs into context
  KVM: arm64: nVHE: Unify sysreg state restoration paths
  KVM: arm64: Remove __activate_vm wrapper
  KVM: arm64: nVHE: Unify timer restore paths
  KVM: arm64: nVHE: Unify PMU event restoration paths
  KVM: arm64: nVHE: Unify GIC PMR restoration paths
  KVM: arm64: Separate save and restore of vcpu trap state

 arch/arm64/include/asm/kvm_asm.h           |  73 +++++-
 arch/arm64/include/asm/kvm_host.h          |  57 ++---
 arch/arm64/include/asm/kvm_hyp.h           |  33 ++-
 arch/arm64/include/asm/kvm_mmu.h           |   7 -
 arch/arm64/kernel/asm-offsets.c            |   2 -
 arch/arm64/kernel/image-vars.h             |   6 +-
 arch/arm64/kvm/Makefile                    |   2 +-
 arch/arm64/kvm/arm.c                       |  75 +++++-
 arch/arm64/kvm/debug.c                     |   2 +
 arch/arm64/kvm/hyp.S                       |  34 ---
 arch/arm64/kvm/hyp/entry.S                 |  92 +++----
 arch/arm64/kvm/hyp/hyp-entry.S             |  81 +-----
 arch/arm64/kvm/hyp/include/hyp/debug-sr.h  |  49 +---
 arch/arm64/kvm/hyp/include/hyp/switch.h    |  30 ++-
 arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h |  46 ++--
 arch/arm64/kvm/hyp/nvhe/Makefile           |   3 +-
 arch/arm64/kvm/hyp/nvhe/debug-sr.c         |  28 +-
 arch/arm64/kvm/hyp/nvhe/hyp-init.S         |  86 ++++---
 arch/arm64/kvm/hyp/nvhe/hyp-main.c         | 218 ++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/hyp-start.S        |  49 ++++
 arch/arm64/kvm/hyp/nvhe/switch.c           | 282 +++++++++++----------
 arch/arm64/kvm/hyp/nvhe/timer-sr.c         |  35 +--
 arch/arm64/kvm/hyp/nvhe/tlb.c              |  19 +-
 arch/arm64/kvm/hyp/vhe/debug-sr.c          |  34 ++-
 arch/arm64/kvm/hyp/vhe/switch.c            |  30 +--
 arch/arm64/kvm/hyp/vhe/sysreg-sr.c         |   4 +-
 arch/arm64/kvm/hyp/vhe/tlb.c               |   4 +-
 arch/arm64/kvm/pmu.c                       |  28 +-
 arch/arm64/kvm/vgic/vgic-v3.c              |   4 +-
 include/linux/arm-smccc.h                  |  64 ++---
 30 files changed, 873 insertions(+), 604 deletions(-)
 delete mode 100644 arch/arm64/kvm/hyp.S
 create mode 100644 arch/arm64/kvm/hyp/nvhe/hyp-main.c
 create mode 100644 arch/arm64/kvm/hyp/nvhe/hyp-start.S

-- 
2.27.0.389.gc38d7665816-goog


* [PATCH 01/37] smccc: Make constants available to assembly
From: Andrew Scull @ 2020-07-15 18:44 UTC
  To: kvmarm; +Cc: maz, kernel-team

These do not need to be restricted to C code, so move them out next to
the other constants in the file.

Signed-off-by: Andrew Scull <ascull@google.com>
---
 include/linux/arm-smccc.h | 44 +++++++++++++++++++--------------------
 1 file changed, 22 insertions(+), 22 deletions(-)

diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
index 56d6a5c6e353..efcbde731f03 100644
--- a/include/linux/arm-smccc.h
+++ b/include/linux/arm-smccc.h
@@ -81,6 +81,28 @@
 			   ARM_SMCCC_SMC_32,				\
 			   0, 0x7fff)
 
+/* Paravirtualised time calls (defined by ARM DEN0057A) */
+#define ARM_SMCCC_HV_PV_TIME_FEATURES				\
+	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,			\
+			   ARM_SMCCC_SMC_64,			\
+			   ARM_SMCCC_OWNER_STANDARD_HYP,	\
+			   0x20)
+
+#define ARM_SMCCC_HV_PV_TIME_ST					\
+	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,			\
+			   ARM_SMCCC_SMC_64,			\
+			   ARM_SMCCC_OWNER_STANDARD_HYP,	\
+			   0x21)
+
+/*
+ * Return codes defined in ARM DEN 0070A
+ * ARM DEN 0070A is now merged/consolidated into ARM DEN 0028 C
+ */
+#define SMCCC_RET_SUCCESS			0
+#define SMCCC_RET_NOT_SUPPORTED			-1
+#define SMCCC_RET_NOT_REQUIRED			-2
+#define SMCCC_RET_INVALID_PARAMETER		-3
+
 #ifndef __ASSEMBLY__
 
 #include <linux/linkage.h>
@@ -331,15 +353,6 @@ asmlinkage void __arm_smccc_hvc(unsigned long a0, unsigned long a1,
  */
 #define arm_smccc_1_1_hvc(...)	__arm_smccc_1_1(SMCCC_HVC_INST, __VA_ARGS__)
 
-/*
- * Return codes defined in ARM DEN 0070A
- * ARM DEN 0070A is now merged/consolidated into ARM DEN 0028 C
- */
-#define SMCCC_RET_SUCCESS			0
-#define SMCCC_RET_NOT_SUPPORTED			-1
-#define SMCCC_RET_NOT_REQUIRED			-2
-#define SMCCC_RET_INVALID_PARAMETER		-3
-
 /*
  * Like arm_smccc_1_1* but always returns SMCCC_RET_NOT_SUPPORTED.
  * Used when the SMCCC conduit is not defined. The empty asm statement
@@ -385,18 +398,5 @@ asmlinkage void __arm_smccc_hvc(unsigned long a0, unsigned long a1,
 		method;							\
 	})
 
-/* Paravirtualised time calls (defined by ARM DEN0057A) */
-#define ARM_SMCCC_HV_PV_TIME_FEATURES				\
-	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,			\
-			   ARM_SMCCC_SMC_64,			\
-			   ARM_SMCCC_OWNER_STANDARD_HYP,	\
-			   0x20)
-
-#define ARM_SMCCC_HV_PV_TIME_ST					\
-	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,			\
-			   ARM_SMCCC_SMC_64,			\
-			   ARM_SMCCC_OWNER_STANDARD_HYP,	\
-			   0x21)
-
 #endif /*__ASSEMBLY__*/
 #endif /*__LINUX_ARM_SMCCC_H*/
-- 
2.27.0.389.gc38d7665816-goog


* [PATCH 02/37] KVM: arm64: Move clearing of vcpu debug dirty bit
From: Andrew Scull @ 2020-07-15 18:44 UTC
  To: kvmarm; +Cc: maz, kernel-team

Note: this has been sent out to kvmarm separately.

The dirty bit was previously being cleared by hyp; however, it is more
consistent to manage the bit outside of hyp and have hyp simply read
and respect the state it is given.
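
With this change hyp only consumes the flag; the guards in debug-sr.h
boil down to (illustrative sketch):

	if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
		return;	/* the host now owns the dirty state */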

Signed-off-by: Andrew Scull <ascull@google.com>
---
 arch/arm64/include/asm/kvm_host.h         | 2 +-
 arch/arm64/kvm/debug.c                    | 2 ++
 arch/arm64/kvm/hyp/include/hyp/debug-sr.h | 2 --
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index e1a32c0707bb..b06f24b5f443 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -404,7 +404,7 @@ struct kvm_vcpu_arch {
 })
 
 /* vcpu_arch flags field values: */
-#define KVM_ARM64_DEBUG_DIRTY		(1 << 0)
+#define KVM_ARM64_DEBUG_DIRTY		(1 << 0) /* vcpu is using debug */
 #define KVM_ARM64_FP_ENABLED		(1 << 1) /* guest FP regs loaded */
 #define KVM_ARM64_FP_HOST		(1 << 2) /* host FP regs loaded */
 #define KVM_ARM64_HOST_SVE_IN_USE	(1 << 3) /* backup for host TIF_SVE */
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index 7a7e425616b5..e9932618a362 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -209,6 +209,8 @@ void kvm_arm_clear_debug(struct kvm_vcpu *vcpu)
 {
 	trace_kvm_arm_clear_debug(vcpu->guest_debug);
 
+	vcpu->arch.flags &= ~KVM_ARM64_DEBUG_DIRTY;
+
 	if (vcpu->guest_debug) {
 		restore_guest_debug_regs(vcpu);
 
diff --git a/arch/arm64/kvm/hyp/include/hyp/debug-sr.h b/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
index 0297dc63988c..50ca5d048017 100644
--- a/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
+++ b/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
@@ -161,8 +161,6 @@ static inline void __debug_switch_to_host_common(struct kvm_vcpu *vcpu)
 
 	__debug_save_state(guest_dbg, guest_ctxt);
 	__debug_restore_state(host_dbg, host_ctxt);
-
-	vcpu->arch.flags &= ~KVM_ARM64_DEBUG_DIRTY;
 }
 
 #endif /* __ARM64_KVM_HYP_DEBUG_SR_H__ */
-- 
2.27.0.389.gc38d7665816-goog


* [PATCH 03/37] KVM: arm64: Track running vCPU outside of the CPU context
From: Andrew Scull @ 2020-07-15 18:44 UTC
  To: kvmarm; +Cc: maz, kernel-team

The running vCPU has little to do with the CPU context so extract it to
a percpu variable.

Unfortunately, the __hyp_running_vcpu field was also being used to
identify the CPU context of the host in the nVHE speculative AT
workaround, since it will be non-NULL for the host and NULL for the
guests. This is handled by adding an explicit identification of the
host context, which also makes the workaround conditions much easier to
comprehend.
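
For example, the nVHE side of the speculative AT workaround now keys
directly off the context (as in the sysreg-sr.h hunk below):

	if (!has_vhe() &&
	    cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT) &&
	    ctxt->is_host) {
		/* must only be done for host registers, per the comment */
	}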

Signed-off-by: Andrew Scull <ascull@google.com>
---
 arch/arm64/include/asm/kvm_asm.h           |  5 -----
 arch/arm64/include/asm/kvm_host.h          |  4 +++-
 arch/arm64/include/asm/kvm_hyp.h           |  2 ++
 arch/arm64/kernel/asm-offsets.c            |  1 -
 arch/arm64/kernel/image-vars.h             |  1 +
 arch/arm64/kvm/arm.c                       | 10 ++++++++++
 arch/arm64/kvm/hyp/hyp-entry.S             | 10 +++++-----
 arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h |  4 ++--
 arch/arm64/kvm/hyp/nvhe/switch.c           |  4 ++--
 arch/arm64/kvm/hyp/vhe/switch.c            |  5 ++---
 10 files changed, 27 insertions(+), 19 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index fb1a922b31ba..ebe9d582f360 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -188,11 +188,6 @@ extern char __smccc_workaround_1_smc[__SMCCC_WORKAROUND_1_SMC_SZ];
 	add	\reg, \reg, #HOST_DATA_CONTEXT
 .endm
 
-.macro get_vcpu_ptr vcpu, ctxt
-	get_host_ctxt \ctxt, \vcpu
-	ldr	\vcpu, [\ctxt, #HOST_CONTEXT_VCPU]
-.endm
-
 #endif
 
 #endif /* __ARM_KVM_ASM_H__ */
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index b06f24b5f443..67a760d08b6e 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -260,7 +260,7 @@ struct kvm_cpu_context {
 		u32 copro[NR_COPRO_REGS];
 	};
 
-	struct kvm_vcpu *__hyp_running_vcpu;
+	bool is_host;
 };
 
 struct kvm_pmu_events {
@@ -578,6 +578,8 @@ DECLARE_PER_CPU(kvm_host_data_t, kvm_host_data);
 
 static inline void kvm_init_host_cpu_context(struct kvm_cpu_context *cpu_ctxt)
 {
+	cpu_ctxt->is_host = true;
+
 	/* The host's MPIDR is immutable, so let's set it up at boot time */
 	ctxt_sys_reg(cpu_ctxt, MPIDR_EL1) = read_cpuid_mpidr();
 }
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 46689e7db46c..07745d9c49fc 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -12,6 +12,8 @@
 #include <asm/alternative.h>
 #include <asm/sysreg.h>
 
+DECLARE_PER_CPU(struct kvm_vcpu *, kvm_hyp_running_vcpu);
+
 #define read_sysreg_elx(r,nvh,vh)					\
 	({								\
 		u64 reg;						\
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 7d32fc959b1a..cbe6a0def777 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -108,7 +108,6 @@ int main(void)
   DEFINE(CPU_APDAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APDAKEYLO_EL1]));
   DEFINE(CPU_APDBKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APDBKEYLO_EL1]));
   DEFINE(CPU_APGAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APGAKEYLO_EL1]));
-  DEFINE(HOST_CONTEXT_VCPU,	offsetof(struct kvm_cpu_context, __hyp_running_vcpu));
   DEFINE(HOST_DATA_CONTEXT,	offsetof(struct kvm_host_data, host_ctxt));
 #endif
 #ifdef CONFIG_CPU_PM
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 9e897c500237..dfe0f37567f3 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -71,6 +71,7 @@ KVM_NVHE_ALIAS(kvm_update_va_mask);
 /* Global kernel state accessed by nVHE hyp code. */
 KVM_NVHE_ALIAS(arm64_ssbd_callback_required);
 KVM_NVHE_ALIAS(kvm_host_data);
+KVM_NVHE_ALIAS(kvm_hyp_running_vcpu);
 KVM_NVHE_ALIAS(kvm_vgic_global_state);
 
 /* Kernel constant needed to compute idmap addresses. */
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 98f05bdac3c1..94f4481e5637 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -47,6 +47,7 @@ __asm__(".arch_extension	virt");
 #endif
 
 DEFINE_PER_CPU(kvm_host_data_t, kvm_host_data);
+DEFINE_PER_CPU(struct kvm_vcpu *, kvm_hyp_running_vcpu);
 static DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
 
 /* The VMID used in the VTTBR */
@@ -1537,6 +1538,7 @@ static int init_hyp_mode(void)
 
 	for_each_possible_cpu(cpu) {
 		kvm_host_data_t *cpu_data;
+		struct kvm_vcpu **running_vcpu;
 
 		cpu_data = per_cpu_ptr(&kvm_host_data, cpu);
 		err = create_hyp_mappings(cpu_data, cpu_data + 1, PAGE_HYP);
@@ -1545,6 +1547,14 @@ static int init_hyp_mode(void)
 			kvm_err("Cannot map host CPU state: %d\n", err);
 			goto out_err;
 		}
+
+		running_vcpu = per_cpu_ptr(&kvm_hyp_running_vcpu, cpu);
+		err = create_hyp_mappings(running_vcpu, running_vcpu + 1, PAGE_HYP);
+
+		if (err) {
+			kvm_err("Cannot map running vCPU: %d\n", err);
+			goto out_err;
+		}
 	}
 
 	err = hyp_map_aux_data();
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index 689fccbc9de7..e727bee8e110 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -96,7 +96,7 @@ el1_hvc_guest:
 alternative_cb	arm64_enable_wa2_handling
 	b	wa2_end
 alternative_cb_end
-	get_vcpu_ptr	x2, x0
+	hyp_ldr_this_cpu x2, kvm_hyp_running_vcpu, x0
 	ldr	x0, [x2, #VCPU_WORKAROUND_FLAGS]
 
 	// Sanitize the argument and update the guest flags
@@ -128,17 +128,17 @@ wa_epilogue:
 	sb
 
 el1_trap:
-	get_vcpu_ptr	x1, x0
+	hyp_ldr_this_cpu x1, kvm_hyp_running_vcpu, x0
 	mov	x0, #ARM_EXCEPTION_TRAP
 	b	__guest_exit
 
 el1_irq:
-	get_vcpu_ptr	x1, x0
+	hyp_ldr_this_cpu x1, kvm_hyp_running_vcpu, x0
 	mov	x0, #ARM_EXCEPTION_IRQ
 	b	__guest_exit
 
 el1_error:
-	get_vcpu_ptr	x1, x0
+	hyp_ldr_this_cpu x1, kvm_hyp_running_vcpu, x0
 	mov	x0, #ARM_EXCEPTION_EL1_SERROR
 	b	__guest_exit
 
@@ -151,7 +151,7 @@ el2_sync:
 	b.eq	__hyp_panic
 
 	/* Let's attempt a recovery from the illegal exception return */
-	get_vcpu_ptr	x1, x0
+	hyp_ldr_this_cpu x1, kvm_hyp_running_vcpu, x0
 	mov	x0, #ARM_EXCEPTION_IL
 	b	__guest_exit
 
diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
index 7a986030145f..c55b2d17ada8 100644
--- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
+++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
@@ -80,7 +80,7 @@ static inline void __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt)
 	    !cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
 		write_sysreg_el1(ctxt_sys_reg(ctxt, SCTLR_EL1),	SYS_SCTLR);
 		write_sysreg_el1(ctxt_sys_reg(ctxt, TCR_EL1),	SYS_TCR);
-	} else	if (!ctxt->__hyp_running_vcpu) {
+	} else	if (!ctxt->is_host) {
 		/*
 		 * Must only be done for guest registers, hence the context
 		 * test. We're coming from the host, so SCTLR.M is already
@@ -109,7 +109,7 @@ static inline void __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt)
 
 	if (!has_vhe() &&
 	    cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT) &&
-	    ctxt->__hyp_running_vcpu) {
+	    ctxt->is_host) {
 		/*
 		 * Must only be done for host registers, hence the context
 		 * test. Pairs with nVHE's __deactivate_traps().
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 341be2f2f312..99cf90200bf7 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -176,7 +176,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	vcpu = kern_hyp_va(vcpu);
 
 	host_ctxt = &__hyp_this_cpu_ptr(kvm_host_data)->host_ctxt;
-	host_ctxt->__hyp_running_vcpu = vcpu;
+	*__hyp_this_cpu_ptr(kvm_hyp_running_vcpu) = vcpu;
 	guest_ctxt = &vcpu->arch.ctxt;
 
 	pmu_switch_needed = __pmu_switch_to_guest(host_ctxt);
@@ -247,7 +247,7 @@ void __noreturn hyp_panic(struct kvm_cpu_context *host_ctxt)
 	u64 spsr = read_sysreg_el2(SYS_SPSR);
 	u64 elr = read_sysreg_el2(SYS_ELR);
 	u64 par = read_sysreg(par_el1);
-	struct kvm_vcpu *vcpu = host_ctxt->__hyp_running_vcpu;
+	struct kvm_vcpu *vcpu = __hyp_this_cpu_read(kvm_hyp_running_vcpu);
 	unsigned long str_va;
 
 	if (read_sysreg(vttbr_el2)) {
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index c52d714e0d75..366dc386224c 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -109,7 +109,7 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 	u64 exit_code;
 
 	host_ctxt = &__hyp_this_cpu_ptr(kvm_host_data)->host_ctxt;
-	host_ctxt->__hyp_running_vcpu = vcpu;
+	*__hyp_this_cpu_ptr(kvm_hyp_running_vcpu) = vcpu;
 	guest_ctxt = &vcpu->arch.ctxt;
 
 	sysreg_save_host_state_vhe(host_ctxt);
@@ -195,8 +195,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 static void __hyp_call_panic(u64 spsr, u64 elr, u64 par,
 			     struct kvm_cpu_context *host_ctxt)
 {
-	struct kvm_vcpu *vcpu;
-	vcpu = host_ctxt->__hyp_running_vcpu;
+	struct kvm_vcpu *vcpu = __hyp_this_cpu_read(kvm_hyp_running_vcpu);
 
 	__deactivate_traps(vcpu);
 	sysreg_restore_host_state_vhe(host_ctxt);
-- 
2.27.0.389.gc38d7665816-goog


* [PATCH 04/37] KVM: arm64: nVHE: Pass pointers consistently to hyp-init
From: Andrew Scull @ 2020-07-15 18:44 UTC
  To: kvmarm; +Cc: maz, kernel-team

Rather than some being kernel pointers and others being hyp pointers,
standardize on all pointers being hyp pointers.

Signed-off-by: Andrew Scull <ascull@google.com>
---
 arch/arm64/kvm/arm.c               | 1 +
 arch/arm64/kvm/hyp/nvhe/hyp-init.S | 1 -
 2 files changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 94f4481e5637..bf662c0dd409 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1275,6 +1275,7 @@ static void cpu_init_hyp_mode(void)
 
 	pgd_ptr = kvm_mmu_get_httbr();
 	hyp_stack_ptr = __this_cpu_read(kvm_arm_hyp_stack_page) + PAGE_SIZE;
+	hyp_stack_ptr = kern_hyp_va(hyp_stack_ptr);
 	vector_ptr = (unsigned long)kvm_get_hyp_vector();
 
 	/*
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-init.S b/arch/arm64/kvm/hyp/nvhe/hyp-init.S
index 1587d146726a..f80f169a8442 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-init.S
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-init.S
@@ -113,7 +113,6 @@ alternative_else_nop_endif
 	isb
 
 	/* Set the stack and new vectors */
-	kern_hyp_va	x1
 	mov	sp, x1
 	msr	vbar_el2, x2
 
-- 
2.27.0.389.gc38d7665816-goog


* [PATCH 05/37] KVM: arm64: nVHE: Break out of the hyp-init idmap
From: Andrew Scull @ 2020-07-15 18:44 UTC
  To: kvmarm; +Cc: maz, kernel-team

The idmap is a restrictive context as it is located separately from the
rest of hyp. Extra care must be taken to avoid symbol references that
will not resolve correctly, so the less that has to be done in this
context, the better.

To overcome this, pass a pre-resolved pointer into hyp-init that it can
jump to and escape the idmap once it has installed the page table.

This patch just introduces the mechanism to break out of the idmap for
use in future changes.
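
On the host side, the pre-resolved pointer amounts to (as in the arm.c
hunk below):

	DECLARE_KVM_NVHE_SYM(__kvm_hyp_start);

	start_hyp = (unsigned long)kern_hyp_va(kvm_ksym_ref_nvhe(__kvm_hyp_start));
	/* handed to __do_hyp_init in a register; hyp-init branches to it */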

Signed-off-by: Andrew Scull <ascull@google.com>
---
 arch/arm64/kvm/arm.c                |  7 ++++++-
 arch/arm64/kvm/hyp/nvhe/Makefile    |  3 ++-
 arch/arm64/kvm/hyp/nvhe/hyp-init.S  | 13 +++++++------
 arch/arm64/kvm/hyp/nvhe/hyp-start.S | 16 ++++++++++++++++
 4 files changed, 31 insertions(+), 8 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/nvhe/hyp-start.S

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index bf662c0dd409..52be6149fcbf 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1257,9 +1257,12 @@ long kvm_arch_vm_ioctl(struct file *filp,
 
 static void cpu_init_hyp_mode(void)
 {
+	DECLARE_KVM_NVHE_SYM(__kvm_hyp_start);
+
 	phys_addr_t pgd_ptr;
 	unsigned long hyp_stack_ptr;
 	unsigned long vector_ptr;
+	unsigned long start_hyp;
 	unsigned long tpidr_el2;
 
 	/* Switch from the HYP stub to our own HYP init vector */
@@ -1277,6 +1280,7 @@ static void cpu_init_hyp_mode(void)
 	hyp_stack_ptr = __this_cpu_read(kvm_arm_hyp_stack_page) + PAGE_SIZE;
 	hyp_stack_ptr = kern_hyp_va(hyp_stack_ptr);
 	vector_ptr = (unsigned long)kvm_get_hyp_vector();
+	start_hyp = (unsigned long)kern_hyp_va(kvm_ksym_ref_nvhe(__kvm_hyp_start));
 
 	/*
 	 * Call initialization code, and switch to the full blown HYP code.
@@ -1285,7 +1289,8 @@ static void cpu_init_hyp_mode(void)
 	 * cpus_have_const_cap() wrapper.
 	 */
 	BUG_ON(!system_capabilities_finalized());
-	__kvm_call_hyp((void *)pgd_ptr, hyp_stack_ptr, vector_ptr, tpidr_el2);
+	__kvm_call_hyp((void *)pgd_ptr, hyp_stack_ptr, vector_ptr, start_hyp,
+		       tpidr_el2);
 
 	/*
 	 * Disabling SSBD on a non-VHE system requires us to enable SSBS
diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index 0b34414557d6..1f3a39efaa6e 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -6,7 +6,8 @@
 asflags-y := -D__KVM_NVHE_HYPERVISOR__
 ccflags-y := -D__KVM_NVHE_HYPERVISOR__
 
-obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o
+obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o \
+	 hyp-start.o
 obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
 	 ../fpsimd.o ../hyp-entry.o
 
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-init.S b/arch/arm64/kvm/hyp/nvhe/hyp-init.S
index f80f169a8442..029c51365d03 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-init.S
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-init.S
@@ -46,13 +46,17 @@ __invalid:
 	 * x0: HYP pgd
 	 * x1: HYP stack
 	 * x2: HYP vectors
-	 * x3: per-CPU offset
+	 * x3: __kvm_hyp_start HYP address
+	 * x4: per-CPU offset
 	 */
 __do_hyp_init:
 	/* Check for a stub HVC call */
 	cmp	x0, #HVC_STUB_HCALL_NR
 	b.lo	__kvm_handle_stub_hvc
 
+	/* Set tpidr_el2 for use by HYP */
+	msr	tpidr_el2, x4
+
 	phys_to_ttbr x4, x0
 alternative_if ARM64_HAS_CNP
 	orr	x4, x4, #TTBR_CNP_BIT
@@ -116,11 +120,8 @@ alternative_else_nop_endif
 	mov	sp, x1
 	msr	vbar_el2, x2
 
-	/* Set tpidr_el2 for use by HYP */
-	msr	tpidr_el2, x3
-
-	/* Hello, World! */
-	eret
+	/* Leave the idmap posthaste and head over to __kvm_hyp_start */
+	br	x3
 SYM_CODE_END(__kvm_hyp_init)
 
 SYM_CODE_START(__kvm_handle_stub_hvc)
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-start.S b/arch/arm64/kvm/hyp/nvhe/hyp-start.S
new file mode 100644
index 000000000000..5f7fbcb57fd5
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-start.S
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2020 - Google Inc
+ * Author: Andrew Scull <ascull@google.com>
+ */
+
+#include <linux/linkage.h>
+
+#include <asm/assembler.h>
+#include <asm/asm-offsets.h>
+#include <asm/kvm_asm.h>
+
+SYM_CODE_START(__kvm_hyp_start)
+	/* Hello, World! */
+	eret
+SYM_CODE_END(__kvm_hyp_start)
-- 
2.27.0.389.gc38d7665816-goog


* [PATCH 06/37] KVM: arm64: Only check pending interrupts if it would trap
From: Andrew Scull @ 2020-07-15 18:44 UTC
  To: kvmarm; +Cc: maz, kernel-team

Allow entry to a vcpu that can handle interrupts even if there is an
interrupt pending. Entry will still be aborted if the vcpu cannot
handle interrupts itself.

This allows use of __guest_enter to enter into the host.
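
In C terms, the added check amounts to (illustrative sketch only):

	/* Only defer entry for a pending IRQ if it would trap to hyp. */
	if ((read_sysreg(hcr_el2) & HCR_IMO) && read_sysreg(isr_el1))
		return ARM_EXCEPTION_IRQ;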

Signed-off-by: Andrew Scull <ascull@google.com>
---
 arch/arm64/kvm/hyp/entry.S | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index ee32a7743389..6a641fcff4f7 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -73,13 +73,17 @@ SYM_FUNC_START(__guest_enter)
 	save_sp_el0	x1, x2
 
 	// Now the host state is stored if we have a pending RAS SError it must
-	// affect the host. If any asynchronous exception is pending we defer
-	// the guest entry. The DSB isn't necessary before v8.2 as any SError
-	// would be fatal.
+	// affect the host. If physical IRQ interrupts are going to be trapped
+	// and there are already asynchronous exceptions pending then we defer
+	// the entry. The DSB isn't necessary before v8.2 as any SError would
+	// be fatal.
 alternative_if ARM64_HAS_RAS_EXTN
 	dsb	nshst
 	isb
 alternative_else_nop_endif
+	mrs	x1, hcr_el2
+	and	x1, x1, #HCR_IMO
+	cbz	x1, 1f
 	mrs	x1, isr_el1
 	cbz	x1,  1f
 	mov	x0, #ARM_EXCEPTION_IRQ
-- 
2.27.0.389.gc38d7665816-goog


* [PATCH 07/37] KVM: arm64: Separate SError detection from VAXorcism
From: Andrew Scull @ 2020-07-15 18:44 UTC
  To: kvmarm; +Cc: maz, kernel-team

When exiting a guest, just check whether there is an SError pending and
set the bit in the exit code. The fixup then initiates the ceremony
should it be required.

The separation allows for easier choices to be made as to whether the
demonic consultation should proceed.
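
This gives fixup_guest_exit the final say. A later patch in this series
builds on the split so that host SErrors can be left pending, i.e. the
condition becomes:

	/* Flush guest SErrors but leave them pending for the host. */
	if (ARM_SERROR_PENDING(*exit_code) && !vcpu->arch.ctxt.is_host)
		__vaxorcize_serror();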

Signed-off-by: Andrew Scull <ascull@google.com>
---
 arch/arm64/include/asm/kvm_hyp.h        |  2 ++
 arch/arm64/kvm/hyp/entry.S              | 27 +++++++++++++++++--------
 arch/arm64/kvm/hyp/hyp-entry.S          |  1 -
 arch/arm64/kvm/hyp/include/hyp/switch.h |  4 ++++
 4 files changed, 25 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 07745d9c49fc..50a774812761 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -91,6 +91,8 @@ void deactivate_traps_vhe_put(void);
 
 u64 __guest_enter(struct kvm_vcpu *vcpu, struct kvm_cpu_context *host_ctxt);
 
+void __vaxorcize_serror(void);
+
 void __noreturn hyp_panic(struct kvm_cpu_context *host_ctxt);
 #ifdef __KVM_NVHE_HYPERVISOR__
 void __noreturn __hyp_do_panic(unsigned long, ...);
diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index 6a641fcff4f7..dc4e3e7e7407 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -174,18 +174,31 @@ alternative_if ARM64_HAS_RAS_EXTN
 	mrs_s	x2, SYS_DISR_EL1
 	str	x2, [x1, #(VCPU_FAULT_DISR - VCPU_CONTEXT)]
 	cbz	x2, 1f
-	msr_s	SYS_DISR_EL1, xzr
 	orr	x0, x0, #(1<<ARM_EXIT_WITH_SERROR_BIT)
-1:	ret
+	nop
+1:
 alternative_else
 	dsb	sy		// Synchronize against in-flight ld/st
 	isb			// Prevent an early read of side-effect free ISR
 	mrs	x2, isr_el1
-	tbnz	x2, #8, 2f	// ISR_EL1.A
-	ret
-	nop
+	tbz	x2, #8, 2f	// ISR_EL1.A
+	orr	x0, x0, #(1<<ARM_EXIT_WITH_SERROR_BIT)
 2:
 alternative_endif
+
+	ret
+SYM_FUNC_END(__guest_enter)
+
+/*
+ * void __vaxorcize_serror(void);
+ */
+SYM_FUNC_START(__vaxorcize_serror)
+
+alternative_if ARM64_HAS_RAS_EXTN
+	msr_s	SYS_DISR_EL1, xzr
+	ret
+alternative_else_nop_endif
+
 	// We know we have a pending asynchronous abort, now is the
 	// time to flush it out. From your VAXorcist book, page 666:
 	// "Threaten me not, oh Evil one!  For I speak with
@@ -193,7 +206,6 @@ alternative_endif
 	mrs	x2, elr_el2
 	mrs	x3, esr_el2
 	mrs	x4, spsr_el2
-	mov	x5, x0
 
 	msr	daifclr, #4	// Unmask aborts
 
@@ -217,6 +229,5 @@ abort_guest_exit_end:
 	msr	elr_el2, x2
 	msr	esr_el2, x3
 	msr	spsr_el2, x4
-	orr	x0, x0, x5
 1:	ret
-SYM_FUNC_END(__guest_enter)
+SYM_FUNC_END(__vaxorcize_serror)
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index e727bee8e110..c441aabb8ab0 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -177,7 +177,6 @@ el2_error:
 	adr	x1, abort_guest_exit_end
 	ccmp	x0, x1, #4, ne
 	b.ne	__hyp_panic
-	mov	x0, #(1 << ARM_EXIT_WITH_SERROR_BIT)
 	eret
 	sb
 
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 0511af14dc81..14a774d1a35a 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -405,6 +405,10 @@ static inline bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu)
  */
 static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
+	/* Flush guest SErrors. */
+	if (ARM_SERROR_PENDING(*exit_code))
+		__vaxorcize_serror();
+
 	if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ)
 		vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR);
 
-- 
2.27.0.389.gc38d7665816-goog


* [PATCH 08/37] KVM: arm64: nVHE: Introduce a hyp run loop for the host
From: Andrew Scull @ 2020-07-15 18:44 UTC
  To: kvmarm; +Cc: maz, kernel-team

After installing the page tables and exception vector, the call to
__do_hyp_init no longer directly returns to the host with an eret but,
instead, begins to treat the host as a vCPU and repeatedly __guest_enters
into it.

As a result, hyp is endowed with its very own context for the general
purpose registers. However, at this point in time, the state is stored
in a confusing way:
   - hyp gp_regs and ptrauth are stored in the kvm_host_data context
   - host gp_regs and ptrauth are stored in kvm_host_vcpu
   - other host sysregs are stored in the kvm_host_data context

This is the initial step in the migration but all the host registers
will need to be moved into kvm_host_vcpu for the migration to be
complete.

Signed-off-by: Andrew Scull <ascull@google.com>
---
 arch/arm64/include/asm/kvm_host.h       |  5 ++
 arch/arm64/include/asm/kvm_hyp.h        |  3 +
 arch/arm64/kernel/image-vars.h          |  1 +
 arch/arm64/kvm/arm.c                    | 10 +++
 arch/arm64/kvm/hyp/entry.S              |  4 +-
 arch/arm64/kvm/hyp/hyp-entry.S          | 29 +-------
 arch/arm64/kvm/hyp/include/hyp/switch.h |  4 +-
 arch/arm64/kvm/hyp/nvhe/Makefile        |  2 +-
 arch/arm64/kvm/hyp/nvhe/hyp-main.c      | 90 +++++++++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/hyp-start.S     | 39 ++++++++++-
 10 files changed, 154 insertions(+), 33 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/nvhe/hyp-main.c

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 67a760d08b6e..183312340d2c 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -413,6 +413,11 @@ struct kvm_vcpu_arch {
 #define KVM_ARM64_VCPU_SVE_FINALIZED	(1 << 6) /* SVE config completed */
 #define KVM_ARM64_GUEST_HAS_PTRAUTH	(1 << 7) /* PTRAUTH exposed to guest */
 
+#define KVM_ARM64_HOST_VCPU_FLAGS KVM_ARM64_DEBUG_DIRTY			\
+				| KVM_ARM64_GUEST_HAS_SVE		\
+				| KVM_ARM64_VCPU_SVE_FINALIZED		\
+				| KVM_ARM64_GUEST_HAS_PTRAUTH
+
 #define vcpu_has_sve(vcpu) (system_supports_sve() && \
 			    ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
 
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 50a774812761..d6915ab60e1f 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -13,6 +13,9 @@
 #include <asm/sysreg.h>
 
 DECLARE_PER_CPU(struct kvm_vcpu *, kvm_hyp_running_vcpu);
+#ifdef __KVM_NVHE_HYPERVISOR__
+DECLARE_PER_CPU(struct kvm_vcpu, kvm_host_vcpu);
+#endif
 
 #define read_sysreg_elx(r,nvh,vh)					\
 	({								\
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index dfe0f37567f3..5b93da2359d4 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -71,6 +71,7 @@ KVM_NVHE_ALIAS(kvm_update_va_mask);
 /* Global kernel state accessed by nVHE hyp code. */
 KVM_NVHE_ALIAS(arm64_ssbd_callback_required);
 KVM_NVHE_ALIAS(kvm_host_data);
+KVM_NVHE_ALIAS(kvm_host_vcpu);
 KVM_NVHE_ALIAS(kvm_hyp_running_vcpu);
 KVM_NVHE_ALIAS(kvm_vgic_global_state);
 
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 52be6149fcbf..8bd4630666ca 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -47,6 +47,7 @@ __asm__(".arch_extension	virt");
 #endif
 
 DEFINE_PER_CPU(kvm_host_data_t, kvm_host_data);
+DEFINE_PER_CPU(struct kvm_vcpu, kvm_host_vcpu);
 DEFINE_PER_CPU(struct kvm_vcpu *, kvm_hyp_running_vcpu);
 static DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
 
@@ -1544,6 +1545,7 @@ static int init_hyp_mode(void)
 
 	for_each_possible_cpu(cpu) {
 		kvm_host_data_t *cpu_data;
+		struct kvm_vcpu *host_vcpu;
 		struct kvm_vcpu **running_vcpu;
 
 		cpu_data = per_cpu_ptr(&kvm_host_data, cpu);
@@ -1554,6 +1556,14 @@ static int init_hyp_mode(void)
 			goto out_err;
 		}
 
+		host_vcpu = per_cpu_ptr(&kvm_host_vcpu, cpu);
+		err = create_hyp_mappings(host_vcpu, host_vcpu + 1, PAGE_HYP);
+
+		if (err) {
+			kvm_err("Cannot map host vCPU: %d\n", err);
+			goto out_err;
+		}
+
 		running_vcpu = per_cpu_ptr(&kvm_hyp_running_vcpu, cpu);
 		err = create_hyp_mappings(running_vcpu, running_vcpu + 1, PAGE_HYP);
 
diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index dc4e3e7e7407..da349c152791 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -72,8 +72,8 @@ SYM_FUNC_START(__guest_enter)
 	// Save the host's sp_el0
 	save_sp_el0	x1, x2
 
-	// Now the host state is stored if we have a pending RAS SError it must
-	// affect the host. If physical IRQ interrupts are going to be trapped
+	// Now the hyp state is stored if we have a pending RAS SError it must
+	// affect the hyp. If physical IRQ interrupts are going to be trapped
 	// and there are already asynchronous exceptions pending then we defer
 	// the entry. The DSB isn't necessary before v8.2 as any SError would
 	// be fatal.
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index c441aabb8ab0..a45459d1c135 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -17,20 +17,6 @@
 
 	.text
 
-.macro do_el2_call
-	/*
-	 * Shuffle the parameters before calling the function
-	 * pointed to in x0. Assumes parameters in x[1,2,3].
-	 */
-	str	lr, [sp, #-16]!
-	mov	lr, x0
-	mov	x0, x1
-	mov	x1, x2
-	mov	x2, x3
-	blr	lr
-	ldr	lr, [sp], #16
-.endm
-
 el1_sync:				// Guest trapped into EL2
 
 	mrs	x0, esr_el2
@@ -44,11 +30,12 @@ el1_sync:				// Guest trapped into EL2
 	cbnz	x1, el1_hvc_guest	// called HVC
 
 	/* Here, we're pretty sure the host called HVC. */
-	ldp	x0, x1, [sp], #16
+	ldp	x0, x1, [sp]
 
 	/* Check for a stub HVC call */
 	cmp	x0, #HVC_STUB_HCALL_NR
-	b.hs	1f
+	b.hs	el1_trap
+	add	sp, sp, #16
 
 	/*
 	 * Compute the idmap address of __kvm_handle_stub_hvc and
@@ -64,16 +51,6 @@ el1_sync:				// Guest trapped into EL2
 	/* x5 = __pa(x5) */
 	sub	x5, x5, x6
 	br	x5
-
-1:
-	/*
-	 * Perform the EL2 call
-	 */
-	kern_hyp_va	x0
-	do_el2_call
-
-	eret
-	sb
 #endif /* __KVM_NVHE_HYPERVISOR__ */
 
 el1_hvc_guest:
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 14a774d1a35a..248f434c5de6 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -405,8 +405,8 @@ static inline bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu)
  */
 static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
-	/* Flush guest SErrors. */
-	if (ARM_SERROR_PENDING(*exit_code))
+	/* Flush guest SErrors but leave them pending for the host. */
+	if (ARM_SERROR_PENDING(*exit_code) && !vcpu->arch.ctxt.is_host)
 		__vaxorcize_serror();
 
 	if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ)
diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index 1f3a39efaa6e..d60cf9434895 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -7,7 +7,7 @@ asflags-y := -D__KVM_NVHE_HYPERVISOR__
 ccflags-y := -D__KVM_NVHE_HYPERVISOR__
 
 obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o \
-	 hyp-start.o
+	 hyp-start.o hyp-main.o
 obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
 	 ../fpsimd.o ../hyp-entry.o
 
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
new file mode 100644
index 000000000000..9b58d58d6cfa
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -0,0 +1,90 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 - Google Inc
+ * Author: Andrew Scull <ascull@google.com>
+ */
+
+#include <hyp/switch.h>
+
+#include <asm/kvm_asm.h>
+#include <asm/kvm_emulate.h>
+#include <asm/kvm_hyp.h>
+#include <asm/kvm_mmu.h>
+
+typedef unsigned long (*hypcall_fn_t)
+	(unsigned long, unsigned long, unsigned long);
+
+static void handle_trap(struct kvm_vcpu *host_vcpu) {
+	if (kvm_vcpu_trap_get_class(host_vcpu) == ESR_ELx_EC_HVC64) {
+		hypcall_fn_t func;
+		unsigned long ret;
+
+		/*
+		 * __kvm_call_hyp takes a pointer in the host address space and
+		 * up to three arguments.
+		 */
+		func = (hypcall_fn_t)kern_hyp_va(vcpu_get_reg(host_vcpu, 0));
+		ret = func(vcpu_get_reg(host_vcpu, 1),
+			   vcpu_get_reg(host_vcpu, 2),
+			   vcpu_get_reg(host_vcpu, 3));
+		vcpu_set_reg(host_vcpu, 0, ret);
+	}
+
+	/* Other traps are ignored. */
+}
+
+void __noreturn kvm_hyp_main(void)
+{
+	/* This CPU's host vcpu and hyp context */
+	struct kvm_vcpu *host_vcpu;
+	struct kvm_cpu_context *hyp_ctxt;
+
+	host_vcpu = __hyp_this_cpu_ptr(kvm_host_vcpu);
+	hyp_ctxt = &__hyp_this_cpu_ptr(kvm_host_data)->host_ctxt;
+
+	kvm_init_host_cpu_context(&host_vcpu->arch.ctxt);
+
+	host_vcpu->arch.flags = KVM_ARM64_HOST_VCPU_FLAGS;
+	host_vcpu->arch.workaround_flags = VCPU_WORKAROUND_2_FLAG;
+
+	while (true) {
+		u64 exit_code;
+
+		/*
+		 * Set the running cpu for the vectors to pass to __guest_exit
+		 * so it can get the cpu context.
+		 */
+		*__hyp_this_cpu_ptr(kvm_hyp_running_vcpu) = host_vcpu;
+
+		/*
+		 * Enter the host now that we feel like we're in charge.
+		 *
+		 * This should merge with __kvm_vcpu_run as host becomes more
+		 * vcpu-like.
+		 */
+		do {
+			exit_code = __guest_enter(host_vcpu, hyp_ctxt);
+		} while (fixup_guest_exit(host_vcpu, &exit_code));
+
+		switch (ARM_EXCEPTION_CODE(exit_code)) {
+		case ARM_EXCEPTION_TRAP:
+			handle_trap(host_vcpu);
+			break;
+		case ARM_EXCEPTION_IRQ:
+		case ARM_EXCEPTION_EL1_SERROR:
+		case ARM_EXCEPTION_IL:
+		default:
+			/*
+			 * These cases are not expected to be observed for the
+			 * host so, in the event that they are seen, take a
+			 * best-effort approach to keep things going.
+			 *
+			 * Ok, our expended effort comes to a grand total of
+			 * diddly squat but the internet protocol has gotten
+			 * away with the "best-effort" euphemism so we can too.
+			 */
+			break;
+		}
+
+	}
+}
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-start.S b/arch/arm64/kvm/hyp/nvhe/hyp-start.S
index 5f7fbcb57fd5..dd955e022963 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-start.S
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-start.S
@@ -6,11 +6,46 @@
 
 #include <linux/linkage.h>
 
+#include <asm/alternative.h>
 #include <asm/assembler.h>
 #include <asm/asm-offsets.h>
+#include <asm/kvm_arm.h>
 #include <asm/kvm_asm.h>
+#include <asm/kvm_ptrauth.h>
+
+#define CPU_LR_OFFSET (CPU_USER_PT_REGS + (8 * 30))
+
+/*
+ * Initialize ptrauth in the hyp ctxt by populating it with the keys of the
+ * host, which are the keys currently installed.
+ */
+.macro ptrauth_hyp_ctxt_init hyp_ctxt, reg1, reg2, reg3
+#ifdef CONFIG_ARM64_PTR_AUTH
+alternative_if_not ARM64_HAS_ADDRESS_AUTH
+	b	.L__skip_switch\@
+alternative_else_nop_endif
+	add	\reg1, \hyp_ctxt, #CPU_APIAKEYLO_EL1
+	ptrauth_save_state	\reg1, \reg2, \reg3
+.L__skip_switch\@:
+#endif
+.endm
 
 SYM_CODE_START(__kvm_hyp_start)
-	/* Hello, World! */
-	eret
+	get_host_ctxt	x0, x1
+
+	ptrauth_hyp_ctxt_init x0, x1, x2, x3
+
+	/* Prepare a tail call from __guest_exit to kvm_hyp_main */
+	adr	x1, kvm_hyp_main
+	str	x1, [x0, #CPU_LR_OFFSET]
+
+	/*
+	 * The host's x0 and x1 are expected on the stack but they will be
+	 * clobbered so there's no need to load real values.
+	 */
+	sub	sp, sp, 16
+
+	hyp_adr_this_cpu x1, kvm_host_vcpu, x0
+	mov	x0, #ARM_EXCEPTION_TRAP
+	b	__guest_exit
 SYM_CODE_END(__kvm_hyp_start)
-- 
2.27.0.389.gc38d7665816-goog


* [PATCH 09/37] smccc: Cast arguments to unsigned long
From: Andrew Scull @ 2020-07-15 18:44 UTC
  To: kvmarm; +Cc: maz, kernel-team

To avoid warnings about implicit casting, make the casts explicit. This
allows, for example, pointers to be used as arguments.
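
For example (illustrative only; SMCCC_FN_ID and dummy are stand-ins), a
pointer argument now lands in its register without an implicit
pointer-to-integer conversion warning:

	static int dummy;		/* any pointer-typed argument */
	struct arm_smccc_res res;

	arm_smccc_1_1_hvc(SMCCC_FN_ID, &dummy, &res);	/* r1 = (unsigned long)&dummy */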

Signed-off-by: Andrew Scull <ascull@google.com>
---
 include/linux/arm-smccc.h | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
index efcbde731f03..905d7744058b 100644
--- a/include/linux/arm-smccc.h
+++ b/include/linux/arm-smccc.h
@@ -255,7 +255,7 @@ asmlinkage void __arm_smccc_hvc(unsigned long a0, unsigned long a1,
 	typeof(a1) __a1 = a1;						\
 	struct arm_smccc_res   *___res = res;				\
 	register unsigned long r0 asm("r0") = (u32)a0;			\
-	register unsigned long r1 asm("r1") = __a1;			\
+	register unsigned long r1 asm("r1") = (unsigned long)__a1;	\
 	register unsigned long r2 asm("r2");				\
 	register unsigned long r3 asm("r3")
 
@@ -264,8 +264,8 @@ asmlinkage void __arm_smccc_hvc(unsigned long a0, unsigned long a1,
 	typeof(a2) __a2 = a2;						\
 	struct arm_smccc_res   *___res = res;				\
 	register unsigned long r0 asm("r0") = (u32)a0;			\
-	register unsigned long r1 asm("r1") = __a1;			\
-	register unsigned long r2 asm("r2") = __a2;			\
+	register unsigned long r1 asm("r1") = (unsigned long)__a1;	\
+	register unsigned long r2 asm("r2") = (unsigned long)__a2;	\
 	register unsigned long r3 asm("r3")
 
 #define __declare_arg_3(a0, a1, a2, a3, res)				\
@@ -274,29 +274,29 @@ asmlinkage void __arm_smccc_hvc(unsigned long a0, unsigned long a1,
 	typeof(a3) __a3 = a3;						\
 	struct arm_smccc_res   *___res = res;				\
 	register unsigned long r0 asm("r0") = (u32)a0;			\
-	register unsigned long r1 asm("r1") = __a1;			\
-	register unsigned long r2 asm("r2") = __a2;			\
-	register unsigned long r3 asm("r3") = __a3
+	register unsigned long r1 asm("r1") = (unsigned long)__a1;	\
+	register unsigned long r2 asm("r2") = (unsigned long)__a2;	\
+	register unsigned long r3 asm("r3") = (unsigned long)__a3
 
 #define __declare_arg_4(a0, a1, a2, a3, a4, res)			\
 	typeof(a4) __a4 = a4;						\
 	__declare_arg_3(a0, a1, a2, a3, res);				\
-	register unsigned long r4 asm("r4") = __a4
+	register unsigned long r4 asm("r4") = (unsigned long)__a4
 
 #define __declare_arg_5(a0, a1, a2, a3, a4, a5, res)			\
 	typeof(a5) __a5 = a5;						\
 	__declare_arg_4(a0, a1, a2, a3, a4, res);			\
-	register unsigned long r5 asm("r5") = __a5
+	register unsigned long r5 asm("r5") = (unsigned long)__a5
 
 #define __declare_arg_6(a0, a1, a2, a3, a4, a5, a6, res)		\
 	typeof(a6) __a6 = a6;						\
 	__declare_arg_5(a0, a1, a2, a3, a4, a5, res);			\
-	register unsigned long r6 asm("r6") = __a6
+	register unsigned long r6 asm("r6") = (unsigned long)__a6
 
 #define __declare_arg_7(a0, a1, a2, a3, a4, a5, a6, a7, res)		\
 	typeof(a7) __a7 = a7;						\
 	__declare_arg_6(a0, a1, a2, a3, a4, a5, a6, res);		\
-	register unsigned long r7 asm("r7") = __a7
+	register unsigned long r7 asm("r7") = (unsigned long)__a7
 
 #define ___declare_args(count, ...) __declare_arg_ ## count(__VA_ARGS__)
 #define __declare_args(count, ...)  ___declare_args(count, __VA_ARGS__)
-- 
2.27.0.389.gc38d7665816-goog


* [PATCH 10/37] KVM: arm64: nVHE: Migrate hyp interface to SMCCC
From: Andrew Scull @ 2020-07-15 18:44 UTC
  To: kvmarm; +Cc: maz, kernel-team

Rather than passing arbitrary function pointers to run at hyp, define
an equivalent set of SMCCC functions.

Since the SMCCC functions are strongly tied to the original function
prototypes, the host is not expected to ever call an invalid ID, but a
warning is raised if this does occur.
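
On the host side, a hyp call then goes out as a plain SMCCC fast call;
the kvm_call_hyp_nvhe macro below expands to roughly:

	struct arm_smccc_res res;

	arm_smccc_1_1_hvc(KVM_HOST_SMCCC_FUNC(__kvm_flush_vm_context), &res);
	WARN_ON(res.a0 != SMCCC_RET_SUCCESS);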

Signed-off-by: Andrew Scull <ascull@google.com>
Signed-off-by: David Brazdil <dbrazdil@google.com>
---
 arch/arm64/include/asm/kvm_asm.h   |  24 +++++++
 arch/arm64/include/asm/kvm_host.h  |  26 +++++---
 arch/arm64/kvm/arm.c               |   4 +-
 arch/arm64/kvm/hyp.S               |  25 ++-----
 arch/arm64/kvm/hyp/nvhe/hyp-main.c | 104 +++++++++++++++++++++++++----
 5 files changed, 139 insertions(+), 44 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index ebe9d582f360..ff27c18b9fd6 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -38,6 +38,30 @@
 
 #define __SMCCC_WORKAROUND_1_SMC_SZ 36
 
+#define KVM_HOST_SMCCC_ID(id)						\
+	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,				\
+			   ARM_SMCCC_SMC_64,				\
+			   ARM_SMCCC_OWNER_STANDARD_HYP,		\
+			   (id))
+
+#define KVM_HOST_SMCCC_FUNC(name) KVM_HOST_SMCCC_ID(__KVM_HOST_SMCCC_FUNC_##name)
+
+#define __KVM_HOST_SMCCC_FUNC___kvm_hyp_init			0
+#define __KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context		1
+#define __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_ipa		2
+#define __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid		3
+#define __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_local_vmid	4
+#define __KVM_HOST_SMCCC_FUNC___kvm_timer_set_cntvoff		5
+#define __KVM_HOST_SMCCC_FUNC___kvm_vcpu_run			6
+#define __KVM_HOST_SMCCC_FUNC___kvm_enable_ssbs			7
+#define __KVM_HOST_SMCCC_FUNC___vgic_v3_get_ich_vtr_el2		8
+#define __KVM_HOST_SMCCC_FUNC___vgic_v3_read_vmcr		9
+#define __KVM_HOST_SMCCC_FUNC___vgic_v3_write_vmcr		10
+#define __KVM_HOST_SMCCC_FUNC___vgic_v3_init_lrs		11
+#define __KVM_HOST_SMCCC_FUNC___kvm_get_mdcr_el2		12
+#define __KVM_HOST_SMCCC_FUNC___vgic_v3_save_aprs		13
+#define __KVM_HOST_SMCCC_FUNC___vgic_v3_restore_aprs		14
+
 #ifndef __ASSEMBLY__
 
 #include <linux/mm.h>
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 183312340d2c..5603d2f465eb 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -11,6 +11,7 @@
 #ifndef __ARM64_KVM_HOST_H__
 #define __ARM64_KVM_HOST_H__
 
+#include <linux/arm-smccc.h>
 #include <linux/bitmap.h>
 #include <linux/types.h>
 #include <linux/jump_label.h>
@@ -492,18 +493,21 @@ int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
 void kvm_arm_halt_guest(struct kvm *kvm);
 void kvm_arm_resume_guest(struct kvm *kvm);
 
-u64 __kvm_call_hyp(void *hypfn, ...);
+u64 __kvm_call_hyp_init(phys_addr_t pgd_ptr,
+			unsigned long hyp_stack_ptr,
+			unsigned long vector_ptr,
+			unsigned long start_hyp,
+			unsigned long tpidr_el2);
 
-#define kvm_call_hyp_nvhe(f, ...)					\
-	do {								\
-		DECLARE_KVM_NVHE_SYM(f);				\
-		__kvm_call_hyp(kvm_ksym_ref_nvhe(f), ##__VA_ARGS__);	\
-	} while(0)
-
-#define kvm_call_hyp_nvhe_ret(f, ...)					\
+#define kvm_call_hyp_nvhe(f, ...)						\
 	({								\
-		DECLARE_KVM_NVHE_SYM(f);				\
-		__kvm_call_hyp(kvm_ksym_ref_nvhe(f), ##__VA_ARGS__);	\
+		struct arm_smccc_res res;				\
+									\
+		arm_smccc_1_1_hvc(KVM_HOST_SMCCC_FUNC(f),		\
+				  ##__VA_ARGS__, &res);			\
+		WARN_ON(res.a0 != SMCCC_RET_SUCCESS);			\
+									\
+		res.a1;							\
 	})
 
 /*
@@ -529,7 +533,7 @@ u64 __kvm_call_hyp(void *hypfn, ...);
 			ret = f(__VA_ARGS__);				\
 			isb();						\
 		} else {						\
-			ret = kvm_call_hyp_nvhe_ret(f, ##__VA_ARGS__);	\
+			ret = kvm_call_hyp_nvhe(f, ##__VA_ARGS__);	\
 		}							\
 									\
 		ret;							\
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 8bd4630666ca..c42c00c8141a 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1290,8 +1290,8 @@ static void cpu_init_hyp_mode(void)
 	 * cpus_have_const_cap() wrapper.
 	 */
 	BUG_ON(!system_capabilities_finalized());
-	__kvm_call_hyp((void *)pgd_ptr, hyp_stack_ptr, vector_ptr, start_hyp,
-		       tpidr_el2);
+	__kvm_call_hyp_init(pgd_ptr, hyp_stack_ptr, vector_ptr, start_hyp,
+			    tpidr_el2);
 
 	/*
 	 * Disabling SSBD on a non-VHE system requires us to enable SSBS
diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
index 3c79a1124af2..0891625c8648 100644
--- a/arch/arm64/kvm/hyp.S
+++ b/arch/arm64/kvm/hyp.S
@@ -11,24 +11,13 @@
 #include <asm/cpufeature.h>
 
 /*
- * u64 __kvm_call_hyp(void *hypfn, ...);
- *
- * This is not really a variadic function in the classic C-way and care must
- * be taken when calling this to ensure parameters are passed in registers
- * only, since the stack will change between the caller and the callee.
- *
- * Call the function with the first argument containing a pointer to the
- * function you wish to call in Hyp mode, and subsequent arguments will be
- * passed as x0, x1, and x2 (a maximum of 3 arguments in addition to the
- * function pointer can be passed).  The function being called must be mapped
- * in Hyp mode (see init_hyp_mode in arch/arm/kvm/arm.c).  Return values are
- * passed in x0.
- *
- * A function pointer with a value less than 0xfff has a special meaning,
- * and is used to implement hyp stubs in the same way as in
- * arch/arm64/kernel/hyp_stub.S.
+ * u64 __kvm_call_hyp_init(phys_addr_t pgd_ptr,
+ * 			   unsigned long hyp_stack_ptr,
+ * 			   unsigned long vector_ptr,
+ * 			   unsigned long start_hyp,
+ * 			   unsigned long tpidr_el2);
  */
-SYM_FUNC_START(__kvm_call_hyp)
+SYM_FUNC_START(__kvm_call_hyp_init)
 	hvc	#0
 	ret
-SYM_FUNC_END(__kvm_call_hyp)
+SYM_FUNC_END(__kvm_call_hyp_init)
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 9b58d58d6cfa..7e7c074f8093 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -11,23 +11,101 @@
 #include <asm/kvm_hyp.h>
 #include <asm/kvm_mmu.h>
 
-typedef unsigned long (*hypcall_fn_t)
-	(unsigned long, unsigned long, unsigned long);
+#include <kvm/arm_hypercalls.h>
+
+static void handle_host_hcall(unsigned long func_id, struct kvm_vcpu *host_vcpu)
+{
+	unsigned long ret = 0;
+
+	switch (func_id) {
+	case KVM_HOST_SMCCC_FUNC(__kvm_flush_vm_context):
+		__kvm_flush_vm_context();
+		break;
+	case KVM_HOST_SMCCC_FUNC(__kvm_tlb_flush_vmid_ipa): {
+			struct kvm_s2_mmu *mmu =
+				(struct kvm_s2_mmu *)smccc_get_arg1(host_vcpu);
+			phys_addr_t ipa = smccc_get_arg2(host_vcpu);
+			int level = smccc_get_arg3(host_vcpu);
+
+			__kvm_tlb_flush_vmid_ipa(mmu, ipa, level);
+			break;
+		}
+	case KVM_HOST_SMCCC_FUNC(__kvm_tlb_flush_vmid): {
+			struct kvm_s2_mmu *mmu =
+				(struct kvm_s2_mmu *)smccc_get_arg1(host_vcpu);
+
+			__kvm_tlb_flush_vmid(mmu);
+			break;
+		}
+	case KVM_HOST_SMCCC_FUNC(__kvm_tlb_flush_local_vmid): {
+			struct kvm_s2_mmu *mmu =
+				(struct kvm_s2_mmu *)smccc_get_arg1(host_vcpu);
+
+			__kvm_tlb_flush_local_vmid(mmu);
+			break;
+		}
+	case KVM_HOST_SMCCC_FUNC(__kvm_timer_set_cntvoff): {
+			u64 cntvoff = smccc_get_arg1(host_vcpu);
+
+			__kvm_timer_set_cntvoff(cntvoff);
+			break;
+		}
+	case KVM_HOST_SMCCC_FUNC(__kvm_vcpu_run): {
+			struct kvm_vcpu *vcpu =
+				(struct kvm_vcpu *)smccc_get_arg1(host_vcpu);
+
+			ret = __kvm_vcpu_run(vcpu);
+			break;
+		}
+	case KVM_HOST_SMCCC_FUNC(__kvm_enable_ssbs):
+		__kvm_enable_ssbs();
+		break;
+	case KVM_HOST_SMCCC_FUNC(__vgic_v3_get_ich_vtr_el2):
+		ret = __vgic_v3_get_ich_vtr_el2();
+		break;
+	case KVM_HOST_SMCCC_FUNC(__vgic_v3_read_vmcr):
+		ret = __vgic_v3_read_vmcr();
+		break;
+	case KVM_HOST_SMCCC_FUNC(__vgic_v3_write_vmcr): {
+			u32 vmcr = smccc_get_arg1(host_vcpu);
+
+			__vgic_v3_write_vmcr(vmcr);
+			break;
+		}
+	case KVM_HOST_SMCCC_FUNC(__vgic_v3_init_lrs):
+		__vgic_v3_init_lrs();
+		break;
+	case KVM_HOST_SMCCC_FUNC(__kvm_get_mdcr_el2):
+		ret = __kvm_get_mdcr_el2();
+		break;
+	case KVM_HOST_SMCCC_FUNC(__vgic_v3_save_aprs): {
+			struct vgic_v3_cpu_if *cpu_if =
+				(struct vgic_v3_cpu_if *)smccc_get_arg1(host_vcpu);
+
+			__vgic_v3_save_aprs(cpu_if);
+			break;
+		}
+	case KVM_HOST_SMCCC_FUNC(__vgic_v3_restore_aprs): {
+			struct vgic_v3_cpu_if *cpu_if =
+				(struct vgic_v3_cpu_if *)smccc_get_arg1(host_vcpu);
+
+			__vgic_v3_restore_aprs(cpu_if);
+			break;
+		}
+	default:
+		/* Invalid host HVC. */
+		smccc_set_retval(host_vcpu, SMCCC_RET_NOT_SUPPORTED, 0, 0, 0);
+		return;
+	}
+
+	smccc_set_retval(host_vcpu, SMCCC_RET_SUCCESS, ret, 0, 0);
+}
 
 static void handle_trap(struct kvm_vcpu *host_vcpu) {
 	if (kvm_vcpu_trap_get_class(host_vcpu) == ESR_ELx_EC_HVC64) {
-		hypcall_fn_t func;
-		unsigned long ret;
+		unsigned long func_id = smccc_get_function(host_vcpu);
 
-		/*
-		 * __kvm_call_hyp takes a pointer in the host address space and
-		 * up to three arguments.
-		 */
-		func = (hypcall_fn_t)kern_hyp_va(vcpu_get_reg(host_vcpu, 0));
-		ret = func(vcpu_get_reg(host_vcpu, 1),
-			   vcpu_get_reg(host_vcpu, 2),
-			   vcpu_get_reg(host_vcpu, 3));
-		vcpu_set_reg(host_vcpu, 0, ret);
+		handle_host_hcall(func_id, host_vcpu);
 	}
 
 	/* Other traps are ignored. */
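
For illustration, a hedged sketch of a call through the reworked macro
above: the HVC carries the function ID derived from the symbol name in
x0, the handler's return value comes back in res.a1, and the macro WARNs
unless res.a0 is SMCCC_RET_SUCCESS. This mirrors the interface shown in
the kvm_host.h hunk rather than adding anything new.

  /* Sketch: host-side hyp call via SMCCC using the new macro. */
  u64 mdcr_el2 = kvm_call_hyp_nvhe(__kvm_get_mdcr_el2);
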
-- 
2.27.0.389.gc38d7665816-goog


* [PATCH 11/37] KVM: arm64: nVHE: Migrate hyp-init to SMCCC
  2020-07-15 18:44 [PATCH 00/37] Transform the host into a vCPU Andrew Scull
                   ` (9 preceding siblings ...)
  2020-07-15 18:44 ` [PATCH 10/37] KVM: arm64: nVHE: Migrate hyp interface to SMCCC Andrew Scull
@ 2020-07-15 18:44 ` Andrew Scull
  2020-07-15 18:44 ` [PATCH 12/37] KVM: arm64: nVHE: Fix pointers during SMCCC conversion Andrew Scull
                   ` (25 subsequent siblings)
  36 siblings, 0 replies; 45+ messages in thread
From: Andrew Scull @ 2020-07-15 18:44 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, kernel-team

To complete the transition to SMCCC, the hyp initialization is given a
function ID. This looks neater than comparing the hyp stub function IDs
to the page table physical address.

Some care is taken to only clobber x0-3 before the host context is saved
as only those registers can be clobbered according to SMCCC. Fortunately,
only a few acrobatics are needed. The possible new tpidr_el2 is moved to
the argument in x2 so that it can be stashed in tpidr_el2 early to free
up a scratch register. The page table configuration then makes use of
x0-2.
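
For reference, the host side of the call then looks like this (a sketch
condensed from the cpu_init_hyp_mode() hunk below, with the register
assignments from the updated hyp-init comment block annotated; nothing
here goes beyond the patch itself):

  struct arm_smccc_res res;

  /* x0 carries the SMCCC function ID; x1-x5 the init arguments. */
  arm_smccc_1_1_hvc(KVM_HOST_SMCCC_FUNC(__kvm_hyp_init),
                    pgd_ptr,       /* x1: HYP pgd */
                    tpidr_el2,     /* x2: per-CPU offset, stashed early */
                    start_hyp,     /* x3: __kvm_hyp_start HYP address */
                    hyp_stack_ptr, /* x4: HYP stack */
                    vector_ptr,    /* x5: HYP vectors */
                    &res);
  WARN_ON(res.a0 != SMCCC_RET_SUCCESS);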

Signed-off-by: Andrew Scull <ascull@google.com>
---
 arch/arm64/include/asm/kvm_host.h  |  6 ---
 arch/arm64/kvm/Makefile            |  2 +-
 arch/arm64/kvm/arm.c               |  7 +++-
 arch/arm64/kvm/hyp.S               | 23 -----------
 arch/arm64/kvm/hyp/nvhe/hyp-init.S | 63 +++++++++++++++++-------------
 arch/arm64/kvm/hyp/nvhe/hyp-main.c |  6 +++
 6 files changed, 48 insertions(+), 59 deletions(-)
 delete mode 100644 arch/arm64/kvm/hyp.S

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 5603d2f465eb..152c050e74a9 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -493,12 +493,6 @@ int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
 void kvm_arm_halt_guest(struct kvm *kvm);
 void kvm_arm_resume_guest(struct kvm *kvm);
 
-u64 __kvm_call_hyp_init(phys_addr_t pgd_ptr,
-			unsigned long hyp_stack_ptr,
-			unsigned long vector_ptr,
-			unsigned long start_hyp,
-			unsigned long tpidr_el2);
-
 #define kvm_call_hyp_nvhe(f, ...)						\
 	({								\
 		struct arm_smccc_res res;				\
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 99977c1972cc..1504c81fbf5d 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -13,7 +13,7 @@ obj-$(CONFIG_KVM) += hyp/
 kvm-y := $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o $(KVM)/eventfd.o \
 	 $(KVM)/vfio.o $(KVM)/irqchip.o \
 	 arm.o mmu.o mmio.o psci.o perf.o hypercalls.o pvtime.o \
-	 inject_fault.o regmap.o va_layout.o hyp.o handle_exit.o \
+	 inject_fault.o regmap.o va_layout.o handle_exit.o \
 	 guest.o debug.o reset.o sys_regs.o \
 	 vgic-sys-reg-v3.o fpsimd.o pmu.o \
 	 aarch32.o arch_timer.o \
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index c42c00c8141a..fe49203948d3 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1265,6 +1265,7 @@ static void cpu_init_hyp_mode(void)
 	unsigned long vector_ptr;
 	unsigned long start_hyp;
 	unsigned long tpidr_el2;
+	struct arm_smccc_res res;
 
 	/* Switch from the HYP stub to our own HYP init vector */
 	__hyp_set_vectors(kvm_get_idmap_vector());
@@ -1290,8 +1291,10 @@ static void cpu_init_hyp_mode(void)
 	 * cpus_have_const_cap() wrapper.
 	 */
 	BUG_ON(!system_capabilities_finalized());
-	__kvm_call_hyp_init(pgd_ptr, hyp_stack_ptr, vector_ptr, start_hyp,
-			    tpidr_el2);
+	arm_smccc_1_1_hvc(KVM_HOST_SMCCC_FUNC(__kvm_hyp_init),
+			  pgd_ptr, tpidr_el2, start_hyp, hyp_stack_ptr,
+			  vector_ptr, &res);
+	WARN_ON(res.a0 != SMCCC_RET_SUCCESS);
 
 	/*
 	 * Disabling SSBD on a non-VHE system requires us to enable SSBS
diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
deleted file mode 100644
index 0891625c8648..000000000000
--- a/arch/arm64/kvm/hyp.S
+++ /dev/null
@@ -1,23 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Copyright (C) 2012,2013 - ARM Ltd
- * Author: Marc Zyngier <marc.zyngier@arm.com>
- */
-
-#include <linux/linkage.h>
-
-#include <asm/alternative.h>
-#include <asm/assembler.h>
-#include <asm/cpufeature.h>
-
-/*
- * u64 __kvm_call_hyp_init(phys_addr_t pgd_ptr,
- * 			   unsigned long hyp_stack_ptr,
- * 			   unsigned long vector_ptr,
- * 			   unsigned long start_hyp,
- * 			   unsigned long tpidr_el2);
- */
-SYM_FUNC_START(__kvm_call_hyp_init)
-	hvc	#0
-	ret
-SYM_FUNC_END(__kvm_call_hyp_init)
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-init.S b/arch/arm64/kvm/hyp/nvhe/hyp-init.S
index 029c51365d03..df2a7904a83b 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-init.S
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-init.S
@@ -4,10 +4,12 @@
  * Author: Marc Zyngier <marc.zyngier@arm.com>
  */
 
+#include <linux/arm-smccc.h>
 #include <linux/linkage.h>
 
 #include <asm/assembler.h>
 #include <asm/kvm_arm.h>
+#include <asm/kvm_asm.h>
 #include <asm/kvm_mmu.h>
 #include <asm/pgtable-hwdef.h>
 #include <asm/sysreg.h>
@@ -43,31 +45,38 @@ __invalid:
 	b	.
 
 	/*
-	 * x0: HYP pgd
-	 * x1: HYP stack
-	 * x2: HYP vectors
+	 * x0: SMCCC function ID
+	 * x1: HYP pgd
+	 * x2: per-CPU offset
 	 * x3: __kvm_hyp_start HYP address
-	 * x4: per-CPU offset
+	 * x4: HYP stack
+	 * x5: HYP vectors
 	 */
 __do_hyp_init:
 	/* Check for a stub HVC call */
 	cmp	x0, #HVC_STUB_HCALL_NR
 	b.lo	__kvm_handle_stub_hvc
 
-	/* Set tpidr_el2 for use by HYP */
-	msr	tpidr_el2, x4
+	/* Set tpidr_el2 for use by HYP to free a register */
+	msr	tpidr_el2, x2
 
-	phys_to_ttbr x4, x0
+	mov	x2, #KVM_HOST_SMCCC_FUNC(__kvm_hyp_init)
+	cmp	x0, x2
+	b.eq	1f
+	mov	x0, #SMCCC_RET_NOT_SUPPORTED
+	eret
+
+1:	phys_to_ttbr x0, x1
 alternative_if ARM64_HAS_CNP
-	orr	x4, x4, #TTBR_CNP_BIT
+	orr	x0, x0, #TTBR_CNP_BIT
 alternative_else_nop_endif
-	msr	ttbr0_el2, x4
+	msr	ttbr0_el2, x0
 
-	mrs	x4, tcr_el1
-	mov_q	x5, TCR_EL2_MASK
-	and	x4, x4, x5
-	mov	x5, #TCR_EL2_RES1
-	orr	x4, x4, x5
+	mrs	x0, tcr_el1
+	mov_q	x1, TCR_EL2_MASK
+	and	x0, x0, x1
+	mov	x1, #TCR_EL2_RES1
+	orr	x0, x0, x1
 
 	/*
 	 * The ID map may be configured to use an extended virtual address
@@ -83,18 +92,18 @@ alternative_else_nop_endif
 	 *
 	 * So use the same T0SZ value we use for the ID map.
 	 */
-	ldr_l	x5, idmap_t0sz
-	bfi	x4, x5, TCR_T0SZ_OFFSET, TCR_TxSZ_WIDTH
+	ldr_l	x1, idmap_t0sz
+	bfi	x0, x1, TCR_T0SZ_OFFSET, TCR_TxSZ_WIDTH
 
 	/*
 	 * Set the PS bits in TCR_EL2.
 	 */
-	tcr_compute_pa_size x4, #TCR_EL2_PS_SHIFT, x5, x6
+	tcr_compute_pa_size x0, #TCR_EL2_PS_SHIFT, x1, x2
 
-	msr	tcr_el2, x4
+	msr	tcr_el2, x0
 
-	mrs	x4, mair_el1
-	msr	mair_el2, x4
+	mrs	x0, mair_el1
+	msr	mair_el2, x0
 	isb
 
 	/* Invalidate the stale TLBs from Bootloader */
@@ -106,19 +115,19 @@ alternative_else_nop_endif
 	 * as well as the EE bit on BE. Drop the A flag since the compiler
 	 * is allowed to generate unaligned accesses.
 	 */
-	mov_q	x4, (SCTLR_EL2_RES1 | (SCTLR_ELx_FLAGS & ~SCTLR_ELx_A))
-CPU_BE(	orr	x4, x4, #SCTLR_ELx_EE)
+	mov_q	x0, (SCTLR_EL2_RES1 | (SCTLR_ELx_FLAGS & ~SCTLR_ELx_A))
+CPU_BE(	orr	x0, x0, #SCTLR_ELx_EE)
 alternative_if ARM64_HAS_ADDRESS_AUTH
-	mov_q	x5, (SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | \
+	mov_q	x1, (SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | \
 		     SCTLR_ELx_ENDA | SCTLR_ELx_ENDB)
-	orr	x4, x4, x5
+	orr	x0, x0, x1
 alternative_else_nop_endif
-	msr	sctlr_el2, x4
+	msr	sctlr_el2, x0
 	isb
 
 	/* Set the stack and new vectors */
-	mov	sp, x1
-	msr	vbar_el2, x2
+	mov	sp, x4
+	msr	vbar_el2, x5
 
 	/* Leave the idmap posthaste and head over to __kvm_hyp_start */
 	br	x3
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 7e7c074f8093..4e3634cdfde6 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -125,6 +125,12 @@ void __noreturn kvm_hyp_main(void)
 	host_vcpu->arch.flags = KVM_ARM64_HOST_VCPU_FLAGS;
 	host_vcpu->arch.workaround_flags = VCPU_WORKAROUND_2_FLAG;
 
+	/*
+	 * The first time entering the host is seen by the host as the return
+	 * of the initialization HVC so mark it as successful.
+	 */
+	smccc_set_retval(host_vcpu, SMCCC_RET_SUCCESS, 0, 0, 0);
+
 	while (true) {
 		u64 exit_code;
 
-- 
2.27.0.389.gc38d7665816-goog


* [PATCH 12/37] KVM: arm64: nVHE: Fix pointers during SMCCC conversion
  2020-07-15 18:44 [PATCH 00/37] Transform the host into a vCPU Andrew Scull
                   ` (10 preceding siblings ...)
  2020-07-15 18:44 ` [PATCH 11/37] KVM: arm64: nVHE: Migrate hyp-init " Andrew Scull
@ 2020-07-15 18:44 ` Andrew Scull
  2020-07-15 18:44 ` [PATCH 13/37] KVM: arm64: Rename workaround 2 helpers Andrew Scull
                   ` (24 subsequent siblings)
  36 siblings, 0 replies; 45+ messages in thread
From: Andrew Scull @ 2020-07-15 18:44 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, kernel-team

The host need not concern itself with the pointer differences for the
hyp interfaces that are shared between VHE and nVHE, so leave it to the
hyp to handle.

As the SMCCC function IDs are converted into function calls, that is a
suitable place to also convert any pointer arguments into hyp pointers.
This also eases the reuse of the handlers in different contexts.
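
As a sketch of the resulting pattern (get_mmu_arg() is a hypothetical
helper for illustration only, not something this patch adds), the
translation happens exactly once, where the SMCCC argument is decoded:

  /* Hypothetical helper: decode and translate in a single step. */
  static struct kvm_s2_mmu *get_mmu_arg(struct kvm_vcpu *host_vcpu)
  {
          /* The host passes a kernel VA; hyp must use a hyp VA. */
          return kern_hyp_va((struct kvm_s2_mmu *)smccc_get_arg1(host_vcpu));
  }

Handlers such as __kvm_tlb_flush_vmid() then receive a pointer that is
already valid in the current context, which is what allows the
kern_hyp_va() calls to be dropped from tlb.c and switch.c below.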

Signed-off-by: Andrew Scull <ascull@google.com>
---
 arch/arm64/kvm/hyp/nvhe/hyp-main.c | 12 ++++++------
 arch/arm64/kvm/hyp/nvhe/switch.c   |  2 --
 arch/arm64/kvm/hyp/nvhe/tlb.c      |  2 --
 arch/arm64/kvm/vgic/vgic-v3.c      |  4 ++--
 4 files changed, 8 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 4e3634cdfde6..2d621bf5ac3e 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -27,21 +27,21 @@ static void handle_host_hcall(unsigned long func_id, struct kvm_vcpu *host_vcpu)
 			phys_addr_t ipa = smccc_get_arg2(host_vcpu);
 			int level = smccc_get_arg3(host_vcpu);
 
-			__kvm_tlb_flush_vmid_ipa(mmu, ipa, level);
+			__kvm_tlb_flush_vmid_ipa(kern_hyp_va(mmu), ipa, level);
 			break;
 		}
 	case KVM_HOST_SMCCC_FUNC(__kvm_tlb_flush_vmid): {
 			struct kvm_s2_mmu *mmu =
 				(struct kvm_s2_mmu *)smccc_get_arg1(host_vcpu);
 
-			__kvm_tlb_flush_vmid(mmu);
+			__kvm_tlb_flush_vmid(kern_hyp_va(mmu));
 			break;
 		}
 	case KVM_HOST_SMCCC_FUNC(__kvm_tlb_flush_local_vmid): {
 			struct kvm_s2_mmu *mmu =
 				(struct kvm_s2_mmu *)smccc_get_arg1(host_vcpu);
 
-			__kvm_tlb_flush_local_vmid(mmu);
+			__kvm_tlb_flush_local_vmid(kern_hyp_va(mmu));
 			break;
 		}
 	case KVM_HOST_SMCCC_FUNC(__kvm_timer_set_cntvoff): {
@@ -54,7 +54,7 @@ static void handle_host_hcall(unsigned long func_id, struct kvm_vcpu *host_vcpu)
 			struct kvm_vcpu *vcpu =
 				(struct kvm_vcpu *)smccc_get_arg1(host_vcpu);
 
-			ret = __kvm_vcpu_run(vcpu);
+			ret = __kvm_vcpu_run(kern_hyp_va(vcpu));
 			break;
 		}
 	case KVM_HOST_SMCCC_FUNC(__kvm_enable_ssbs):
@@ -82,14 +82,14 @@ static void handle_host_hcall(unsigned long func_id, struct kvm_vcpu *host_vcpu)
 			struct vgic_v3_cpu_if *cpu_if =
 				(struct vgic_v3_cpu_if *)smccc_get_arg1(host_vcpu);
 
-			__vgic_v3_save_aprs(cpu_if);
+			__vgic_v3_save_aprs(kern_hyp_va(cpu_if));
 			break;
 		}
 	case KVM_HOST_SMCCC_FUNC(__vgic_v3_restore_aprs): {
 			struct vgic_v3_cpu_if *cpu_if =
 				(struct vgic_v3_cpu_if *)smccc_get_arg1(host_vcpu);
 
-			__vgic_v3_restore_aprs(cpu_if);
+			__vgic_v3_restore_aprs(kern_hyp_va(cpu_if));
 			break;
 		}
 	default:
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 99cf90200bf7..d866bff8a142 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -173,8 +173,6 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 		pmr_sync();
 	}
 
-	vcpu = kern_hyp_va(vcpu);
-
 	host_ctxt = &__hyp_this_cpu_ptr(kvm_host_data)->host_ctxt;
 	*__hyp_this_cpu_ptr(kvm_hyp_running_vcpu) = vcpu;
 	guest_ctxt = &vcpu->arch.ctxt;
diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
index f31185272b50..e5f65f0da106 100644
--- a/arch/arm64/kvm/hyp/nvhe/tlb.c
+++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
@@ -56,7 +56,6 @@ void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
 	dsb(ishst);
 
 	/* Switch to requested VMID */
-	mmu = kern_hyp_va(mmu);
 	__tlb_switch_to_guest(mmu, &cxt);
 
 	/*
@@ -110,7 +109,6 @@ void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu)
 	dsb(ishst);
 
 	/* Switch to requested VMID */
-	mmu = kern_hyp_va(mmu);
 	__tlb_switch_to_guest(mmu, &cxt);
 
 	__tlbi(vmalls12e1is);
diff --git a/arch/arm64/kvm/vgic/vgic-v3.c b/arch/arm64/kvm/vgic/vgic-v3.c
index 76e2d85789ed..9cdf39a94a63 100644
--- a/arch/arm64/kvm/vgic/vgic-v3.c
+++ b/arch/arm64/kvm/vgic/vgic-v3.c
@@ -662,7 +662,7 @@ void vgic_v3_load(struct kvm_vcpu *vcpu)
 	if (likely(cpu_if->vgic_sre))
 		kvm_call_hyp(__vgic_v3_write_vmcr, cpu_if->vgic_vmcr);
 
-	kvm_call_hyp(__vgic_v3_restore_aprs, kern_hyp_va(cpu_if));
+	kvm_call_hyp(__vgic_v3_restore_aprs, cpu_if);
 
 	if (has_vhe())
 		__vgic_v3_activate_traps(cpu_if);
@@ -686,7 +686,7 @@ void vgic_v3_put(struct kvm_vcpu *vcpu)
 
 	vgic_v3_vmcr_sync(vcpu);
 
-	kvm_call_hyp(__vgic_v3_save_aprs, kern_hyp_va(cpu_if));
+	kvm_call_hyp(__vgic_v3_save_aprs, cpu_if);
 
 	if (has_vhe())
 		__vgic_v3_deactivate_traps(cpu_if);
-- 
2.27.0.389.gc38d7665816-goog


* [PATCH 13/37] KVM: arm64: Rename workaround 2 helpers
  2020-07-15 18:44 [PATCH 00/37] Transform the host into a vCPU Andrew Scull
                   ` (11 preceding siblings ...)
  2020-07-15 18:44 ` [PATCH 12/37] KVM: arm64: nVHE: Fix pointers during SMCCC conversion Andrew Scull
@ 2020-07-15 18:44 ` Andrew Scull
  2020-07-15 18:44 ` [PATCH 14/37] KVM: arm64: nVHE: Use __kvm_vcpu_run for the host vcpu Andrew Scull
                   ` (23 subsequent siblings)
  36 siblings, 0 replies; 45+ messages in thread
From: Andrew Scull @ 2020-07-15 18:44 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, kernel-team

Change the names from 'guest' and 'host' to terms that can also apply
when the host is treated as a vcpu and hyp has its own state.

'guest' becomes 'vcpu' as the host is now also a vcpu.

'host' becomes 'hyp' as the nVHE hyp is gaining independent state and the
VHE hyp _is_ the host anyway.

Signed-off-by: Andrew Scull <ascull@google.com>
---
 arch/arm64/kvm/hyp/include/hyp/switch.h | 12 ++++++------
 arch/arm64/kvm/hyp/nvhe/switch.c        |  4 ++--
 arch/arm64/kvm/hyp/vhe/switch.c         |  4 ++--
 3 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 248f434c5de6..fa62c2b21b4b 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -484,15 +484,15 @@ static inline bool __needs_ssbd_off(struct kvm_vcpu *vcpu)
 	if (!cpus_have_final_cap(ARM64_SSBD))
 		return false;
 
-	return !(vcpu->arch.workaround_flags & VCPU_WORKAROUND_2_FLAG);
+	return !kvm_arm_get_vcpu_workaround_2_flag(vcpu);
 }
 
-static inline void __set_guest_arch_workaround_state(struct kvm_vcpu *vcpu)
+static inline void __set_vcpu_arch_workaround_state(struct kvm_vcpu *vcpu)
 {
 #ifdef CONFIG_ARM64_SSBD
 	/*
-	 * The host runs with the workaround always present. If the
-	 * guest wants it disabled, so be it...
+	 * The hyp runs with the workaround always present. If the
+	 * vcpu wants it disabled, so be it...
 	 */
 	if (__needs_ssbd_off(vcpu) &&
 	    __hyp_this_cpu_read(arm64_ssbd_callback_required))
@@ -500,11 +500,11 @@ static inline void __set_guest_arch_workaround_state(struct kvm_vcpu *vcpu)
 #endif
 }
 
-static inline void __set_host_arch_workaround_state(struct kvm_vcpu *vcpu)
+static inline void __set_hyp_arch_workaround_state(struct kvm_vcpu *vcpu)
 {
 #ifdef CONFIG_ARM64_SSBD
 	/*
-	 * If the guest has disabled the workaround, bring it back on.
+	 * If the vcpu has disabled the workaround, bring it back on.
 	 */
 	if (__needs_ssbd_off(vcpu) &&
 	    __hyp_this_cpu_read(arm64_ssbd_callback_required))
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index d866bff8a142..81cdf33f92bc 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -200,7 +200,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 
 	__debug_switch_to_guest(vcpu);
 
-	__set_guest_arch_workaround_state(vcpu);
+	__set_vcpu_arch_workaround_state(vcpu);
 
 	do {
 		/* Jump in the fire! */
@@ -209,7 +209,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 		/* And we're baaack! */
 	} while (fixup_guest_exit(vcpu, &exit_code));
 
-	__set_host_arch_workaround_state(vcpu);
+	__set_hyp_arch_workaround_state(vcpu);
 
 	__sysreg_save_state_nvhe(guest_ctxt);
 	__sysreg32_save_state(vcpu);
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 366dc386224c..d871e79fceaf 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -131,7 +131,7 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 	sysreg_restore_guest_state_vhe(guest_ctxt);
 	__debug_switch_to_guest(vcpu);
 
-	__set_guest_arch_workaround_state(vcpu);
+	__set_vcpu_arch_workaround_state(vcpu);
 
 	do {
 		/* Jump in the fire! */
@@ -140,7 +140,7 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 		/* And we're baaack! */
 	} while (fixup_guest_exit(vcpu, &exit_code));
 
-	__set_host_arch_workaround_state(vcpu);
+	__set_hyp_arch_workaround_state(vcpu);
 
 	sysreg_save_guest_state_vhe(guest_ctxt);
 
-- 
2.27.0.389.gc38d7665816-goog


* [PATCH 14/37] KVM: arm64: nVHE: Use __kvm_vcpu_run for the host vcpu
  2020-07-15 18:44 [PATCH 00/37] Transform the host into a vCPU Andrew Scull
                   ` (12 preceding siblings ...)
  2020-07-15 18:44 ` [PATCH 13/37] KVM: arm64: Rename workaround 2 helpers Andrew Scull
@ 2020-07-15 18:44 ` Andrew Scull
  2020-07-15 18:44 ` [PATCH 15/37] KVM: arm64: Share some context save and restore macros Andrew Scull
                   ` (22 subsequent siblings)
  36 siblings, 0 replies; 45+ messages in thread
From: Andrew Scull @ 2020-07-15 18:44 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, kernel-team

Keeping the assumption that transitions will only be from the host to a
guest and vice versa, make __kvm_vcpu_run switch to the requested vcpu
and run it.

The context is only switched when it is not currently active allowing
the host to be run repeatedly when servicing HVCs.
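
Assembled from the hunks below, the resulting function reads roughly as
follows (a condensed sketch; the arch workaround state toggles around
the enter loop are elided):

  int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
  {
          struct kvm_cpu_context *host_ctxt =
                  &__hyp_this_cpu_ptr(kvm_host_data)->host_ctxt;
          struct kvm_vcpu *running_vcpu =
                  __hyp_this_cpu_read(kvm_hyp_running_vcpu);
          u64 exit_code;

          /* Switch context only if the requested vcpu isn't active. */
          if (running_vcpu != vcpu) {
                  /* Only host<->guest transitions are supported. */
                  if (!running_vcpu->arch.ctxt.is_host &&
                      !vcpu->arch.ctxt.is_host)
                          return ARM_EXCEPTION_IRQ;

                  __vcpu_switch_to(vcpu);
          }

          do {
                  exit_code = __guest_enter(vcpu, host_ctxt);
          } while (fixup_guest_exit(vcpu, &exit_code));

          return exit_code;
  }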

Signed-off-by: Andrew Scull <ascull@google.com>
---
 arch/arm64/kvm/hyp/nvhe/hyp-main.c | 22 ++-----
 arch/arm64/kvm/hyp/nvhe/switch.c   | 95 ++++++++++++++++++++++--------
 2 files changed, 74 insertions(+), 43 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 2d621bf5ac3e..213977634601 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -115,10 +115,8 @@ void __noreturn kvm_hyp_main(void)
 {
 	/* Set tpidr_el2 for use by HYP */
 	struct kvm_vcpu *host_vcpu;
-	struct kvm_cpu_context *hyp_ctxt;
 
 	host_vcpu = __hyp_this_cpu_ptr(kvm_host_vcpu);
-	hyp_ctxt = &__hyp_this_cpu_ptr(kvm_host_data)->host_ctxt;
 
 	kvm_init_host_cpu_context(&host_vcpu->arch.ctxt);
 
@@ -131,24 +129,14 @@ void __noreturn kvm_hyp_main(void)
 	 */
 	smccc_set_retval(host_vcpu, SMCCC_RET_SUCCESS, 0, 0, 0);
 
+	/* The host is already loaded so note it as the running vcpu. */
+	*__hyp_this_cpu_ptr(kvm_hyp_running_vcpu) = host_vcpu;
+
 	while (true) {
 		u64 exit_code;
 
-		/*
-		 * Set the running cpu for the vectors to pass to __guest_exit
-		 * so it can get the cpu context.
-		 */
-		*__hyp_this_cpu_ptr(kvm_hyp_running_vcpu) = host_vcpu;
-
-		/*
-		 * Enter the host now that we feel like we're in charge.
-		 *
-		 * This should merge with __kvm_vcpu_run as host becomes more
-		 * vcpu-like.
-		 */
-		do {
-			exit_code = __guest_enter(host_vcpu, hyp_ctxt);
-		} while (fixup_guest_exit(host_vcpu, &exit_code));
+		/* Enter the host now that we feel like we're in charge. */
+		exit_code = __kvm_vcpu_run(host_vcpu);
 
 		switch (ARM_EXCEPTION_CODE(exit_code)) {
 		case ARM_EXCEPTION_TRAP:
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 81cdf33f92bc..36140686e1d8 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -119,7 +119,7 @@ static void __hyp_vgic_restore_state(struct kvm_vcpu *vcpu)
 /**
  * Disable host events, enable guest events
  */
-static bool __pmu_switch_to_guest(struct kvm_cpu_context *host_ctxt)
+static void __pmu_switch_to_guest(struct kvm_cpu_context *host_ctxt)
 {
 	struct kvm_host_data *host;
 	struct kvm_pmu_events *pmu;
@@ -132,8 +132,6 @@ static bool __pmu_switch_to_guest(struct kvm_cpu_context *host_ctxt)
 
 	if (pmu->events_guest)
 		write_sysreg(pmu->events_guest, pmcntenset_el0);
-
-	return (pmu->events_host || pmu->events_guest);
 }
 
 /**
@@ -154,13 +152,10 @@ static void __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt)
 		write_sysreg(pmu->events_host, pmcntenset_el0);
 }
 
-/* Switch to the guest for legacy non-VHE systems */
-int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
+static void __kvm_vcpu_switch_to_guest(struct kvm_cpu_context *host_ctxt,
+				       struct kvm_vcpu *vcpu)
 {
-	struct kvm_cpu_context *host_ctxt;
-	struct kvm_cpu_context *guest_ctxt;
-	bool pmu_switch_needed;
-	u64 exit_code;
+	struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt;
 
 	/*
 	 * Having IRQs masked via PMR when entering the guest means the GIC
@@ -173,11 +168,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 		pmr_sync();
 	}
 
-	host_ctxt = &__hyp_this_cpu_ptr(kvm_host_data)->host_ctxt;
-	*__hyp_this_cpu_ptr(kvm_hyp_running_vcpu) = vcpu;
-	guest_ctxt = &vcpu->arch.ctxt;
-
-	pmu_switch_needed = __pmu_switch_to_guest(host_ctxt);
+	__pmu_switch_to_guest(host_ctxt);
 
 	__sysreg_save_state_nvhe(host_ctxt);
 
@@ -199,17 +190,13 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	__timer_enable_traps(vcpu);
 
 	__debug_switch_to_guest(vcpu);
+}
 
-	__set_vcpu_arch_workaround_state(vcpu);
-
-	do {
-		/* Jump in the fire! */
-		exit_code = __guest_enter(vcpu, host_ctxt);
-
-		/* And we're baaack! */
-	} while (fixup_guest_exit(vcpu, &exit_code));
-
-	__set_hyp_arch_workaround_state(vcpu);
+static void __kvm_vcpu_switch_to_host(struct kvm_cpu_context *host_ctxt,
+				      struct kvm_vcpu *host_vcpu,
+				      struct kvm_vcpu *vcpu)
+{
+	struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt;
 
 	__sysreg_save_state_nvhe(guest_ctxt);
 	__sysreg32_save_state(vcpu);
@@ -230,12 +217,68 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	 */
 	__debug_switch_to_host(vcpu);
 
-	if (pmu_switch_needed)
-		__pmu_switch_to_host(host_ctxt);
+	__pmu_switch_to_host(host_ctxt);
 
 	/* Returning to host will clear PSR.I, remask PMR if needed */
 	if (system_uses_irq_prio_masking())
 		gic_write_pmr(GIC_PRIO_IRQOFF);
+}
+
+static void __vcpu_switch_to(struct kvm_vcpu *vcpu)
+{
+	struct kvm_cpu_context *host_ctxt;
+	struct kvm_vcpu *running_vcpu;
+
+	/*
+	 * Restoration is not yet pure so it still makes use of the previously
+	 * running vcpu.
+	 */
+	host_ctxt = &__hyp_this_cpu_ptr(kvm_host_data)->host_ctxt;
+	running_vcpu = __hyp_this_cpu_read(kvm_hyp_running_vcpu);
+
+	if (vcpu->arch.ctxt.is_host)
+		__kvm_vcpu_switch_to_host(host_ctxt, vcpu, running_vcpu);
+	else
+		__kvm_vcpu_switch_to_guest(host_ctxt, vcpu);
+
+	*__hyp_this_cpu_ptr(kvm_hyp_running_vcpu) = vcpu;
+}
+
+/* Switch to the guest for legacy non-VHE systems */
+int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
+{
+	struct kvm_cpu_context *host_ctxt;
+	struct kvm_vcpu *running_vcpu;
+	u64 exit_code;
+
+	host_ctxt = &__hyp_this_cpu_ptr(kvm_host_data)->host_ctxt;
+	running_vcpu = __hyp_this_cpu_read(kvm_hyp_running_vcpu);
+
+	if (running_vcpu != vcpu) {
+		if (!running_vcpu->arch.ctxt.is_host &&
+		    !vcpu->arch.ctxt.is_host) {
+			/*
+			 * There are still assumptions that the switch will
+			 * always be between a guest and the host so double
+			 * check that is the case. If it isn't, pretending
+			 * there was an interrupt is a harmless way to bail.
+			 */
+			return ARM_EXCEPTION_IRQ;
+		}
+
+		__vcpu_switch_to(vcpu);
+	}
+
+	__set_vcpu_arch_workaround_state(vcpu);
+
+	do {
+		/* Jump in the fire! */
+		exit_code = __guest_enter(vcpu, host_ctxt);
+
+		/* And we're baaack! */
+	} while (fixup_guest_exit(vcpu, &exit_code));
+
+	__set_hyp_arch_workaround_state(vcpu);
 
 	return exit_code;
 }
-- 
2.27.0.389.gc38d7665816-goog


* [PATCH 15/37] KVM: arm64: Share some context save and restore macros
  2020-07-15 18:44 [PATCH 00/37] Transform the host into a vCPU Andrew Scull
                   ` (13 preceding siblings ...)
  2020-07-15 18:44 ` [PATCH 14/37] KVM: arm64: nVHE: Use __kvm_vcpu_run for the host vcpu Andrew Scull
@ 2020-07-15 18:44 ` Andrew Scull
  2020-07-15 18:44 ` [PATCH 16/37] KVM: arm64: nVHE: Handle stub HVCs in the host loop Andrew Scull
                   ` (21 subsequent siblings)
  36 siblings, 0 replies; 45+ messages in thread
From: Andrew Scull @ 2020-07-15 18:44 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, kernel-team

These macros should be used to avoid repeating the context save and
restore sequences.

Signed-off-by: Andrew Scull <ascull@google.com>
---
 arch/arm64/include/asm/kvm_asm.h    | 39 +++++++++++++++++++++++++++++
 arch/arm64/kvm/hyp/entry.S          | 39 -----------------------------
 arch/arm64/kvm/hyp/nvhe/hyp-start.S |  2 --
 3 files changed, 39 insertions(+), 41 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index ff27c18b9fd6..1b2c718fa58f 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -212,6 +212,45 @@ extern char __smccc_workaround_1_smc[__SMCCC_WORKAROUND_1_SMC_SZ];
 	add	\reg, \reg, #HOST_DATA_CONTEXT
 .endm
 
+#define CPU_XREG_OFFSET(x)	(CPU_USER_PT_REGS + 8*x)
+#define CPU_LR_OFFSET		CPU_XREG_OFFSET(30)
+#define CPU_SP_EL0_OFFSET	(CPU_LR_OFFSET + 8)
+
+/*
+ * We treat x18 as callee-saved as the host may use it as a platform
+ * register (e.g. for shadow call stack).
+ */
+.macro save_callee_saved_regs ctxt
+	str	x18,      [\ctxt, #CPU_XREG_OFFSET(18)]
+	stp	x19, x20, [\ctxt, #CPU_XREG_OFFSET(19)]
+	stp	x21, x22, [\ctxt, #CPU_XREG_OFFSET(21)]
+	stp	x23, x24, [\ctxt, #CPU_XREG_OFFSET(23)]
+	stp	x25, x26, [\ctxt, #CPU_XREG_OFFSET(25)]
+	stp	x27, x28, [\ctxt, #CPU_XREG_OFFSET(27)]
+	stp	x29, lr,  [\ctxt, #CPU_XREG_OFFSET(29)]
+.endm
+
+.macro restore_callee_saved_regs ctxt
+	// We require \ctxt is not x18-x28
+	ldr	x18,      [\ctxt, #CPU_XREG_OFFSET(18)]
+	ldp	x19, x20, [\ctxt, #CPU_XREG_OFFSET(19)]
+	ldp	x21, x22, [\ctxt, #CPU_XREG_OFFSET(21)]
+	ldp	x23, x24, [\ctxt, #CPU_XREG_OFFSET(23)]
+	ldp	x25, x26, [\ctxt, #CPU_XREG_OFFSET(25)]
+	ldp	x27, x28, [\ctxt, #CPU_XREG_OFFSET(27)]
+	ldp	x29, lr,  [\ctxt, #CPU_XREG_OFFSET(29)]
+.endm
+
+.macro save_sp_el0 ctxt, tmp
+	mrs	\tmp,	sp_el0
+	str	\tmp,	[\ctxt, #CPU_SP_EL0_OFFSET]
+.endm
+
+.macro restore_sp_el0 ctxt, tmp
+	ldr	\tmp,	  [\ctxt, #CPU_SP_EL0_OFFSET]
+	msr	sp_el0, \tmp
+.endm
+
 #endif
 
 #endif /* __ARM_KVM_ASM_H__ */
diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index da349c152791..63d484059c01 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -7,7 +7,6 @@
 #include <linux/linkage.h>
 
 #include <asm/alternative.h>
-#include <asm/asm-offsets.h>
 #include <asm/assembler.h>
 #include <asm/fpsimdmacros.h>
 #include <asm/kvm.h>
@@ -16,46 +15,8 @@
 #include <asm/kvm_mmu.h>
 #include <asm/kvm_ptrauth.h>
 
-#define CPU_XREG_OFFSET(x)	(CPU_USER_PT_REGS + 8*x)
-#define CPU_SP_EL0_OFFSET	(CPU_XREG_OFFSET(30) + 8)
-
 	.text
 
-/*
- * We treat x18 as callee-saved as the host may use it as a platform
- * register (e.g. for shadow call stack).
- */
-.macro save_callee_saved_regs ctxt
-	str	x18,      [\ctxt, #CPU_XREG_OFFSET(18)]
-	stp	x19, x20, [\ctxt, #CPU_XREG_OFFSET(19)]
-	stp	x21, x22, [\ctxt, #CPU_XREG_OFFSET(21)]
-	stp	x23, x24, [\ctxt, #CPU_XREG_OFFSET(23)]
-	stp	x25, x26, [\ctxt, #CPU_XREG_OFFSET(25)]
-	stp	x27, x28, [\ctxt, #CPU_XREG_OFFSET(27)]
-	stp	x29, lr,  [\ctxt, #CPU_XREG_OFFSET(29)]
-.endm
-
-.macro restore_callee_saved_regs ctxt
-	// We require \ctxt is not x18-x28
-	ldr	x18,      [\ctxt, #CPU_XREG_OFFSET(18)]
-	ldp	x19, x20, [\ctxt, #CPU_XREG_OFFSET(19)]
-	ldp	x21, x22, [\ctxt, #CPU_XREG_OFFSET(21)]
-	ldp	x23, x24, [\ctxt, #CPU_XREG_OFFSET(23)]
-	ldp	x25, x26, [\ctxt, #CPU_XREG_OFFSET(25)]
-	ldp	x27, x28, [\ctxt, #CPU_XREG_OFFSET(27)]
-	ldp	x29, lr,  [\ctxt, #CPU_XREG_OFFSET(29)]
-.endm
-
-.macro save_sp_el0 ctxt, tmp
-	mrs	\tmp,	sp_el0
-	str	\tmp,	[\ctxt, #CPU_SP_EL0_OFFSET]
-.endm
-
-.macro restore_sp_el0 ctxt, tmp
-	ldr	\tmp,	  [\ctxt, #CPU_SP_EL0_OFFSET]
-	msr	sp_el0, \tmp
-.endm
-
 /*
  * u64 __guest_enter(struct kvm_vcpu *vcpu,
  *		     struct kvm_cpu_context *host_ctxt);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-start.S b/arch/arm64/kvm/hyp/nvhe/hyp-start.S
index dd955e022963..d7744dcfd184 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-start.S
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-start.S
@@ -13,8 +13,6 @@
 #include <asm/kvm_asm.h>
 #include <asm/kvm_ptrauth.h>
 
-#define CPU_LR_OFFSET (CPU_USER_PT_REGS + (8 * 30))
-
 /*
  * Initialize ptrauth in the hyp ctxt by populating it with the keys of the
  * host, which are the keys currently installed.
-- 
2.27.0.389.gc38d7665816-goog


* [PATCH 16/37] KVM: arm64: nVHE: Handle stub HVCs in the host loop
  2020-07-15 18:44 [PATCH 00/37] Transform the host into a vCPU Andrew Scull
                   ` (14 preceding siblings ...)
  2020-07-15 18:44 ` [PATCH 15/37] KVM: arm64: Share some context save and restore macros Andrew Scull
@ 2020-07-15 18:44 ` Andrew Scull
  2020-07-15 18:44 ` [PATCH 17/37] KVM: arm64: nVHE: Store host sysregs in host vcpu Andrew Scull
                   ` (20 subsequent siblings)
  36 siblings, 0 replies; 45+ messages in thread
From: Andrew Scull @ 2020-07-15 18:44 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, kernel-team

Since the host is called from the hyp run loop, we can use that context
to identify calls from the host rather than checking VTTBR_EL2, which
will be used for the host's stage 2 in future.

Moving this to C also allows for more flexibility, e.g. in applying
policies, such as forbidding HVC_RESET_VECTORS, based on the current
state of the hypervisor, and removes the special casing for nVHE in
the exception handler.

Control over arch workaround 2 is made available to the host, the same
as any other vcpu.
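
The address computation that moves into C boils down to the following
sketch (the actual patch loads the handler's kernel VA from a literal
pool with inline asm, since taking the symbol's address directly in hyp
code would yield a hyp VA):

  /*
   * The stub handlers run with the MMU off, so they must be entered at
   * their idmapped physical address. kimage_voffset is the offset
   * between kernel VAs and PAs, hence the subtraction.
   */
  stub_hvc_handler = (stub_hvc_handler_t)
          (stub_hvc_handler_kern_va - kimage_voffset);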

Signed-off-by: Andrew Scull <ascull@google.com>
---
 arch/arm64/kvm/hyp/hyp-entry.S     | 36 ++-------------------
 arch/arm64/kvm/hyp/nvhe/hyp-init.S | 15 +++++++--
 arch/arm64/kvm/hyp/nvhe/hyp-main.c | 50 +++++++++++++++++++++++++++++-
 3 files changed, 65 insertions(+), 36 deletions(-)

diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index a45459d1c135..3113665ce912 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -17,50 +17,20 @@
 
 	.text
 
-el1_sync:				// Guest trapped into EL2
-
+el1_sync:
 	mrs	x0, esr_el2
 	lsr	x0, x0, #ESR_ELx_EC_SHIFT
 	cmp	x0, #ESR_ELx_EC_HVC64
 	ccmp	x0, #ESR_ELx_EC_HVC32, #4, ne
 	b.ne	el1_trap
 
-#ifdef __KVM_NVHE_HYPERVISOR__
-	mrs	x1, vttbr_el2		// If vttbr is valid, the guest
-	cbnz	x1, el1_hvc_guest	// called HVC
-
-	/* Here, we're pretty sure the host called HVC. */
-	ldp	x0, x1, [sp]
-
-	/* Check for a stub HVC call */
-	cmp	x0, #HVC_STUB_HCALL_NR
-	b.hs	el1_trap
-	add	sp, sp, #16
-
-	/*
-	 * Compute the idmap address of __kvm_handle_stub_hvc and
-	 * jump there. Since we use kimage_voffset, do not use the
-	 * HYP VA for __kvm_handle_stub_hvc, but the kernel VA instead
-	 * (by loading it from the constant pool).
-	 *
-	 * Preserve x0-x4, which may contain stub parameters.
-	 */
-	ldr	x5, =__kvm_handle_stub_hvc
-	ldr_l	x6, kimage_voffset
-
-	/* x5 = __pa(x5) */
-	sub	x5, x5, x6
-	br	x5
-#endif /* __KVM_NVHE_HYPERVISOR__ */
-
-el1_hvc_guest:
 	/*
 	 * Fastest possible path for ARM_SMCCC_ARCH_WORKAROUND_1.
 	 * The workaround has already been applied on the host,
 	 * so let's quickly get back to the guest. We don't bother
 	 * restoring x1, as it can be clobbered anyway.
 	 */
-	ldr	x1, [sp]				// Guest's x0
+	ldr	x1, [sp]				// vcpu's x0
 	eor	w1, w1, #ARM_SMCCC_ARCH_WORKAROUND_1
 	cbz	w1, wa_epilogue
 
@@ -77,7 +47,7 @@ alternative_cb_end
 	ldr	x0, [x2, #VCPU_WORKAROUND_FLAGS]
 
 	// Sanitize the argument and update the guest flags
-	ldr	x1, [sp, #8]			// Guest's x1
+	ldr	x1, [sp, #8]			// vcpu's x1
 	clz	w1, w1				// Murphy's device:
 	lsr	w1, w1, #5			// w1 = !!w1 without using
 	eor	w1, w1, #1			// the flags...
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-init.S b/arch/arm64/kvm/hyp/nvhe/hyp-init.S
index df2a7904a83b..43e1ee6178d4 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-init.S
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-init.S
@@ -15,6 +15,9 @@
 #include <asm/sysreg.h>
 #include <asm/virt.h>
 
+#include <asm/kvm_asm.h>
+#include <asm/kvm_ptrauth.h>
+
 	.text
 	.pushsection	.hyp.idmap.text, "ax"
 
@@ -137,6 +140,7 @@ SYM_CODE_START(__kvm_handle_stub_hvc)
 	cmp	x0, #HVC_SOFT_RESTART
 	b.ne	1f
 
+SYM_INNER_LABEL(__kvm_handle_stub_hvc_soft_restart, SYM_L_GLOBAL)
 	/* This is where we're about to jump, staying at EL2 */
 	msr	elr_el2, x1
 	mov	x0, #(PSR_F_BIT | PSR_I_BIT | PSR_A_BIT | PSR_D_BIT | PSR_MODE_EL2h)
@@ -146,11 +150,18 @@ SYM_CODE_START(__kvm_handle_stub_hvc)
 	mov	x0, x2
 	mov	x1, x3
 	mov	x2, x4
-	b	reset
+	b	2f
 
 1:	cmp	x0, #HVC_RESET_VECTORS
 	b.ne	1f
-reset:
+
+SYM_INNER_LABEL(__kvm_handle_stub_hvc_reset_vectors, SYM_L_GLOBAL)
+	/* Restore host's ptrauth, sp_el0 and callee saved regs */
+	ptrauth_switch_to_guest x5, x6, x7, x8
+	restore_sp_el0 x5, x6
+	restore_callee_saved_regs x5
+
+2:
 	/*
 	 * Reset kvm back to the hyp stub. Do not clobber x0-x4 in
 	 * case we are coming via HVC_SOFT_RESTART.
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 213977634601..d013586e3a03 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -13,6 +13,51 @@
 
 #include <kvm/arm_hypercalls.h>
 
+typedef unsigned long (*stub_hvc_handler_t)
+	(unsigned long, unsigned long, unsigned long, unsigned long,
+	 unsigned long, struct kvm_cpu_context *);
+
+extern char __kvm_handle_stub_hvc_soft_restart[];
+extern char __kvm_handle_stub_hvc_reset_vectors[];
+
+static void handle_stub_hvc(unsigned long func_id, struct kvm_vcpu *host_vcpu)
+{
+	char *stub_hvc_handler_kern_va;
+	__noreturn stub_hvc_handler_t stub_hvc_handler;
+
+	/*
+	 * The handlers of the supported stub HVCs disable the MMU so they must
+	 * be called in the idmap. We compute the idmap address by subtracting
+	 * kimage_voffset from the kernel VA handler.
+	 */
+	switch (func_id) {
+	case HVC_SOFT_RESTART:
+		asm volatile("ldr %0, =%1"
+			     : "=r" (stub_hvc_handler_kern_va)
+			     : "S" (__kvm_handle_stub_hvc_soft_restart));
+		break;
+	case HVC_RESET_VECTORS:
+		asm volatile("ldr %0, =%1"
+			     : "=r" (stub_hvc_handler_kern_va)
+			     : "S" (__kvm_handle_stub_hvc_reset_vectors));
+		break;
+	default:
+		vcpu_set_reg(host_vcpu, 0, HVC_STUB_ERR);
+		return;
+	}
+
+	stub_hvc_handler = (__noreturn stub_hvc_handler_t)
+		(stub_hvc_handler_kern_va - kimage_voffset);
+
+	/* Preserve x0-x4, which may contain stub parameters. */
+	stub_hvc_handler(func_id,
+			 vcpu_get_reg(host_vcpu, 1),
+			 vcpu_get_reg(host_vcpu, 2),
+			 vcpu_get_reg(host_vcpu, 3),
+			 vcpu_get_reg(host_vcpu, 4),
+			 &host_vcpu->arch.ctxt);
+}
+
 static void handle_host_hcall(unsigned long func_id, struct kvm_vcpu *host_vcpu)
 {
 	unsigned long ret = 0;
@@ -105,7 +150,10 @@ static void handle_trap(struct kvm_vcpu *host_vcpu) {
 	if (kvm_vcpu_trap_get_class(host_vcpu) == ESR_ELx_EC_HVC64) {
 		unsigned long func_id = smccc_get_function(host_vcpu);
 
-		handle_host_hcall(func_id, host_vcpu);
+		if (func_id < HVC_STUB_HCALL_NR)
+			handle_stub_hvc(func_id, host_vcpu);
+		else
+			handle_host_hcall(func_id, host_vcpu);
 	}
 
 	/* Other traps are ignored. */
-- 
2.27.0.389.gc38d7665816-goog


* [PATCH 17/37] KVM: arm64: nVHE: Store host sysregs in host vcpu
  2020-07-15 18:44 [PATCH 00/37] Transform the host into a vCPU Andrew Scull
                   ` (15 preceding siblings ...)
  2020-07-15 18:44 ` [PATCH 16/37] KVM: arm64: nVHE: Handle stub HVCs in the host loop Andrew Scull
@ 2020-07-15 18:44 ` Andrew Scull
  2020-07-15 18:44 ` [PATCH 18/37] KVM: arm64: nVHE: Access pmu_events directly in kvm_host_data Andrew Scull
                   ` (19 subsequent siblings)
  36 siblings, 0 replies; 45+ messages in thread
From: Andrew Scull @ 2020-07-15 18:44 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, kernel-team

These registers were previously stored in kvm_host_data; however, there
is now a vcpu specifically for the host state waiting to be used.

Signed-off-by: Andrew Scull <ascull@google.com>
---
 arch/arm64/kvm/hyp/nvhe/switch.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 36140686e1d8..ac19f98bdb60 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -153,6 +153,7 @@ static void __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt)
 }
 
 static void __kvm_vcpu_switch_to_guest(struct kvm_cpu_context *host_ctxt,
+				       struct kvm_vcpu *host_vcpu,
 				       struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt;
@@ -170,7 +171,7 @@ static void __kvm_vcpu_switch_to_guest(struct kvm_cpu_context *host_ctxt,
 
 	__pmu_switch_to_guest(host_ctxt);
 
-	__sysreg_save_state_nvhe(host_ctxt);
+	__sysreg_save_state_nvhe(&host_vcpu->arch.ctxt);
 
 	/*
 	 * We must restore the 32-bit state before the sysregs, thanks
@@ -206,7 +207,7 @@ static void __kvm_vcpu_switch_to_host(struct kvm_cpu_context *host_ctxt,
 	__deactivate_traps(vcpu);
 	__deactivate_vm(vcpu);
 
-	__sysreg_restore_state_nvhe(host_ctxt);
+	__sysreg_restore_state_nvhe(&host_vcpu->arch.ctxt);
 
 	if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED)
 		__fpsimd_save_fpexc32(vcpu);
@@ -239,7 +240,7 @@ static void __vcpu_switch_to(struct kvm_vcpu *vcpu)
 	if (vcpu->arch.ctxt.is_host)
 		__kvm_vcpu_switch_to_host(host_ctxt, vcpu, running_vcpu);
 	else
-		__kvm_vcpu_switch_to_guest(host_ctxt, vcpu);
+		__kvm_vcpu_switch_to_guest(host_ctxt, running_vcpu, vcpu);
 
 	*__hyp_this_cpu_ptr(kvm_hyp_running_vcpu) = vcpu;
 }
@@ -288,14 +289,15 @@ void __noreturn hyp_panic(struct kvm_cpu_context *host_ctxt)
 	u64 spsr = read_sysreg_el2(SYS_SPSR);
 	u64 elr = read_sysreg_el2(SYS_ELR);
 	u64 par = read_sysreg(par_el1);
+	struct kvm_vcpu *host_vcpu = __hyp_this_cpu_ptr(kvm_host_vcpu);
 	struct kvm_vcpu *vcpu = __hyp_this_cpu_read(kvm_hyp_running_vcpu);
 	unsigned long str_va;
 
-	if (read_sysreg(vttbr_el2)) {
+	if (vcpu != host_vcpu) {
 		__timer_disable_traps(vcpu);
 		__deactivate_traps(vcpu);
 		__deactivate_vm(vcpu);
-		__sysreg_restore_state_nvhe(host_ctxt);
+		__sysreg_restore_state_nvhe(&host_vcpu->arch.ctxt);
 	}
 
 	/*
-- 
2.27.0.389.gc38d7665816-goog


* [PATCH 18/37] KVM: arm64: nVHE: Access pmu_events directly in kvm_host_data
  2020-07-15 18:44 [PATCH 00/37] Transform the host into a vCPU Andrew Scull
                   ` (16 preceding siblings ...)
  2020-07-15 18:44 ` [PATCH 17/37] KVM: arm64: nVHE: Store host sysregs in host vcpu Andrew Scull
@ 2020-07-15 18:44 ` Andrew Scull
  2020-07-15 18:44 ` [PATCH 19/37] KVM: arm64: nVHE: Drop host_ctxt argument for context switching Andrew Scull
                   ` (18 subsequent siblings)
  36 siblings, 0 replies; 45+ messages in thread
From: Andrew Scull @ 2020-07-15 18:44 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, kernel-team

The assumption that kvm_host_data is the container of the host
kvm_cpu_context will not be true when the migration into kvm_host_vcpu
has completed. Fortunately, a reference to kvm_host_data can easily be
gained as it is a percpu variable.
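
A before/after sketch of the access pattern (the helper names are
hypothetical, used only to contrast the two forms):

  /* Before: only valid while host_ctxt is embedded in kvm_host_data. */
  static struct kvm_pmu_events *pmu_events_via_ctxt(struct kvm_cpu_context *host_ctxt)
  {
          struct kvm_host_data *host =
                  container_of(host_ctxt, struct kvm_host_data, host_ctxt);

          return &host->pmu_events;
  }

  /* After: reach the percpu kvm_host_data directly; no context needed. */
  static struct kvm_pmu_events *pmu_events_percpu(void)
  {
          return &__hyp_this_cpu_ptr(kvm_host_data)->pmu_events;
  }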

Signed-off-by: Andrew Scull <ascull@google.com>
---
 arch/arm64/kvm/hyp/nvhe/switch.c | 20 ++++++--------------
 1 file changed, 6 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index ac19f98bdb60..8cd280a18dc2 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -119,13 +119,9 @@ static void __hyp_vgic_restore_state(struct kvm_vcpu *vcpu)
 /**
  * Disable host events, enable guest events
  */
-static void __pmu_switch_to_guest(struct kvm_cpu_context *host_ctxt)
+static void __pmu_switch_to_guest(void)
 {
-	struct kvm_host_data *host;
-	struct kvm_pmu_events *pmu;
-
-	host = container_of(host_ctxt, struct kvm_host_data, host_ctxt);
-	pmu = &host->pmu_events;
+	struct kvm_pmu_events *pmu = &__hyp_this_cpu_ptr(kvm_host_data)->pmu_events;
 
 	if (pmu->events_host)
 		write_sysreg(pmu->events_host, pmcntenclr_el0);
@@ -137,13 +133,9 @@ static void __pmu_switch_to_guest(struct kvm_cpu_context *host_ctxt)
 /**
  * Disable guest events, enable host events
  */
-static void __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt)
+static void __pmu_switch_to_host(void)
 {
-	struct kvm_host_data *host;
-	struct kvm_pmu_events *pmu;
-
-	host = container_of(host_ctxt, struct kvm_host_data, host_ctxt);
-	pmu = &host->pmu_events;
+	struct kvm_pmu_events *pmu = &__hyp_this_cpu_ptr(kvm_host_data)->pmu_events;
 
 	if (pmu->events_guest)
 		write_sysreg(pmu->events_guest, pmcntenclr_el0);
@@ -169,7 +161,7 @@ static void __kvm_vcpu_switch_to_guest(struct kvm_cpu_context *host_ctxt,
 		pmr_sync();
 	}
 
-	__pmu_switch_to_guest(host_ctxt);
+	__pmu_switch_to_guest();
 
 	__sysreg_save_state_nvhe(&host_vcpu->arch.ctxt);
 
@@ -218,7 +210,7 @@ static void __kvm_vcpu_switch_to_host(struct kvm_cpu_context *host_ctxt,
 	 */
 	__debug_switch_to_host(vcpu);
 
-	__pmu_switch_to_host(host_ctxt);
+	__pmu_switch_to_host();
 
 	/* Returning to host will clear PSR.I, remask PMR if needed */
 	if (system_uses_irq_prio_masking())
-- 
2.27.0.389.gc38d7665816-goog


* [PATCH 19/37] KVM: arm64: nVHE: Drop host_ctxt argument for context switching
  2020-07-15 18:44 [PATCH 00/37] Transform the host into a vCPU Andrew Scull
                   ` (17 preceding siblings ...)
  2020-07-15 18:44 ` [PATCH 18/37] KVM: arm64: nVHE: Access pmu_events directly in kvm_host_data Andrew Scull
@ 2020-07-15 18:44 ` Andrew Scull
  2020-07-15 18:44 ` [PATCH 20/37] KVM: arm64: nVHE: Use host vcpu context for host debug state Andrew Scull
                   ` (17 subsequent siblings)
  36 siblings, 0 replies; 45+ messages in thread
From: Andrew Scull @ 2020-07-15 18:44 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, kernel-team

The host's state is now stored in the host's vcpu context rather than in
kvm_host_data->host_ctxt, so the reference is no longer needed.

Signed-off-by: Andrew Scull <ascull@google.com>
---
 arch/arm64/kvm/hyp/nvhe/switch.c | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 8cd280a18dc2..ae830a9ef9d9 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -144,8 +144,7 @@ static void __pmu_switch_to_host(void)
 		write_sysreg(pmu->events_host, pmcntenset_el0);
 }
 
-static void __kvm_vcpu_switch_to_guest(struct kvm_cpu_context *host_ctxt,
-				       struct kvm_vcpu *host_vcpu,
+static void __kvm_vcpu_switch_to_guest(struct kvm_vcpu *host_vcpu,
 				       struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt;
@@ -185,8 +184,7 @@ static void __kvm_vcpu_switch_to_guest(struct kvm_cpu_context *host_ctxt,
 	__debug_switch_to_guest(vcpu);
 }
 
-static void __kvm_vcpu_switch_to_host(struct kvm_cpu_context *host_ctxt,
-				      struct kvm_vcpu *host_vcpu,
+static void __kvm_vcpu_switch_to_host(struct kvm_vcpu *host_vcpu,
 				      struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt;
@@ -219,20 +217,18 @@ static void __kvm_vcpu_switch_to_host(struct kvm_cpu_context *host_ctxt,
 
 static void __vcpu_switch_to(struct kvm_vcpu *vcpu)
 {
-	struct kvm_cpu_context *host_ctxt;
 	struct kvm_vcpu *running_vcpu;
 
 	/*
 	 * Restoration is not yet pure so it still makes use of the previously
 	 * running vcpu.
 	 */
-	host_ctxt = &__hyp_this_cpu_ptr(kvm_host_data)->host_ctxt;
 	running_vcpu = __hyp_this_cpu_read(kvm_hyp_running_vcpu);
 
 	if (vcpu->arch.ctxt.is_host)
-		__kvm_vcpu_switch_to_host(host_ctxt, vcpu, running_vcpu);
+		__kvm_vcpu_switch_to_host(vcpu, running_vcpu);
 	else
-		__kvm_vcpu_switch_to_guest(host_ctxt, running_vcpu, vcpu);
+		__kvm_vcpu_switch_to_guest(running_vcpu, vcpu);
 
 	*__hyp_this_cpu_ptr(kvm_hyp_running_vcpu) = vcpu;
 }
-- 
2.27.0.389.gc38d7665816-goog


* [PATCH 20/37] KVM: arm64: nVHE: Use host vcpu context for host debug state
  2020-07-15 18:44 [PATCH 00/37] Transform the host into a vCPU Andrew Scull
                   ` (18 preceding siblings ...)
  2020-07-15 18:44 ` [PATCH 19/37] KVM: arm64: nVHE: Drop host_ctxt argument for context switching Andrew Scull
@ 2020-07-15 18:44 ` Andrew Scull
  2020-07-15 18:44 ` [PATCH 21/37] KVM: arm64: Move host debug state from vcpu to percpu Andrew Scull
                   ` (16 subsequent siblings)
  36 siblings, 0 replies; 45+ messages in thread
From: Andrew Scull @ 2020-07-15 18:44 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, kernel-team

Migrate the host's debug state from kvm_host_data's context into the
host's vcpu context.

Signed-off-by: Andrew Scull <ascull@google.com>
---
 arch/arm64/include/asm/kvm_hyp.h          |  5 +++++
 arch/arm64/kvm/hyp/include/hyp/debug-sr.h | 16 ++++++----------
 arch/arm64/kvm/hyp/nvhe/debug-sr.c        | 20 ++++++++++++++++----
 arch/arm64/kvm/hyp/nvhe/hyp-main.c        |  1 +
 arch/arm64/kvm/hyp/nvhe/switch.c          |  4 ++--
 arch/arm64/kvm/hyp/vhe/debug-sr.c         | 16 ++++++++++++++--
 6 files changed, 44 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index d6915ab60e1f..aec61c9f6017 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -81,8 +81,13 @@ void sysreg_save_guest_state_vhe(struct kvm_cpu_context *ctxt);
 void sysreg_restore_guest_state_vhe(struct kvm_cpu_context *ctxt);
 #endif
 
+#ifdef __KVM_NVHE_HYPERVISOR__
+void __debug_switch_to_guest(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *vcpu);
+void __debug_switch_to_host(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *vcpu);
+#else
 void __debug_switch_to_guest(struct kvm_vcpu *vcpu);
 void __debug_switch_to_host(struct kvm_vcpu *vcpu);
+#endif
 
 void __fpsimd_save_state(struct user_fpsimd_state *fp_regs);
 void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs);
diff --git a/arch/arm64/kvm/hyp/include/hyp/debug-sr.h b/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
index 50ca5d048017..0d342418d706 100644
--- a/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
+++ b/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
@@ -125,38 +125,34 @@ static void __debug_restore_state(struct kvm_guest_debug_arch *dbg,
 	write_sysreg(ctxt_sys_reg(ctxt, MDCCINT_EL1), mdccint_el1);
 }
 
-static inline void __debug_switch_to_guest_common(struct kvm_vcpu *vcpu)
+static inline void __debug_switch_to_guest_common(struct kvm_vcpu *vcpu,
+						  struct kvm_guest_debug_arch *host_dbg,
+						  struct kvm_cpu_context *host_ctxt)
 {
-	struct kvm_cpu_context *host_ctxt;
 	struct kvm_cpu_context *guest_ctxt;
-	struct kvm_guest_debug_arch *host_dbg;
 	struct kvm_guest_debug_arch *guest_dbg;
 
 	if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
 		return;
 
-	host_ctxt = &__hyp_this_cpu_ptr(kvm_host_data)->host_ctxt;
 	guest_ctxt = &vcpu->arch.ctxt;
-	host_dbg = &vcpu->arch.host_debug_state.regs;
 	guest_dbg = kern_hyp_va(vcpu->arch.debug_ptr);
 
 	__debug_save_state(host_dbg, host_ctxt);
 	__debug_restore_state(guest_dbg, guest_ctxt);
 }
 
-static inline void __debug_switch_to_host_common(struct kvm_vcpu *vcpu)
+static inline void __debug_switch_to_host_common(struct kvm_vcpu *vcpu,
+						 struct kvm_guest_debug_arch *host_dbg,
+						 struct kvm_cpu_context *host_ctxt)
 {
-	struct kvm_cpu_context *host_ctxt;
 	struct kvm_cpu_context *guest_ctxt;
-	struct kvm_guest_debug_arch *host_dbg;
 	struct kvm_guest_debug_arch *guest_dbg;
 
 	if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
 		return;
 
-	host_ctxt = &__hyp_this_cpu_ptr(kvm_host_data)->host_ctxt;
 	guest_ctxt = &vcpu->arch.ctxt;
-	host_dbg = &vcpu->arch.host_debug_state.regs;
 	guest_dbg = kern_hyp_va(vcpu->arch.debug_ptr);
 
 	__debug_save_state(guest_dbg, guest_ctxt);
diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
index 91a711aa8382..a5fa40c060a8 100644
--- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c
+++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
@@ -58,17 +58,29 @@ static void __debug_restore_spe(u64 pmscr_el1)
 	write_sysreg_s(pmscr_el1, SYS_PMSCR_EL1);
 }
 
-void __debug_switch_to_guest(struct kvm_vcpu *vcpu)
+void __debug_switch_to_guest(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *host_ctxt;
+	struct kvm_guest_debug_arch *host_dbg;
+
+	host_ctxt = &host_vcpu->arch.ctxt;
+	host_dbg = host_vcpu->arch.debug_ptr;
+
 	/* Disable and flush SPE data generation */
 	__debug_save_spe(&vcpu->arch.host_debug_state.pmscr_el1);
-	__debug_switch_to_guest_common(vcpu);
+	__debug_switch_to_guest_common(vcpu, host_dbg, host_ctxt);
 }
 
-void __debug_switch_to_host(struct kvm_vcpu *vcpu)
+void __debug_switch_to_host(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *vcpu)
 {
+	struct kvm_cpu_context *host_ctxt;
+	struct kvm_guest_debug_arch *host_dbg;
+
+	host_ctxt = &host_vcpu->arch.ctxt;
+	host_dbg = host_vcpu->arch.debug_ptr;
+
 	__debug_restore_spe(vcpu->arch.host_debug_state.pmscr_el1);
-	__debug_switch_to_host_common(vcpu);
+	__debug_switch_to_host_common(vcpu, host_dbg, host_ctxt);
 }
 
 u32 __kvm_get_mdcr_el2(void)
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index d013586e3a03..e7601de3b901 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -170,6 +170,7 @@ void __noreturn kvm_hyp_main(void)
 
 	host_vcpu->arch.flags = KVM_ARM64_HOST_VCPU_FLAGS;
 	host_vcpu->arch.workaround_flags = VCPU_WORKAROUND_2_FLAG;
+	host_vcpu->arch.debug_ptr = &host_vcpu->arch.vcpu_debug_state;
 
 	/*
 	 * The first time entering the host is seen by the host as the return
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index ae830a9ef9d9..629fea722de1 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -181,7 +181,7 @@ static void __kvm_vcpu_switch_to_guest(struct kvm_vcpu *host_vcpu,
 	__hyp_vgic_restore_state(vcpu);
 	__timer_enable_traps(vcpu);
 
-	__debug_switch_to_guest(vcpu);
+	__debug_switch_to_guest(host_vcpu, vcpu);
 }
 
 static void __kvm_vcpu_switch_to_host(struct kvm_vcpu *host_vcpu,
@@ -206,7 +206,7 @@ static void __kvm_vcpu_switch_to_host(struct kvm_vcpu *host_vcpu,
 	 * This must come after restoring the host sysregs, since a non-VHE
 	 * system may enable SPE here and make use of the TTBRs.
 	 */
-	__debug_switch_to_host(vcpu);
+	__debug_switch_to_host(host_vcpu, vcpu);
 
 	__pmu_switch_to_host();
 
diff --git a/arch/arm64/kvm/hyp/vhe/debug-sr.c b/arch/arm64/kvm/hyp/vhe/debug-sr.c
index f1e2e5a00933..6225c6cdfbca 100644
--- a/arch/arm64/kvm/hyp/vhe/debug-sr.c
+++ b/arch/arm64/kvm/hyp/vhe/debug-sr.c
@@ -12,12 +12,24 @@
 
 void __debug_switch_to_guest(struct kvm_vcpu *vcpu)
 {
-	__debug_switch_to_guest_common(vcpu);
+	struct kvm_cpu_context *host_ctxt;
+	struct kvm_guest_debug_arch *host_dbg;
+
+	host_ctxt = &__hyp_this_cpu_ptr(kvm_host_data)->host_ctxt;
+	host_dbg = &vcpu->arch.host_debug_state.regs;
+
+	__debug_switch_to_guest_common(vcpu, host_dbg, host_ctxt);
 }
 
 void __debug_switch_to_host(struct kvm_vcpu *vcpu)
 {
-	__debug_switch_to_host_common(vcpu);
+	struct kvm_cpu_context *host_ctxt;
+	struct kvm_guest_debug_arch *host_dbg;
+
+	host_ctxt = &__hyp_this_cpu_ptr(kvm_host_data)->host_ctxt;
+	host_dbg = &vcpu->arch.host_debug_state.regs;
+
+	__debug_switch_to_host_common(vcpu, host_dbg, host_ctxt);
 }
 
 u32 __kvm_get_mdcr_el2(void)
-- 
2.27.0.389.gc38d7665816-goog

* [PATCH 21/37] KVM: arm64: Move host debug state from vcpu to percpu
From: Andrew Scull @ 2020-07-15 18:44 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, kernel-team

As there's only one host on each CPU, there's no need to pollute
struct kvm_vcpu with this field, especially when different state is
needed for VHE and nVHE.
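
As a sketch of where the state ends up (names taken from the diff
below, purely for illustration), the two worlds access the debug state
like so:

	/* nVHE hyp: SPE's PMSCR_EL1 becomes a dedicated per-CPU u64 */
	__debug_save_spe(__hyp_this_cpu_ptr(kvm_host_pmscr_el1));

	/* VHE: the {break,watch}point registers get a per-CPU copy */
	host_dbg = __hyp_this_cpu_ptr(kvm_host_debug_state);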

Signed-off-by: Andrew Scull <ascull@google.com>
---
 arch/arm64/include/asm/kvm_host.h  | 15 +++------------
 arch/arm64/include/asm/kvm_hyp.h   |  3 +++
 arch/arm64/kernel/image-vars.h     |  1 +
 arch/arm64/kvm/arm.c               | 11 +++++++++++
 arch/arm64/kvm/hyp/nvhe/debug-sr.c |  4 ++--
 arch/arm64/kvm/hyp/vhe/debug-sr.c  |  4 ++--
 6 files changed, 22 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 152c050e74a9..37e94a49b668 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -308,11 +308,9 @@ struct kvm_vcpu_arch {
 	 * We maintain more than a single set of debug registers to support
 	 * debugging the guest from the host and to maintain separate host and
 	 * guest state during world switches. vcpu_debug_state are the debug
-	 * registers of the vcpu as the guest sees them.  host_debug_state are
-	 * the host registers which are saved and restored during
-	 * world switches. external_debug_state contains the debug
-	 * values we want to debug the guest. This is set via the
-	 * KVM_SET_GUEST_DEBUG ioctl.
+	 * registers of the vcpu as the guest sees them. external_debug_state
+	 * contains the debug values we want to debug the guest. This is set
+	 * via the KVM_SET_GUEST_DEBUG ioctl.
 	 *
 	 * debug_ptr points to the set of debug registers that should be loaded
 	 * onto the hardware when running the guest.
@@ -324,13 +322,6 @@ struct kvm_vcpu_arch {
 	struct thread_info *host_thread_info;	/* hyp VA */
 	struct user_fpsimd_state *host_fpsimd_state;	/* hyp VA */
 
-	struct {
-		/* {Break,watch}point registers */
-		struct kvm_guest_debug_arch regs;
-		/* Statistical profiling extension */
-		u64 pmscr_el1;
-	} host_debug_state;
-
 	/* VGIC state */
 	struct vgic_cpu vgic_cpu;
 	struct arch_timer_cpu timer_cpu;
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index aec61c9f6017..75016c92d70b 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -15,6 +15,9 @@
 DECLARE_PER_CPU(struct kvm_vcpu *, kvm_hyp_running_vcpu);
 #ifdef __KVM_NVHE_HYPERVISOR__
 DECLARE_PER_CPU(struct kvm_vcpu, kvm_host_vcpu);
+DECLARE_PER_CPU(u64, kvm_host_pmscr_el1);
+#else
+DECLARE_PER_CPU(struct kvm_guest_debug_arch, kvm_host_debug_state);
 #endif
 
 #define read_sysreg_elx(r,nvh,vh)					\
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 5b93da2359d4..c75d74adfc8b 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -71,6 +71,7 @@ KVM_NVHE_ALIAS(kvm_update_va_mask);
 /* Global kernel state accessed by nVHE hyp code. */
 KVM_NVHE_ALIAS(arm64_ssbd_callback_required);
 KVM_NVHE_ALIAS(kvm_host_data);
+KVM_NVHE_ALIAS(kvm_host_pmscr_el1);
 KVM_NVHE_ALIAS(kvm_host_vcpu);
 KVM_NVHE_ALIAS(kvm_hyp_running_vcpu);
 KVM_NVHE_ALIAS(kvm_vgic_global_state);
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index fe49203948d3..cb8ac29195be 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -49,6 +49,8 @@ __asm__(".arch_extension	virt");
 DEFINE_PER_CPU(kvm_host_data_t, kvm_host_data);
 DEFINE_PER_CPU(struct kvm_vcpu, kvm_host_vcpu);
 DEFINE_PER_CPU(struct kvm_vcpu *, kvm_hyp_running_vcpu);
+DEFINE_PER_CPU(struct kvm_guest_debug_arch, kvm_host_debug_state);
+DEFINE_PER_CPU(u64, kvm_host_pmscr_el1);
 static DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
 
 /* The VMID used in the VTTBR */
@@ -1549,6 +1551,7 @@ static int init_hyp_mode(void)
 	for_each_possible_cpu(cpu) {
 		kvm_host_data_t *cpu_data;
 		struct kvm_vcpu *host_vcpu;
+		u64 *host_pmscr;
 		struct kvm_vcpu **running_vcpu;
 
 		cpu_data = per_cpu_ptr(&kvm_host_data, cpu);
@@ -1567,6 +1570,14 @@ static int init_hyp_mode(void)
 			goto out_err;
 		}
 
+		host_pmscr = per_cpu_ptr(&kvm_host_pmscr_el1, cpu);
+		err = create_hyp_mappings(host_pmscr, host_pmscr + 1, PAGE_HYP);
+
+		if (err) {
+			kvm_err("Cannot map host pmscr_el1: %d\n", err);
+			goto out_err;
+		}
+
 		running_vcpu = per_cpu_ptr(&kvm_hyp_running_vcpu, cpu);
 		err = create_hyp_mappings(running_vcpu, running_vcpu + 1, PAGE_HYP);
 
diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
index a5fa40c060a8..4bf0eeb41a44 100644
--- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c
+++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
@@ -67,7 +67,7 @@ void __debug_switch_to_guest(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *vcpu)
 	host_dbg = host_vcpu->arch.debug_ptr;
 
 	/* Disable and flush SPE data generation */
-	__debug_save_spe(&vcpu->arch.host_debug_state.pmscr_el1);
+	__debug_save_spe(__hyp_this_cpu_ptr(kvm_host_pmscr_el1));
 	__debug_switch_to_guest_common(vcpu, host_dbg, host_ctxt);
 }
 
@@ -79,7 +79,7 @@ void __debug_switch_to_host(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *vcpu)
 	host_ctxt = &host_vcpu->arch.ctxt;
 	host_dbg = host_vcpu->arch.debug_ptr;
 
-	__debug_restore_spe(vcpu->arch.host_debug_state.pmscr_el1);
+	__debug_restore_spe(__hyp_this_cpu_read(kvm_host_pmscr_el1));
 	__debug_switch_to_host_common(vcpu, host_dbg, host_ctxt);
 }
 
diff --git a/arch/arm64/kvm/hyp/vhe/debug-sr.c b/arch/arm64/kvm/hyp/vhe/debug-sr.c
index 6225c6cdfbca..a564831726e7 100644
--- a/arch/arm64/kvm/hyp/vhe/debug-sr.c
+++ b/arch/arm64/kvm/hyp/vhe/debug-sr.c
@@ -16,7 +16,7 @@ void __debug_switch_to_guest(struct kvm_vcpu *vcpu)
 	struct kvm_guest_debug_arch *host_dbg;
 
 	host_ctxt = &__hyp_this_cpu_ptr(kvm_host_data)->host_ctxt;
-	host_dbg = &vcpu->arch.host_debug_state.regs;
+	host_dbg = __hyp_this_cpu_ptr(kvm_host_debug_state);
 
 	__debug_switch_to_guest_common(vcpu, host_dbg, host_ctxt);
 }
@@ -27,7 +27,7 @@ void __debug_switch_to_host(struct kvm_vcpu *vcpu)
 	struct kvm_guest_debug_arch *host_dbg;
 
 	host_ctxt = &__hyp_this_cpu_ptr(kvm_host_data)->host_ctxt;
-	host_dbg = &vcpu->arch.host_debug_state.regs;
+	host_dbg = __hyp_this_cpu_ptr(kvm_host_debug_state);
 
 	__debug_switch_to_host_common(vcpu, host_dbg, host_ctxt);
 }
-- 
2.27.0.389.gc38d7665816-goog

* [PATCH 22/37] KVM: arm64: nVHE: Store host's mdcr_el2 and hcr_el2 in its vcpu
From: Andrew Scull @ 2020-07-15 18:44 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, kernel-team

This makes the host vcpu's state more accurate and lays the foundation
for these configurations becoming dynamic.
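
As a rough sketch of the resulting flow (condensed from the diff
below), hyp computes the host's MDCR_EL2 once at start-up and then
replays both registers from the host vcpu on each switch back:

	/* hyp-main.c: computed once in kvm_hyp_main() */
	mdcr_el2 = read_sysreg(mdcr_el2);
	mdcr_el2 &= MDCR_EL2_HPMN_MASK;
	mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
	host_vcpu->arch.hcr_el2 = HCR_HOST_NVHE_FLAGS;
	host_vcpu->arch.mdcr_el2 = mdcr_el2;

	/* switch.c: replayed by __deactivate_traps() */
	write_sysreg(host_vcpu->arch.mdcr_el2, mdcr_el2);
	write_sysreg(host_vcpu->arch.hcr_el2, hcr_el2);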

Signed-off-by: Andrew Scull <ascull@google.com>
---
 arch/arm64/kvm/hyp/nvhe/hyp-main.c |  7 +++++++
 arch/arm64/kvm/hyp/nvhe/switch.c   | 17 +++++------------
 2 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index e7601de3b901..3a78fe8b923a 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -163,13 +163,20 @@ void __noreturn kvm_hyp_main(void)
 {
 	/* Set tpidr_el2 for use by HYP */
 	struct kvm_vcpu *host_vcpu;
+	u64 mdcr_el2;
 
 	host_vcpu = __hyp_this_cpu_ptr(kvm_host_vcpu);
 
 	kvm_init_host_cpu_context(&host_vcpu->arch.ctxt);
 
+	mdcr_el2 = read_sysreg(mdcr_el2);
+	mdcr_el2 &= MDCR_EL2_HPMN_MASK;
+	mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
+
 	host_vcpu->arch.flags = KVM_ARM64_HOST_VCPU_FLAGS;
 	host_vcpu->arch.workaround_flags = VCPU_WORKAROUND_2_FLAG;
+	host_vcpu->arch.hcr_el2 = HCR_HOST_NVHE_FLAGS;
+	host_vcpu->arch.mdcr_el2 = mdcr_el2;
 	host_vcpu->arch.debug_ptr = &host_vcpu->arch.vcpu_debug_state;
 
 	/*
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 629fea722de1..88e7946a35d5 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -58,14 +58,10 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
 	}
 }
 
-static void __deactivate_traps(struct kvm_vcpu *vcpu)
+static void __deactivate_traps(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *vcpu)
 {
-	u64 mdcr_el2;
-
 	___deactivate_traps(vcpu);
 
-	mdcr_el2 = read_sysreg(mdcr_el2);
-
 	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
 		u64 val;
 
@@ -85,11 +81,8 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu)
 
 	__deactivate_traps_common();
 
-	mdcr_el2 &= MDCR_EL2_HPMN_MASK;
-	mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
-
-	write_sysreg(mdcr_el2, mdcr_el2);
-	write_sysreg(HCR_HOST_NVHE_FLAGS, hcr_el2);
+	write_sysreg(host_vcpu->arch.mdcr_el2, mdcr_el2);
+	write_sysreg(host_vcpu->arch.hcr_el2, hcr_el2);
 	write_sysreg(CPTR_EL2_DEFAULT, cptr_el2);
 }
 
@@ -194,7 +187,7 @@ static void __kvm_vcpu_switch_to_host(struct kvm_vcpu *host_vcpu,
 	__timer_disable_traps(vcpu);
 	__hyp_vgic_save_state(vcpu);
 
-	__deactivate_traps(vcpu);
+	__deactivate_traps(host_vcpu, vcpu);
 	__deactivate_vm(vcpu);
 
 	__sysreg_restore_state_nvhe(&host_vcpu->arch.ctxt);
@@ -283,7 +276,7 @@ void __noreturn hyp_panic(struct kvm_cpu_context *host_ctxt)
 
 	if (vcpu != host_vcpu) {
 		__timer_disable_traps(vcpu);
-		__deactivate_traps(vcpu);
+		__deactivate_traps(host_vcpu, vcpu);
 		__deactivate_vm(vcpu);
 		__sysreg_restore_state_nvhe(&host_vcpu->arch.ctxt);
 	}
-- 
2.27.0.389.gc38d7665816-goog

* [PATCH 23/37] KVM: arm64: Skip __hyp_panic and go direct to hyp_panic
From: Andrew Scull @ 2020-07-15 18:44 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, kernel-team

The VHE and nVHE variants of hyp_panic no longer share the same
arguments, so __hyp_panic no longer serves a useful purpose: hyp_panic
can get all the context it needs from percpu variables.
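
For example, the VHE variant now begins by fetching the host context
from percpu data itself (a condensed view of the diff below):

	void __noreturn hyp_panic(void)
	{
		struct kvm_cpu_context *host_ctxt =
			&__hyp_this_cpu_ptr(kvm_host_data)->host_ctxt;
		u64 spsr = read_sysreg_el2(SYS_SPSR);

		/* panic handling continues unchanged */
	}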

Signed-off-by: Andrew Scull <ascull@google.com>
---
 arch/arm64/include/asm/kvm_hyp.h |  2 +-
 arch/arm64/kvm/hyp/hyp-entry.S   | 11 +++--------
 arch/arm64/kvm/hyp/nvhe/switch.c |  2 +-
 arch/arm64/kvm/hyp/vhe/switch.c  |  3 ++-
 4 files changed, 7 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 75016c92d70b..32b438c1da4e 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -104,7 +104,7 @@ u64 __guest_enter(struct kvm_vcpu *vcpu, struct kvm_cpu_context *host_ctxt);
 
 void __vaxorcize_serror(void);
 
-void __noreturn hyp_panic(struct kvm_cpu_context *host_ctxt);
+void __noreturn hyp_panic(void);
 #ifdef __KVM_NVHE_HYPERVISOR__
 void __noreturn __hyp_do_panic(unsigned long, ...);
 #endif
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index 3113665ce912..2748e3d3ef85 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -95,7 +95,7 @@ el2_sync:
 
 	/* if this was something else, then panic! */
 	tst	x0, #PSR_IL_BIT
-	b.eq	__hyp_panic
+	b.eq	hyp_panic
 
 	/* Let's attempt a recovery from the illegal exception return */
 	hyp_ldr_this_cpu x1, kvm_hyp_running_vcpu, x0
@@ -123,7 +123,7 @@ el2_error:
 	cmp	x0, x1
 	adr	x1, abort_guest_exit_end
 	ccmp	x0, x1, #4, ne
-	b.ne	__hyp_panic
+	b.ne	hyp_panic
 	eret
 	sb
 
@@ -139,12 +139,7 @@ SYM_FUNC_START(__hyp_do_panic)
 SYM_FUNC_END(__hyp_do_panic)
 #endif
 
-SYM_CODE_START(__hyp_panic)
-	get_host_ctxt x0, x1
-	b	hyp_panic
-SYM_CODE_END(__hyp_panic)
-
-.macro invalid_vector	label, target = __hyp_panic
+.macro invalid_vector	label, target = hyp_panic
 	.align	2
 SYM_CODE_START(\label)
 	b \target
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 88e7946a35d5..e49967c915de 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -265,7 +265,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	return exit_code;
 }
 
-void __noreturn hyp_panic(struct kvm_cpu_context *host_ctxt)
+void __noreturn hyp_panic(void)
 {
 	u64 spsr = read_sysreg_el2(SYS_SPSR);
 	u64 elr = read_sysreg_el2(SYS_ELR);
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index d871e79fceaf..5c2eaf889cf5 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -207,8 +207,9 @@ static void __hyp_call_panic(u64 spsr, u64 elr, u64 par,
 }
 NOKPROBE_SYMBOL(__hyp_call_panic);
 
-void __noreturn hyp_panic(struct kvm_cpu_context *host_ctxt)
+void __noreturn hyp_panic(void)
 {
+	struct kvm_cpu_context *host_ctxt = &__hyp_this_cpu_ptr(kvm_host_data)->host_ctxt;
 	u64 spsr = read_sysreg_el2(SYS_SPSR);
 	u64 elr = read_sysreg_el2(SYS_ELR);
 	u64 par = read_sysreg(par_el1);
-- 
2.27.0.389.gc38d7665816-goog

* [PATCH 24/37] KVM: arm64: Break apart kvm_host_data
From: Andrew Scull @ 2020-07-15 18:44 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, kernel-team

The host_ctxt field is used for the host's state with VHE but for the
hyp state with nVHE. Using separate variables for these different
contexts makes the distinction clear and allows them to diverge in
future.

That leaves kvm_host_data containing only pmu_events, so it seems
reasonable to make that standalone as well.
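
The crux is a small selector, quoted from the diff below, that hides
which per-CPU variable backs the context on each side:

	static inline struct kvm_cpu_context *get_hyp_ctxt(void)
	{
	#ifdef __KVM_NVHE_HYPERVISOR__
		return __hyp_this_cpu_ptr(kvm_hyp_ctxt);
	#else
		return __hyp_this_cpu_ptr(kvm_host_ctxt);
	#endif
	}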

Signed-off-by: Andrew Scull <ascull@google.com>
---
 arch/arm64/include/asm/kvm_asm.h        |  9 ++++---
 arch/arm64/include/asm/kvm_host.h       |  9 +------
 arch/arm64/include/asm/kvm_hyp.h        | 13 +++++++++-
 arch/arm64/kernel/asm-offsets.c         |  1 -
 arch/arm64/kernel/image-vars.h          |  3 ++-
 arch/arm64/kvm/arm.c                    | 33 ++++++++++++++++---------
 arch/arm64/kvm/hyp/entry.S              | 14 +++++------
 arch/arm64/kvm/hyp/include/hyp/switch.h |  4 ++-
 arch/arm64/kvm/hyp/nvhe/hyp-start.S     |  2 +-
 arch/arm64/kvm/hyp/nvhe/switch.c        | 10 ++++----
 arch/arm64/kvm/hyp/vhe/debug-sr.c       |  4 +--
 arch/arm64/kvm/hyp/vhe/switch.c         |  4 +--
 arch/arm64/kvm/hyp/vhe/sysreg-sr.c      |  4 +--
 arch/arm64/kvm/pmu.c                    | 28 ++++++++++-----------
 14 files changed, 79 insertions(+), 59 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 1b2c718fa58f..ec71d3bcf5f5 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -207,9 +207,12 @@ extern char __smccc_workaround_1_smc[__SMCCC_WORKAROUND_1_SMC_SZ];
 	ldr	\reg,  [\reg, \tmp]
 .endm
 
-.macro get_host_ctxt reg, tmp
-	hyp_adr_this_cpu \reg, kvm_host_data, \tmp
-	add	\reg, \reg, #HOST_DATA_CONTEXT
+.macro get_hyp_ctxt reg, tmp
+#ifdef __KVM_NVHE_HYPERVISOR__
+	hyp_adr_this_cpu \reg, kvm_hyp_ctxt, \tmp
+#else
+	hyp_adr_this_cpu \reg, kvm_host_ctxt, \tmp
+#endif
 .endm
 
 #define CPU_XREG_OFFSET(x)	(CPU_USER_PT_REGS + 8*x)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 37e94a49b668..c8304f82ccb3 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -269,13 +269,6 @@ struct kvm_pmu_events {
 	u32 events_guest;
 };
 
-struct kvm_host_data {
-	struct kvm_cpu_context host_ctxt;
-	struct kvm_pmu_events pmu_events;
-};
-
-typedef struct kvm_host_data kvm_host_data_t;
-
 struct vcpu_reset_state {
 	unsigned long	pc;
 	unsigned long	r0;
@@ -568,7 +561,7 @@ void kvm_set_sei_esr(struct kvm_vcpu *vcpu, u64 syndrome);
 
 struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr);
 
-DECLARE_PER_CPU(kvm_host_data_t, kvm_host_data);
+DECLARE_PER_CPU(struct kvm_pmu_events, kvm_pmu_events);
 
 static inline void kvm_init_host_cpu_context(struct kvm_cpu_context *cpu_ctxt)
 {
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 32b438c1da4e..38d49f9b56c7 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -14,12 +14,23 @@
 
 DECLARE_PER_CPU(struct kvm_vcpu *, kvm_hyp_running_vcpu);
 #ifdef __KVM_NVHE_HYPERVISOR__
+DECLARE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
 DECLARE_PER_CPU(struct kvm_vcpu, kvm_host_vcpu);
 DECLARE_PER_CPU(u64, kvm_host_pmscr_el1);
 #else
+DECLARE_PER_CPU(struct kvm_cpu_context, kvm_host_ctxt);
 DECLARE_PER_CPU(struct kvm_guest_debug_arch, kvm_host_debug_state);
 #endif
 
+static inline struct kvm_cpu_context *get_hyp_ctxt(void)
+{
+#ifdef __KVM_NVHE_HYPERVISOR__
+	return __hyp_this_cpu_ptr(kvm_hyp_ctxt);
+#else
+	return __hyp_this_cpu_ptr(kvm_host_ctxt);
+#endif
+}
+
 #define read_sysreg_elx(r,nvh,vh)					\
 	({								\
 		u64 reg;						\
@@ -100,7 +111,7 @@ void activate_traps_vhe_load(struct kvm_vcpu *vcpu);
 void deactivate_traps_vhe_put(void);
 #endif
 
-u64 __guest_enter(struct kvm_vcpu *vcpu, struct kvm_cpu_context *host_ctxt);
+u64 __guest_enter(struct kvm_vcpu *vcpu, struct kvm_cpu_context *hyp_ctxt);
 
 void __vaxorcize_serror(void);
 
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index cbe6a0def777..9e78f79c42c0 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -108,7 +108,6 @@ int main(void)
   DEFINE(CPU_APDAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APDAKEYLO_EL1]));
   DEFINE(CPU_APDBKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APDBKEYLO_EL1]));
   DEFINE(CPU_APGAKEYLO_EL1,	offsetof(struct kvm_cpu_context, sys_regs[APGAKEYLO_EL1]));
-  DEFINE(HOST_DATA_CONTEXT,	offsetof(struct kvm_host_data, host_ctxt));
 #endif
 #ifdef CONFIG_CPU_PM
   DEFINE(CPU_CTX_SP,		offsetof(struct cpu_suspend_ctx, sp));
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index c75d74adfc8b..70b5b320587e 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -70,10 +70,11 @@ KVM_NVHE_ALIAS(kvm_update_va_mask);
 
 /* Global kernel state accessed by nVHE hyp code. */
 KVM_NVHE_ALIAS(arm64_ssbd_callback_required);
-KVM_NVHE_ALIAS(kvm_host_data);
 KVM_NVHE_ALIAS(kvm_host_pmscr_el1);
 KVM_NVHE_ALIAS(kvm_host_vcpu);
+KVM_NVHE_ALIAS(kvm_hyp_ctxt);
 KVM_NVHE_ALIAS(kvm_hyp_running_vcpu);
+KVM_NVHE_ALIAS(kvm_pmu_events);
 KVM_NVHE_ALIAS(kvm_vgic_global_state);
 
 /* Kernel constant needed to compute idmap addresses. */
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index cb8ac29195be..1e659bebe471 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -46,8 +46,10 @@
 __asm__(".arch_extension	virt");
 #endif
 
-DEFINE_PER_CPU(kvm_host_data_t, kvm_host_data);
+DEFINE_PER_CPU(struct kvm_cpu_context, kvm_host_ctxt);
+DEFINE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
 DEFINE_PER_CPU(struct kvm_vcpu, kvm_host_vcpu);
+DEFINE_PER_CPU(struct kvm_pmu_events, kvm_pmu_events);
 DEFINE_PER_CPU(struct kvm_vcpu *, kvm_hyp_running_vcpu);
 DEFINE_PER_CPU(struct kvm_guest_debug_arch, kvm_host_debug_state);
 DEFINE_PER_CPU(u64, kvm_host_pmscr_el1);
@@ -1277,8 +1279,8 @@ static void cpu_init_hyp_mode(void)
 	 * kernel's mapping to the linear mapping, and store it in tpidr_el2
 	 * so that we can use adr_l to access per-cpu variables in EL2.
 	 */
-	tpidr_el2 = ((unsigned long)this_cpu_ptr(&kvm_host_data) -
-		     (unsigned long)kvm_ksym_ref(&kvm_host_data));
+	tpidr_el2 = ((unsigned long)this_cpu_ptr(&kvm_hyp_running_vcpu) -
+		     (unsigned long)kvm_ksym_ref(&kvm_hyp_running_vcpu));
 
 	pgd_ptr = kvm_mmu_get_httbr();
 	hyp_stack_ptr = __this_cpu_read(kvm_arm_hyp_stack_page) + PAGE_SIZE;
@@ -1316,14 +1318,14 @@ static void cpu_hyp_reset(void)
 
 static void cpu_hyp_reinit(void)
 {
-	kvm_init_host_cpu_context(&this_cpu_ptr(&kvm_host_data)->host_ctxt);
-
 	cpu_hyp_reset();
 
-	if (is_kernel_in_hyp_mode())
+	if (is_kernel_in_hyp_mode()) {
+		kvm_init_host_cpu_context(this_cpu_ptr(&kvm_host_ctxt));
 		kvm_timer_init_vhe();
-	else
+	} else {
 		cpu_init_hyp_mode();
+	}
 
 	kvm_arm_init_debug();
 
@@ -1549,16 +1551,17 @@ static int init_hyp_mode(void)
 	}
 
 	for_each_possible_cpu(cpu) {
-		kvm_host_data_t *cpu_data;
+		struct kvm_cpu_context *hyp_ctxt;
 		struct kvm_vcpu *host_vcpu;
 		u64 *host_pmscr;
 		struct kvm_vcpu **running_vcpu;
+		struct kvm_pmu_events *pmu;
 
-		cpu_data = per_cpu_ptr(&kvm_host_data, cpu);
-		err = create_hyp_mappings(cpu_data, cpu_data + 1, PAGE_HYP);
+		hyp_ctxt = per_cpu_ptr(&kvm_hyp_ctxt, cpu);
+		err = create_hyp_mappings(hyp_ctxt, hyp_ctxt + 1, PAGE_HYP);
 
 		if (err) {
-			kvm_err("Cannot map host CPU state: %d\n", err);
+			kvm_err("Cannot map hyp CPU context: %d\n", err);
 			goto out_err;
 		}
 
@@ -1585,6 +1588,14 @@ static int init_hyp_mode(void)
 			kvm_err("Cannot map running vCPU: %d\n", err);
 			goto out_err;
 		}
+
+		pmu = per_cpu_ptr(&kvm_pmu_events, cpu);
+		err = create_hyp_mappings(pmu, pmu + 1, PAGE_HYP);
+
+		if (err) {
+			kvm_err("Cannot map PMU events: %d\n", err);
+			goto out_err;
+		}
 	}
 
 	err = hyp_map_aux_data();
diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index 63d484059c01..6c3a6b27a96c 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -19,18 +19,18 @@
 
 /*
  * u64 __guest_enter(struct kvm_vcpu *vcpu,
- *		     struct kvm_cpu_context *host_ctxt);
+ *		     struct kvm_cpu_context *hyp_ctxt);
  */
 SYM_FUNC_START(__guest_enter)
 	// x0: vcpu
-	// x1: host context
+	// x1: hyp context
 	// x2-x17: clobbered by macros
 	// x29: guest context
 
-	// Store the host regs
+	// Store the hyp regs
 	save_callee_saved_regs x1
 
-	// Save the host's sp_el0
+	// Save the hyp's sp_el0
 	save_sp_el0	x1, x2
 
 	// Now the hyp state is stored if we have a pending RAS SError it must
@@ -113,7 +113,7 @@ SYM_INNER_LABEL(__guest_exit, SYM_L_GLOBAL)
 	// Store the guest's sp_el0
 	save_sp_el0	x1, x2
 
-	get_host_ctxt	x2, x3
+	get_hyp_ctxt	x2, x3
 
 	// Macro ptrauth_switch_to_guest format:
 	// 	ptrauth_switch_to_host(guest cxt, host cxt, tmp1, tmp2, tmp3)
@@ -122,10 +122,10 @@ SYM_INNER_LABEL(__guest_exit, SYM_L_GLOBAL)
 	// when this feature is enabled for kernel code.
 	ptrauth_switch_to_host x1, x2, x3, x4, x5
 
-	// Restore the hosts's sp_el0
+	// Restore the hyp's sp_el0
 	restore_sp_el0 x2, x3
 
-	// Now restore the host regs
+	// Now restore the hyp regs
 	restore_callee_saved_regs x2
 
 alternative_if ARM64_HAS_RAS_EXTN
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index fa62c2b21b4b..4b1e51b8bb82 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -27,6 +27,8 @@
 #include <asm/processor.h>
 #include <asm/thread_info.h>
 
+DECLARE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
+
 extern const char __hyp_panic_string[];
 
 /* Check whether the FP regs were dirtied while in the host-side run loop: */
@@ -382,7 +384,7 @@ static inline bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu)
 	    !esr_is_ptrauth_trap(kvm_vcpu_get_esr(vcpu)))
 		return false;
 
-	ctxt = &__hyp_this_cpu_ptr(kvm_host_data)->host_ctxt;
+	ctxt = get_hyp_ctxt();
 	__ptrauth_save_key(ctxt, APIA);
 	__ptrauth_save_key(ctxt, APIB);
 	__ptrauth_save_key(ctxt, APDA);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-start.S b/arch/arm64/kvm/hyp/nvhe/hyp-start.S
index d7744dcfd184..faec05ae3d3b 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-start.S
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-start.S
@@ -29,7 +29,7 @@ alternative_else_nop_endif
 .endm
 
 SYM_CODE_START(__kvm_hyp_start)
-	get_host_ctxt	x0, x1
+	get_hyp_ctxt	x0, x1
 
 	ptrauth_hyp_ctxt_init x0, x1, x2, x3
 
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index e49967c915de..f9a35ca02ad1 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -114,7 +114,7 @@ static void __hyp_vgic_restore_state(struct kvm_vcpu *vcpu)
  */
 static void __pmu_switch_to_guest(void)
 {
-	struct kvm_pmu_events *pmu = &__hyp_this_cpu_ptr(kvm_host_data)->pmu_events;
+	struct kvm_pmu_events *pmu = __hyp_this_cpu_ptr(kvm_pmu_events);
 
 	if (pmu->events_host)
 		write_sysreg(pmu->events_host, pmcntenclr_el0);
@@ -128,7 +128,7 @@ static void __pmu_switch_to_guest(void)
  */
 static void __pmu_switch_to_host(void)
 {
-	struct kvm_pmu_events *pmu = &__hyp_this_cpu_ptr(kvm_host_data)->pmu_events;
+	struct kvm_pmu_events *pmu = __hyp_this_cpu_ptr(kvm_pmu_events);
 
 	if (pmu->events_guest)
 		write_sysreg(pmu->events_guest, pmcntenclr_el0);
@@ -229,11 +229,11 @@ static void __vcpu_switch_to(struct kvm_vcpu *vcpu)
 /* Switch to the guest for legacy non-VHE systems */
 int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 {
-	struct kvm_cpu_context *host_ctxt;
+	struct kvm_cpu_context *hyp_ctxt;
 	struct kvm_vcpu *running_vcpu;
 	u64 exit_code;
 
-	host_ctxt = &__hyp_this_cpu_ptr(kvm_host_data)->host_ctxt;
+	hyp_ctxt = __hyp_this_cpu_ptr(kvm_hyp_ctxt);
 	running_vcpu = __hyp_this_cpu_read(kvm_hyp_running_vcpu);
 
 	if (running_vcpu != vcpu) {
@@ -255,7 +255,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 
 	do {
 		/* Jump in the fire! */
-		exit_code = __guest_enter(vcpu, host_ctxt);
+		exit_code = __guest_enter(vcpu, hyp_ctxt);
 
 		/* And we're baaack! */
 	} while (fixup_guest_exit(vcpu, &exit_code));
diff --git a/arch/arm64/kvm/hyp/vhe/debug-sr.c b/arch/arm64/kvm/hyp/vhe/debug-sr.c
index a564831726e7..7a68e1215277 100644
--- a/arch/arm64/kvm/hyp/vhe/debug-sr.c
+++ b/arch/arm64/kvm/hyp/vhe/debug-sr.c
@@ -15,7 +15,7 @@ void __debug_switch_to_guest(struct kvm_vcpu *vcpu)
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_guest_debug_arch *host_dbg;
 
-	host_ctxt = &__hyp_this_cpu_ptr(kvm_host_data)->host_ctxt;
+	host_ctxt = __hyp_this_cpu_ptr(kvm_host_ctxt);
 	host_dbg = __hyp_this_cpu_ptr(kvm_host_debug_state);
 
 	__debug_switch_to_guest_common(vcpu, host_dbg, host_ctxt);
@@ -26,7 +26,7 @@ void __debug_switch_to_host(struct kvm_vcpu *vcpu)
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_guest_debug_arch *host_dbg;
 
-	host_ctxt = &__hyp_this_cpu_ptr(kvm_host_data)->host_ctxt;
+	host_ctxt = __hyp_this_cpu_ptr(kvm_host_ctxt);
 	host_dbg = __hyp_this_cpu_ptr(kvm_host_debug_state);
 
 	__debug_switch_to_host_common(vcpu, host_dbg, host_ctxt);
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 5c2eaf889cf5..daf40004e93d 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -108,7 +108,7 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 	struct kvm_cpu_context *guest_ctxt;
 	u64 exit_code;
 
-	host_ctxt = &__hyp_this_cpu_ptr(kvm_host_data)->host_ctxt;
+	host_ctxt = __hyp_this_cpu_ptr(kvm_host_ctxt);
 	*__hyp_this_cpu_ptr(kvm_hyp_running_vcpu) = vcpu;
 	guest_ctxt = &vcpu->arch.ctxt;
 
@@ -209,7 +209,7 @@ NOKPROBE_SYMBOL(__hyp_call_panic);
 
 void __noreturn hyp_panic(void)
 {
-	struct kvm_cpu_context *host_ctxt = &__hyp_this_cpu_ptr(kvm_host_data)->host_ctxt;
+	struct kvm_cpu_context *host_ctxt = __hyp_this_cpu_ptr(kvm_host_ctxt);
 	u64 spsr = read_sysreg_el2(SYS_SPSR);
 	u64 elr = read_sysreg_el2(SYS_ELR);
 	u64 par = read_sysreg(par_el1);
diff --git a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
index 996471e4c138..923e10f25b5e 100644
--- a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
@@ -66,7 +66,7 @@ void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu)
 	struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt;
 	struct kvm_cpu_context *host_ctxt;
 
-	host_ctxt = &__hyp_this_cpu_ptr(kvm_host_data)->host_ctxt;
+	host_ctxt = __hyp_this_cpu_ptr(kvm_host_ctxt);
 	__sysreg_save_user_state(host_ctxt);
 
 	/*
@@ -100,7 +100,7 @@ void kvm_vcpu_put_sysregs_vhe(struct kvm_vcpu *vcpu)
 	struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt;
 	struct kvm_cpu_context *host_ctxt;
 
-	host_ctxt = &__hyp_this_cpu_ptr(kvm_host_data)->host_ctxt;
+	host_ctxt = __hyp_this_cpu_ptr(kvm_host_ctxt);
 	deactivate_traps_vhe_put();
 
 	__sysreg_save_el1_state(guest_ctxt);
diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index b5ae3a5d509e..f578192476db 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -31,15 +31,15 @@ static bool kvm_pmu_switch_needed(struct perf_event_attr *attr)
  */
 void kvm_set_pmu_events(u32 set, struct perf_event_attr *attr)
 {
-	struct kvm_host_data *ctx = this_cpu_ptr(&kvm_host_data);
+	struct kvm_pmu_events *pmu = this_cpu_ptr(&kvm_pmu_events);
 
 	if (!kvm_pmu_switch_needed(attr))
 		return;
 
 	if (!attr->exclude_host)
-		ctx->pmu_events.events_host |= set;
+		pmu->events_host |= set;
 	if (!attr->exclude_guest)
-		ctx->pmu_events.events_guest |= set;
+		pmu->events_guest |= set;
 }
 
 /*
@@ -47,10 +47,10 @@ void kvm_set_pmu_events(u32 set, struct perf_event_attr *attr)
  */
 void kvm_clr_pmu_events(u32 clr)
 {
-	struct kvm_host_data *ctx = this_cpu_ptr(&kvm_host_data);
+	struct kvm_pmu_events *pmu = this_cpu_ptr(&kvm_pmu_events);
 
-	ctx->pmu_events.events_host &= ~clr;
-	ctx->pmu_events.events_guest &= ~clr;
+	pmu->events_host &= ~clr;
+	pmu->events_guest &= ~clr;
 }
 
 #define PMEVTYPER_READ_CASE(idx)				\
@@ -163,15 +163,15 @@ static void kvm_vcpu_pmu_disable_el0(unsigned long events)
  */
 void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu)
 {
-	struct kvm_host_data *host;
+	struct kvm_pmu_events *pmu;
 	u32 events_guest, events_host;
 
 	if (!has_vhe())
 		return;
 
-	host = this_cpu_ptr(&kvm_host_data);
-	events_guest = host->pmu_events.events_guest;
-	events_host = host->pmu_events.events_host;
+	pmu = this_cpu_ptr(&kvm_pmu_events);
+	events_guest = pmu->events_guest;
+	events_host = pmu->events_host;
 
 	kvm_vcpu_pmu_enable_el0(events_guest);
 	kvm_vcpu_pmu_disable_el0(events_host);
@@ -182,15 +182,15 @@ void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu)
  */
 void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu)
 {
-	struct kvm_host_data *host;
+	struct kvm_pmu_events *pmu;
 	u32 events_guest, events_host;
 
 	if (!has_vhe())
 		return;
 
-	host = this_cpu_ptr(&kvm_host_data);
-	events_guest = host->pmu_events.events_guest;
-	events_host = host->pmu_events.events_host;
+	pmu = this_cpu_ptr(&kvm_pmu_events);
+	events_guest = pmu->events_guest;
+	events_host = pmu->events_host;
 
 	kvm_vcpu_pmu_enable_el0(events_host);
 	kvm_vcpu_pmu_disable_el0(events_guest);
-- 
2.27.0.389.gc38d7665816-goog

* [PATCH 25/37] KVM: arm64: nVHE: Unify sysreg state saving paths
From: Andrew Scull @ 2020-07-15 18:44 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, kernel-team

The sysreg state is stored in the vcpu for both the guests and the
host, so a common path can be used to save it.
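
The run loop then treats host and guest vcpus symmetrically (a sketch
of the call sites in the diff below):

	__vcpu_save_state(running_vcpu);	/* whichever vcpu was running */
	__vcpu_restore_state(vcpu);		/* whichever vcpu runs next */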

Signed-off-by: Andrew Scull <ascull@google.com>
---
 arch/arm64/kvm/hyp/nvhe/switch.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index f9a35ca02ad1..0cabb8ce7d68 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -155,8 +155,6 @@ static void __kvm_vcpu_switch_to_guest(struct kvm_vcpu *host_vcpu,
 
 	__pmu_switch_to_guest();
 
-	__sysreg_save_state_nvhe(&host_vcpu->arch.ctxt);
-
 	/*
 	 * We must restore the 32-bit state before the sysregs, thanks
 	 * to erratum #852523 (Cortex-A57) or #853709 (Cortex-A72).
@@ -182,7 +180,6 @@ static void __kvm_vcpu_switch_to_host(struct kvm_vcpu *host_vcpu,
 {
 	struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt;
 
-	__sysreg_save_state_nvhe(guest_ctxt);
 	__sysreg32_save_state(vcpu);
 	__timer_disable_traps(vcpu);
 	__hyp_vgic_save_state(vcpu);
@@ -208,7 +205,12 @@ static void __kvm_vcpu_switch_to_host(struct kvm_vcpu *host_vcpu,
 		gic_write_pmr(GIC_PRIO_IRQOFF);
 }
 
-static void __vcpu_switch_to(struct kvm_vcpu *vcpu)
+static void __vcpu_save_state(struct kvm_vcpu *vcpu)
+{
+	__sysreg_save_state_nvhe(&vcpu->arch.ctxt);
+}
+
+static void __vcpu_restore_state(struct kvm_vcpu *vcpu)
 {
 	struct kvm_vcpu *running_vcpu;
 
@@ -248,7 +250,8 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 			return ARM_EXCEPTION_IRQ;
 		}
 
-		__vcpu_switch_to(vcpu);
+		__vcpu_save_state(running_vcpu);
+		__vcpu_restore_state(vcpu);
 	}
 
 	__set_vcpu_arch_workaround_state(vcpu);
-- 
2.27.0.389.gc38d7665816-goog

* [PATCH 26/37] KVM: arm64: nVHE: Unify 32-bit sysreg saving paths
From: Andrew Scull @ 2020-07-15 18:44 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, kernel-team

The 32-bit sysreg saving process can be made common between switches to
a guest and to the host: the host vcpu is not 32-bit, so the saving is
simply skipped for it.
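
To illustrate why this is safe, the saving helper opens with a 32-bit
check; the body here is a sketch from memory of the unchanged helper,
not part of this patch:

	static inline void __sysreg32_save_state(struct kvm_vcpu *vcpu)
	{
		if (!vcpu_el1_is_32bit(vcpu))
			return;	/* the host vcpu is 64-bit: a no-op */

		/* otherwise save the AArch32 sysregs */
	}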

Signed-off-by: Andrew Scull <ascull@google.com>
---
 arch/arm64/kvm/hyp/nvhe/switch.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 0cabb8ce7d68..a23eba0ccd3e 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -180,7 +180,6 @@ static void __kvm_vcpu_switch_to_host(struct kvm_vcpu *host_vcpu,
 {
 	struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt;
 
-	__sysreg32_save_state(vcpu);
 	__timer_disable_traps(vcpu);
 	__hyp_vgic_save_state(vcpu);
 
@@ -208,6 +207,7 @@ static void __kvm_vcpu_switch_to_host(struct kvm_vcpu *host_vcpu,
 static void __vcpu_save_state(struct kvm_vcpu *vcpu)
 {
 	__sysreg_save_state_nvhe(&vcpu->arch.ctxt);
+	__sysreg32_save_state(vcpu);
 }
 
 static void __vcpu_restore_state(struct kvm_vcpu *vcpu)
-- 
2.27.0.389.gc38d7665816-goog

* [PATCH 27/37] KVM: arm64: nVHE: Unify vgic save and restore
From: Andrew Scull @ 2020-07-15 18:44 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, kernel-team

If interrupts are trapped for the vcpu, save and restore the vgic state.
Interrupts are not trapped for the host vcpu, so the vgic will remain
untouched.
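
The guard is a one-line context test at the top of each helper (quoted
from the diff below):

	static void __hyp_vgic_save_state(struct kvm_vcpu *vcpu)
	{
		if (vcpu->arch.ctxt.is_host)
			return;	/* host: interrupts are not trapped */

		if (static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif)) {
			__vgic_v3_save_state(&vcpu->arch.vgic_cpu.vgic_v3);
			__vgic_v3_deactivate_traps(&vcpu->arch.vgic_cpu.vgic_v3);
		}
	}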

Signed-off-by: Andrew Scull <ascull@google.com>
---
 arch/arm64/kvm/hyp/nvhe/switch.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index a23eba0ccd3e..eb10579174f2 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -94,6 +94,9 @@ static void __deactivate_vm(struct kvm_vcpu *vcpu)
 /* Save VGICv3 state on non-VHE systems */
 static void __hyp_vgic_save_state(struct kvm_vcpu *vcpu)
 {
+	if (vcpu->arch.ctxt.is_host)
+		return;
+
 	if (static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif)) {
 		__vgic_v3_save_state(&vcpu->arch.vgic_cpu.vgic_v3);
 		__vgic_v3_deactivate_traps(&vcpu->arch.vgic_cpu.vgic_v3);
@@ -103,6 +106,9 @@ static void __hyp_vgic_save_state(struct kvm_vcpu *vcpu)
 /* Restore VGICv3 state on non_VEH systems */
 static void __hyp_vgic_restore_state(struct kvm_vcpu *vcpu)
 {
+	if (vcpu->arch.ctxt.is_host)
+		return;
+
 	if (static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif)) {
 		__vgic_v3_activate_traps(&vcpu->arch.vgic_cpu.vgic_v3);
 		__vgic_v3_restore_state(&vcpu->arch.vgic_cpu.vgic_v3);
@@ -169,7 +175,6 @@ static void __kvm_vcpu_switch_to_guest(struct kvm_vcpu *host_vcpu,
 	__activate_vm(kern_hyp_va(vcpu->arch.hw_mmu));
 	__activate_traps(vcpu);
 
-	__hyp_vgic_restore_state(vcpu);
 	__timer_enable_traps(vcpu);
 
 	__debug_switch_to_guest(host_vcpu, vcpu);
@@ -181,7 +186,6 @@ static void __kvm_vcpu_switch_to_host(struct kvm_vcpu *host_vcpu,
 	struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt;
 
 	__timer_disable_traps(vcpu);
-	__hyp_vgic_save_state(vcpu);
 
 	__deactivate_traps(host_vcpu, vcpu);
 	__deactivate_vm(vcpu);
@@ -208,6 +212,7 @@ static void __vcpu_save_state(struct kvm_vcpu *vcpu)
 {
 	__sysreg_save_state_nvhe(&vcpu->arch.ctxt);
 	__sysreg32_save_state(vcpu);
+	__hyp_vgic_save_state(vcpu);
 }
 
 static void __vcpu_restore_state(struct kvm_vcpu *vcpu)
@@ -225,6 +230,8 @@ static void __vcpu_restore_state(struct kvm_vcpu *vcpu)
 	else
 		__kvm_vcpu_switch_to_guest(running_vcpu, vcpu);
 
+	__hyp_vgic_restore_state(vcpu);
+
 	*__hyp_this_cpu_ptr(kvm_hyp_running_vcpu) = vcpu;
 }
 
-- 
2.27.0.389.gc38d7665816-goog

* [PATCH 28/37] KVM: arm64: nVHE: Unify fpexc32 saving paths
From: Andrew Scull @ 2020-07-15 18:44 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, kernel-team

The host vcpu is not 32-bit, so the save will be skipped for it, but
32-bit guests will save this state.
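
The conditional moves from the call sites into the helper itself so
callers can invoke it unconditionally (a sketch based on the diff
below; the final save is elided):

	static inline void __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu)
	{
		if (!(vcpu->arch.flags & KVM_ARM64_FP_ENABLED))
			return;

		if (!vcpu_el1_is_32bit(vcpu))
			return;	/* the 64-bit host vcpu bails out here */

		/* otherwise save FPEXC32_EL2 as before */
	}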

Signed-off-by: Andrew Scull <ascull@google.com>
---
 arch/arm64/kvm/hyp/include/hyp/switch.h | 3 +++
 arch/arm64/kvm/hyp/nvhe/switch.c        | 5 ++---
 arch/arm64/kvm/hyp/vhe/switch.c         | 3 +--
 3 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 4b1e51b8bb82..18a8ecc2e3a6 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -52,6 +52,9 @@ static inline bool update_fp_enabled(struct kvm_vcpu *vcpu)
 /* Save the 32-bit only FPSIMD system register state */
 static inline void __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu)
 {
+	if (!(vcpu->arch.flags & KVM_ARM64_FP_ENABLED))
+		return;
+
 	if (!vcpu_el1_is_32bit(vcpu))
 		return;
 
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index eb10579174f2..7fdf1a9ab47e 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -192,9 +192,6 @@ static void __kvm_vcpu_switch_to_host(struct kvm_vcpu *host_vcpu,
 
 	__sysreg_restore_state_nvhe(&host_vcpu->arch.ctxt);
 
-	if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED)
-		__fpsimd_save_fpexc32(vcpu);
-
 	/*
 	 * This must come after restoring the host sysregs, since a non-VHE
 	 * system may enable SPE here and make use of the TTBRs.
@@ -213,6 +210,8 @@ static void __vcpu_save_state(struct kvm_vcpu *vcpu)
 	__sysreg_save_state_nvhe(&vcpu->arch.ctxt);
 	__sysreg32_save_state(vcpu);
 	__hyp_vgic_save_state(vcpu);
+
+	__fpsimd_save_fpexc32(vcpu);
 }
 
 static void __vcpu_restore_state(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index daf40004e93d..3c475cc83a2d 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -148,8 +148,7 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 
 	sysreg_restore_host_state_vhe(host_ctxt);
 
-	if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED)
-		__fpsimd_save_fpexc32(vcpu);
+	__fpsimd_save_fpexc32(vcpu);
 
 	__debug_switch_to_host(vcpu);
 
-- 
2.27.0.389.gc38d7665816-goog

* [PATCH 29/37] KVM: arm64: nVHE: Separate the save and restore of debug state
From: Andrew Scull @ 2020-07-15 18:44 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, kernel-team

The primitives for save and restore are already available. SPE is
always switched, but the debug state is only switched if both vcpus are
using the debug registers, as indicated by KVM_ARM64_DEBUG_DIRTY. The
host vcpu is marked as always using the debug registers.
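
Concretely, the run loop makes the decision once per switch (quoted
from the diff below):

	switch_debug = (vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY) &&
		(running_vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY);

	__vcpu_save_state(running_vcpu, switch_debug);
	__vcpu_restore_state(vcpu, switch_debug);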

Signed-off-by: Andrew Scull <ascull@google.com>
---
 arch/arm64/include/asm/kvm_hyp.h          |  4 +--
 arch/arm64/kvm/hyp/include/hyp/debug-sr.h | 43 +++--------------------
 arch/arm64/kvm/hyp/nvhe/debug-sr.c        | 40 +++++++--------------
 arch/arm64/kvm/hyp/nvhe/switch.c          | 38 +++++++++++++-------
 arch/arm64/kvm/hyp/vhe/debug-sr.c         | 22 ++++++++++--
 5 files changed, 65 insertions(+), 82 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 38d49f9b56c7..d4d366e0d78d 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -96,8 +96,8 @@ void sysreg_restore_guest_state_vhe(struct kvm_cpu_context *ctxt);
 #endif
 
 #ifdef __KVM_NVHE_HYPERVISOR__
-void __debug_switch_to_guest(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *vcpu);
-void __debug_switch_to_host(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *vcpu);
+void __debug_save_spe(struct kvm_vcpu *vcpu);
+void __debug_restore_spe(struct kvm_vcpu *vcpu);
 #else
 void __debug_switch_to_guest(struct kvm_vcpu *vcpu);
 void __debug_switch_to_host(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/hyp/include/hyp/debug-sr.h b/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
index 0d342418d706..b02d6ffb4129 100644
--- a/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
+++ b/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
@@ -88,8 +88,8 @@
 	default:	write_debug(ptr[0], reg, 0);			\
 	}
 
-static void __debug_save_state(struct kvm_guest_debug_arch *dbg,
-			       struct kvm_cpu_context *ctxt)
+static inline void __debug_save_state(struct kvm_guest_debug_arch *dbg,
+				      struct kvm_cpu_context *ctxt)
 {
 	u64 aa64dfr0;
 	int brps, wrps;
@@ -106,8 +106,8 @@ static void __debug_save_state(struct kvm_guest_debug_arch *dbg,
 	ctxt_sys_reg(ctxt, MDCCINT_EL1) = read_sysreg(mdccint_el1);
 }
 
-static void __debug_restore_state(struct kvm_guest_debug_arch *dbg,
-				  struct kvm_cpu_context *ctxt)
+static inline void __debug_restore_state(struct kvm_guest_debug_arch *dbg,
+					 struct kvm_cpu_context *ctxt)
 {
 	u64 aa64dfr0;
 	int brps, wrps;
@@ -124,39 +124,4 @@ static void __debug_restore_state(struct kvm_guest_debug_arch *dbg,
 
 	write_sysreg(ctxt_sys_reg(ctxt, MDCCINT_EL1), mdccint_el1);
 }
-
-static inline void __debug_switch_to_guest_common(struct kvm_vcpu *vcpu,
-						  struct kvm_guest_debug_arch *host_dbg,
-						  struct kvm_cpu_context *host_ctxt)
-{
-	struct kvm_cpu_context *guest_ctxt;
-	struct kvm_guest_debug_arch *guest_dbg;
-
-	if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
-		return;
-
-	guest_ctxt = &vcpu->arch.ctxt;
-	guest_dbg = kern_hyp_va(vcpu->arch.debug_ptr);
-
-	__debug_save_state(host_dbg, host_ctxt);
-	__debug_restore_state(guest_dbg, guest_ctxt);
-}
-
-static inline void __debug_switch_to_host_common(struct kvm_vcpu *vcpu,
-						 struct kvm_guest_debug_arch *host_dbg,
-						 struct kvm_cpu_context *host_ctxt)
-{
-	struct kvm_cpu_context *guest_ctxt;
-	struct kvm_guest_debug_arch *guest_dbg;
-
-	if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
-		return;
-
-	guest_ctxt = &vcpu->arch.ctxt;
-	guest_dbg = kern_hyp_va(vcpu->arch.debug_ptr);
-
-	__debug_save_state(guest_dbg, guest_ctxt);
-	__debug_restore_state(host_dbg, host_ctxt);
-}
-
 #endif /* __ARM64_KVM_HYP_DEBUG_SR_H__ */
diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
index 4bf0eeb41a44..e344be6fa30a 100644
--- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c
+++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
@@ -14,11 +14,16 @@
 #include <asm/kvm_hyp.h>
 #include <asm/kvm_mmu.h>
 
-static void __debug_save_spe(u64 *pmscr_el1)
+void __debug_save_spe(struct kvm_vcpu *vcpu)
 {
 	u64 reg;
+	u64 *pmscr_el1;
+
+	if (!vcpu->arch.ctxt.is_host)
+		return;
 
 	/* Clear pmscr in case of early return */
+	pmscr_el1 = __hyp_this_cpu_ptr(kvm_host_pmscr_el1);
 	*pmscr_el1 = 0;
 
 	/* SPE present on this CPU? */
@@ -46,8 +51,14 @@ static void __debug_save_spe(u64 *pmscr_el1)
 	dsb(nsh);
 }
 
-static void __debug_restore_spe(u64 pmscr_el1)
+void __debug_restore_spe(struct kvm_vcpu *vcpu)
 {
+	u64 pmscr_el1;
+
+	if (!vcpu->arch.ctxt.is_host)
+		return;
+
+	pmscr_el1 = __hyp_this_cpu_read(kvm_host_pmscr_el1);
 	if (!pmscr_el1)
 		return;
 
@@ -58,31 +69,6 @@ static void __debug_restore_spe(u64 pmscr_el1)
 	write_sysreg_s(pmscr_el1, SYS_PMSCR_EL1);
 }
 
-void __debug_switch_to_guest(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *vcpu)
-{
-	struct kvm_cpu_context *host_ctxt;
-	struct kvm_guest_debug_arch *host_dbg;
-
-	host_ctxt = &host_vcpu->arch.ctxt;
-	host_dbg = host_vcpu->arch.debug_ptr;
-
-	/* Disable and flush SPE data generation */
-	__debug_save_spe(__hyp_this_cpu_ptr(kvm_host_pmscr_el1));
-	__debug_switch_to_guest_common(vcpu, host_dbg, host_ctxt);
-}
-
-void __debug_switch_to_host(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *vcpu)
-{
-	struct kvm_cpu_context *host_ctxt;
-	struct kvm_guest_debug_arch *host_dbg;
-
-	host_ctxt = &host_vcpu->arch.ctxt;
-	host_dbg = host_vcpu->arch.debug_ptr;
-
-	__debug_restore_spe(__hyp_this_cpu_read(kvm_host_pmscr_el1));
-	__debug_switch_to_host_common(vcpu, host_dbg, host_ctxt);
-}
-
 u32 __kvm_get_mdcr_el2(void)
 {
 	return read_sysreg(mdcr_el2);
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 7fdf1a9ab47e..25c64392356b 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -4,6 +4,7 @@
  * Author: Marc Zyngier <marc.zyngier@arm.com>
  */
 
+#include <hyp/debug-sr.h>
 #include <hyp/switch.h>
 #include <hyp/sysreg-sr.h>
 
@@ -176,8 +177,6 @@ static void __kvm_vcpu_switch_to_guest(struct kvm_vcpu *host_vcpu,
 	__activate_traps(vcpu);
 
 	__timer_enable_traps(vcpu);
-
-	__debug_switch_to_guest(host_vcpu, vcpu);
 }
 
 static void __kvm_vcpu_switch_to_host(struct kvm_vcpu *host_vcpu,
@@ -192,12 +191,6 @@ static void __kvm_vcpu_switch_to_host(struct kvm_vcpu *host_vcpu,
 
 	__sysreg_restore_state_nvhe(&host_vcpu->arch.ctxt);
 
-	/*
-	 * This must come after restoring the host sysregs, since a non-VHE
-	 * system may enable SPE here and make use of the TTBRs.
-	 */
-	__debug_switch_to_host(host_vcpu, vcpu);
-
 	__pmu_switch_to_host();
 
 	/* Returning to host will clear PSR.I, remask PMR if needed */
@@ -205,16 +198,22 @@ static void __kvm_vcpu_switch_to_host(struct kvm_vcpu *host_vcpu,
 		gic_write_pmr(GIC_PRIO_IRQOFF);
 }
 
-static void __vcpu_save_state(struct kvm_vcpu *vcpu)
+static void __vcpu_save_state(struct kvm_vcpu *vcpu, bool save_debug)
 {
 	__sysreg_save_state_nvhe(&vcpu->arch.ctxt);
 	__sysreg32_save_state(vcpu);
 	__hyp_vgic_save_state(vcpu);
 
 	__fpsimd_save_fpexc32(vcpu);
+
+	__debug_save_spe(vcpu);
+
+	if (save_debug)
+		__debug_save_state(kern_hyp_va(vcpu->arch.debug_ptr),
+				   &vcpu->arch.ctxt);
 }
 
-static void __vcpu_restore_state(struct kvm_vcpu *vcpu)
+static void __vcpu_restore_state(struct kvm_vcpu *vcpu, bool restore_debug)
 {
 	struct kvm_vcpu *running_vcpu;
 
@@ -231,6 +230,16 @@ static void __vcpu_restore_state(struct kvm_vcpu *vcpu)
 
 	__hyp_vgic_restore_state(vcpu);
 
+	/*
+	 * This must come after restoring the sysregs since SPE may make use of
+	 * the TTBRs.
+	 */
+	__debug_restore_spe(vcpu);
+
+	if (restore_debug)
+		__debug_restore_state(kern_hyp_va(vcpu->arch.debug_ptr),
+				      &vcpu->arch.ctxt);
+
 	*__hyp_this_cpu_ptr(kvm_hyp_running_vcpu) = vcpu;
 }
 
@@ -245,6 +254,8 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	running_vcpu = __hyp_this_cpu_read(kvm_hyp_running_vcpu);
 
 	if (running_vcpu != vcpu) {
+		bool switch_debug;
+
 		if (!running_vcpu->arch.ctxt.is_host &&
 		    !vcpu->arch.ctxt.is_host) {
 			/*
@@ -256,8 +267,11 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 			return ARM_EXCEPTION_IRQ;
 		}
 
-		__vcpu_save_state(running_vcpu);
-		__vcpu_restore_state(vcpu);
+		switch_debug = (vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY) &&
+			(running_vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY);
+
+		__vcpu_save_state(running_vcpu, switch_debug);
+		__vcpu_restore_state(vcpu, switch_debug);
 	}
 
 	__set_vcpu_arch_workaround_state(vcpu);
diff --git a/arch/arm64/kvm/hyp/vhe/debug-sr.c b/arch/arm64/kvm/hyp/vhe/debug-sr.c
index 7a68e1215277..a1462a8f16b1 100644
--- a/arch/arm64/kvm/hyp/vhe/debug-sr.c
+++ b/arch/arm64/kvm/hyp/vhe/debug-sr.c
@@ -13,23 +13,41 @@
 void __debug_switch_to_guest(struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpu_context *host_ctxt;
+	struct kvm_cpu_context *guest_ctxt;
 	struct kvm_guest_debug_arch *host_dbg;
+	struct kvm_guest_debug_arch *guest_dbg;
+
+	if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
+		return;
 
 	host_ctxt = __hyp_this_cpu_ptr(kvm_host_ctxt);
 	host_dbg = __hyp_this_cpu_ptr(kvm_host_debug_state);
 
-	__debug_switch_to_guest_common(vcpu, host_dbg, host_ctxt);
+	guest_ctxt = &vcpu->arch.ctxt;
+	guest_dbg = kern_hyp_va(vcpu->arch.debug_ptr);
+
+	__debug_save_state(host_dbg, host_ctxt);
+	__debug_restore_state(guest_dbg, guest_ctxt);
 }
 
 void __debug_switch_to_host(struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpu_context *host_ctxt;
+	struct kvm_cpu_context *guest_ctxt;
 	struct kvm_guest_debug_arch *host_dbg;
+	struct kvm_guest_debug_arch *guest_dbg;
+
+	if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
+		return;
 
 	host_ctxt = __hyp_this_cpu_ptr(kvm_host_ctxt);
 	host_dbg = __hyp_this_cpu_ptr(kvm_host_debug_state);
 
-	__debug_switch_to_host_common(vcpu, host_dbg, host_ctxt);
+	guest_ctxt = &vcpu->arch.ctxt;
+	guest_dbg = kern_hyp_va(vcpu->arch.debug_ptr);
+
+	__debug_save_state(guest_dbg, guest_ctxt);
+	__debug_restore_state(host_dbg, host_ctxt);
 }
 
 u32 __kvm_get_mdcr_el2(void)
-- 
2.27.0.389.gc38d7665816-goog

* [PATCH 30/37] KVM: arm64: nVHE: Remove MMU assumption in speculative AT workaround
From: Andrew Scull @ 2020-07-15 18:44 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, kernel-team

Rather than making an assumption about the state of the host's MMU,
always force it on. This starts reducing trust and dependence on the
host state and sets the stage for a common path for both guests and the
host vcpu.

The EPDx bits must be set for the full duration that the MMU is being
enabled so that no S1 walks can occur if we are enabling an
uninitialized or unused MMU.
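
The restore path then ends up with the following ordering (a sketch of
the diff below: the EPD bits are set, and stay set, across the point
where SCTLR_ELx.M is forced on):

	write_sysreg_el1((ctxt_sys_reg(ctxt, TCR_EL1) |
			  TCR_EPD1_MASK | TCR_EPD0_MASK),
			 SYS_TCR);
	isb();
	write_sysreg_el1(ctxt->sys_regs[SCTLR_EL1] | SCTLR_ELx_M,
			 SYS_SCTLR);
	isb();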

Signed-off-by: Andrew Scull <ascull@google.com>
---
 arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h |  6 ++++--
 arch/arm64/kvm/hyp/nvhe/tlb.c              | 16 ++++++++++++----
 2 files changed, 16 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
index c55b2d17ada8..0c24c922bae8 100644
--- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
+++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
@@ -83,13 +83,15 @@ static inline void __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt)
 	} else	if (!ctxt->is_host) {
 		/*
 		 * Must only be done for guest registers, hence the context
-		 * test. We're coming from the host, so SCTLR.M is already
-		 * set. Pairs with nVHE's __activate_traps().
+		 * test. Pairs with nVHE's __activate_traps().
 		 */
 		write_sysreg_el1((ctxt_sys_reg(ctxt, TCR_EL1) |
 				  TCR_EPD1_MASK | TCR_EPD0_MASK),
 				 SYS_TCR);
 		isb();
+		write_sysreg_el1(ctxt_sys_reg(ctxt, SCTLR_EL1) | SCTLR_ELx_M,
+				 SYS_SCTLR);
+		isb();
 	}
 
 	write_sysreg_el1(ctxt_sys_reg(ctxt, CPACR_EL1),	SYS_CPACR);
diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
index e5f65f0da106..16fa06ff0554 100644
--- a/arch/arm64/kvm/hyp/nvhe/tlb.c
+++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
@@ -10,6 +10,7 @@
 
 struct tlb_inv_context {
 	u64		tcr;
+	u64		sctlr;
 };
 
 static void __tlb_switch_to_guest(struct kvm_s2_mmu *mmu,
@@ -21,14 +22,19 @@ static void __tlb_switch_to_guest(struct kvm_s2_mmu *mmu,
 		/*
 		 * For CPUs that are affected by ARM 1319367, we need to
 		 * avoid a host Stage-1 walk while we have the guest's
-		 * VMID set in the VTTBR in order to invalidate TLBs.
-		 * We're guaranteed that the S1 MMU is enabled, so we can
-		 * simply set the EPD bits to avoid any further TLB fill.
+		 * VMID set in the VTTBR in order to invalidate TLBs. This
+		 * is done by setting the EPD bits in the TCR_EL1 register.
+		 * We also need to prevent TLB allocation from IPA->PA walks,
+		 * so we enable the S1 MMU.
 		 */
 		val = cxt->tcr = read_sysreg_el1(SYS_TCR);
 		val |= TCR_EPD1_MASK | TCR_EPD0_MASK;
 		write_sysreg_el1(val, SYS_TCR);
 		isb();
+		val = cxt->sctlr = read_sysreg_el1(SYS_SCTLR);
+		val |= SCTLR_ELx_M;
+		write_sysreg_el1(val, SYS_SCTLR);
+		isb();
 	}
 
 	/* __load_guest_stage2() includes an ISB for the workaround. */
@@ -43,7 +48,9 @@ static void __tlb_switch_to_host(struct tlb_inv_context *cxt)
 	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
 		/* Ensure write of the host VMID */
 		isb();
-		/* Restore the host's TCR_EL1 */
+		/* Restore the host's SCTLR and then TCR_EL1 */
+		write_sysreg_el1(cxt->sctlr, SYS_SCTLR);
+		isb();
 		write_sysreg_el1(cxt->tcr, SYS_TCR);
 	}
 }
-- 
2.27.0.389.gc38d7665816-goog

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH 31/37] KVM: arm64: Move speculative AT ISBs into context
  2020-07-15 18:44 [PATCH 00/37] Transform the host into a vCPU Andrew Scull
                   ` (29 preceding siblings ...)
  2020-07-15 18:44 ` [PATCH 30/37] KVM: arm64: nVHE: Remove MMU assumption in speculative AT workaround Andrew Scull
@ 2020-07-15 18:44 ` Andrew Scull
  2020-07-15 18:44 ` [PATCH 32/37] KVM: arm64: nVHE: Unify sysreg state restoration paths Andrew Scull
                   ` (5 subsequent siblings)
  36 siblings, 0 replies; 45+ messages in thread
From: Andrew Scull @ 2020-07-15 18:44 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, kernel-team

VHE's workaround requires an ISB between the configuration of the S2
translation and the update of HCR_EL2. It's much easier to see when and
why this is happening if it all happens at the same place in the same
file.

nVHE's workaround does not require an immediate ISB after the
configuration of the S2 translation as the necessary synchronization
happens at a later stage.

Signed-off-by: Andrew Scull <ascull@google.com>
---
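The conditional ISBs use the boot-time alternatives framework so that
unaffected systems pay nothing: the "nop" below is patched to an "isb"
only on CPUs that have the ARM64_WORKAROUND_SPECULATIVE_AT capability
(this is the idiom used throughout the hunks below):

	/* Patched to "isb" at boot on affected CPUs, otherwise a nop. */
	asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));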
 arch/arm64/include/asm/kvm_mmu.h | 7 -------
 arch/arm64/kvm/hyp/nvhe/tlb.c    | 2 +-
 arch/arm64/kvm/hyp/vhe/switch.c  | 1 +
 arch/arm64/kvm/hyp/vhe/tlb.c     | 4 ++--
 4 files changed, 4 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 22157ded04ca..835d3fe2f781 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -596,13 +596,6 @@ static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu)
 {
 	write_sysreg(kern_hyp_va(mmu->kvm)->arch.vtcr, vtcr_el2);
 	write_sysreg(kvm_get_vttbr(mmu), vttbr_el2);
-
-	/*
-	 * ARM errata 1165522 and 1530923 require the actual execution of the
-	 * above before we can switch to the EL1/EL0 translation regime used by
-	 * the guest.
-	 */
-	asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));
 }
 
 #endif /* __ASSEMBLY__ */
diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
index 16fa06ff0554..2a0de9d67f00 100644
--- a/arch/arm64/kvm/hyp/nvhe/tlb.c
+++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
@@ -39,7 +39,7 @@ static void __tlb_switch_to_guest(struct kvm_s2_mmu *mmu,
 
-	/* __load_guest_stage2() includes an ISB for the workaround. */
+	/* Ensure the stage 2 configuration takes effect before the TLBI. */
 	__load_guest_stage2(mmu);
-	asm(ALTERNATIVE("isb", "nop", ARM64_WORKAROUND_SPECULATIVE_AT));
+	isb();
 }
 
 static void __tlb_switch_to_host(struct tlb_inv_context *cxt)
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 3c475cc83a2d..04ee01774ea2 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -126,6 +126,7 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 	 * (among other things).
 	 */
 	__activate_vm(vcpu->arch.hw_mmu);
+	asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));
 	__activate_traps(vcpu);
 
 	sysreg_restore_guest_state_vhe(guest_ctxt);
diff --git a/arch/arm64/kvm/hyp/vhe/tlb.c b/arch/arm64/kvm/hyp/vhe/tlb.c
index fd7895945bbc..b9abd17c1180 100644
--- a/arch/arm64/kvm/hyp/vhe/tlb.c
+++ b/arch/arm64/kvm/hyp/vhe/tlb.c
@@ -50,10 +50,10 @@ static void __tlb_switch_to_guest(struct kvm_s2_mmu *mmu,
 	 *
 	 * ARM erratum 1165522 requires some special handling (again),
 	 * as we need to make sure both stages of translation are in
-	 * place before clearing TGE. __load_guest_stage2() already
-	 * has an ISB in order to deal with this.
+	 * place before clearing TGE.
 	 */
 	__load_guest_stage2(mmu);
+	asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));
 	val = read_sysreg(hcr_el2);
 	val &= ~HCR_TGE;
 	write_sysreg(val, hcr_el2);
-- 
2.27.0.389.gc38d7665816-goog

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH 32/37] KVM: arm64: nVHE: Unify sysreg state restoration paths
  2020-07-15 18:44 [PATCH 00/37] Transform the host into a vCPU Andrew Scull
                   ` (30 preceding siblings ...)
  2020-07-15 18:44 ` [PATCH 31/37] KVM: arm64: Move speculative AT ISBs into context Andrew Scull
@ 2020-07-15 18:44 ` Andrew Scull
  2020-07-15 18:44 ` [PATCH 33/37] KVM: arm64: Remove __activate_vm wrapper Andrew Scull
                   ` (4 subsequent siblings)
  36 siblings, 0 replies; 45+ messages in thread
From: Andrew Scull @ 2020-07-15 18:44 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, kernel-team

These sets of state are moved together as nVHE's speculative AT
workaround depends on their correct interaction. As a consequence of
this change, the workaround is much simpler as both the host and the
guests now share the same code path.

Signed-off-by: Andrew Scull <ascull@google.com>
---
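The resulting nVHE restoration order is then the same for every vcpu,
host or guest (a sketch of the flow introduced below):

	__restore_stage2(vcpu);			/* S2 first, for the AT errata */
	__sysreg32_restore_state(vcpu);		/* 32-bit state before sysregs */
	__sysreg_restore_state_nvhe(&vcpu->arch.ctxt);

	if (vcpu->arch.ctxt.is_host)
		__deactivate_traps(vcpu, running_vcpu);
	else
		__activate_traps(vcpu);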
 arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 48 ++++-------
 arch/arm64/kvm/hyp/nvhe/switch.c           | 96 +++++++++-------------
 2 files changed, 54 insertions(+), 90 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
index 0c24c922bae8..cffe7dd3df41 100644
--- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
+++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
@@ -75,25 +75,6 @@ static inline void __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt)
 {
 	write_sysreg(ctxt_sys_reg(ctxt, MPIDR_EL1),	vmpidr_el2);
 	write_sysreg(ctxt_sys_reg(ctxt, CSSELR_EL1),	csselr_el1);
-
-	if (has_vhe() ||
-	    !cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
-		write_sysreg_el1(ctxt_sys_reg(ctxt, SCTLR_EL1),	SYS_SCTLR);
-		write_sysreg_el1(ctxt_sys_reg(ctxt, TCR_EL1),	SYS_TCR);
-	} else	if (!ctxt->is_host) {
-		/*
-		 * Must only be done for guest registers, hence the context
-		 * test. Pairs with nVHE's __activate_traps().
-		 */
-		write_sysreg_el1((ctxt_sys_reg(ctxt, TCR_EL1) |
-				  TCR_EPD1_MASK | TCR_EPD0_MASK),
-				 SYS_TCR);
-		isb();
-		write_sysreg_el1(ctxt_sys_reg(ctxt, SCTLR_EL1) | SCTLR_ELx_M,
-				 SYS_SCTLR);
-		isb();
-	}
-
 	write_sysreg_el1(ctxt_sys_reg(ctxt, CPACR_EL1),	SYS_CPACR);
 	write_sysreg_el1(ctxt_sys_reg(ctxt, TTBR0_EL1),	SYS_TTBR0);
 	write_sysreg_el1(ctxt_sys_reg(ctxt, TTBR1_EL1),	SYS_TTBR1);
@@ -109,23 +90,24 @@ static inline void __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt)
 	write_sysreg(ctxt_sys_reg(ctxt, PAR_EL1),	par_el1);
 	write_sysreg(ctxt_sys_reg(ctxt, TPIDR_EL1),	tpidr_el1);
 
-	if (!has_vhe() &&
-	    cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT) &&
-	    ctxt->is_host) {
+	if (!has_vhe()) {
 		/*
-		 * Must only be done for host registers, hence the context
-		 * test. Pairs with nVHE's __deactivate_traps().
+		 * For CPUs that are affected by the speculative AT errata,
+		 * ensure the vcpu's stage 1 and stage 2 translations have been
+		 * configured before updating TCR_EL1 and SCTLR_EL1 to
+		 * potentially allow any speculative walks to occur. The stage
+		 * 2 will have already been configured by the nVHE switching
+		 * code before calling this function.
 		 */
-		isb();
-		/*
-		 * At this stage, and thanks to the above isb(), S2 is
-		 * deconfigured and disabled. We can now restore the host's
-		 * S1 configuration: SCTLR, and only then TCR.
-		 */
-		write_sysreg_el1(ctxt_sys_reg(ctxt, SCTLR_EL1),	SYS_SCTLR);
-		isb();
-		write_sysreg_el1(ctxt_sys_reg(ctxt, TCR_EL1),	SYS_TCR);
+		asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));
+	}
+
+	write_sysreg_el1(ctxt_sys_reg(ctxt, SCTLR_EL1),	SYS_SCTLR);
+	if (!has_vhe()) {
+		/* Ensure S1 MMU state is restored before allowing S1 walks. */
+		asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));
 	}
+	write_sysreg_el1(ctxt_sys_reg(ctxt, TCR_EL1),	SYS_TCR);
 
 	write_sysreg(ctxt_sys_reg(ctxt, SP_EL1),	sp_el1);
 	write_sysreg_el1(ctxt_sys_reg(ctxt, ELR_EL1),	SYS_ELR);
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 25c64392356b..c87b0a709d35 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -43,43 +43,12 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
 	}
 
 	write_sysreg(val, cptr_el2);
-
-	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
-		struct kvm_cpu_context *ctxt = &vcpu->arch.ctxt;
-
-		isb();
-		/*
-		 * At this stage, and thanks to the above isb(), S2 is
-		 * configured and enabled. We can now restore the guest's S1
-		 * configuration: SCTLR, and only then TCR.
-		 */
-		write_sysreg_el1(ctxt_sys_reg(ctxt, SCTLR_EL1),	SYS_SCTLR);
-		isb();
-		write_sysreg_el1(ctxt_sys_reg(ctxt, TCR_EL1),	SYS_TCR);
-	}
 }
 
 static void __deactivate_traps(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *vcpu)
 {
 	___deactivate_traps(vcpu);
 
-	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
-		u64 val;
-
-		/*
-		 * Set the TCR and SCTLR registers in the exact opposite
-		 * sequence as __activate_traps (first prevent walks,
-		 * then force the MMU on). A generous sprinkling of isb()
-		 * ensure that things happen in this exact order.
-		 */
-		val = read_sysreg_el1(SYS_TCR);
-		write_sysreg_el1(val | TCR_EPD1_MASK | TCR_EPD0_MASK, SYS_TCR);
-		isb();
-		val = read_sysreg_el1(SYS_SCTLR);
-		write_sysreg_el1(val | SCTLR_ELx_M, SYS_SCTLR);
-		isb();
-	}
-
 	__deactivate_traps_common();
 
 	write_sysreg(host_vcpu->arch.mdcr_el2, mdcr_el2);
@@ -87,9 +56,12 @@ static void __deactivate_traps(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *vcpu
 	write_sysreg(CPTR_EL2_DEFAULT, cptr_el2);
 }
 
-static void __deactivate_vm(struct kvm_vcpu *vcpu)
+static void __restore_stage2(struct kvm_vcpu *vcpu)
 {
-	write_sysreg(0, vttbr_el2);
+	if (vcpu->arch.hcr_el2 & HCR_VM)
+		__activate_vm(kern_hyp_va(vcpu->arch.hw_mmu));
+	else
+		write_sysreg(0, vttbr_el2);
 }
 
 /* Save VGICv3 state on non-VHE systems */
@@ -147,8 +119,6 @@ static void __pmu_switch_to_host(void)
 static void __kvm_vcpu_switch_to_guest(struct kvm_vcpu *host_vcpu,
 				       struct kvm_vcpu *vcpu)
 {
-	struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt;
-
 	/*
 	 * Having IRQs masked via PMR when entering the guest means the GIC
 	 * will not signal the CPU of interrupts of lower priority, and the
@@ -162,35 +132,14 @@ static void __kvm_vcpu_switch_to_guest(struct kvm_vcpu *host_vcpu,
 
 	__pmu_switch_to_guest();
 
-	/*
-	 * We must restore the 32-bit state before the sysregs, thanks
-	 * to erratum #852523 (Cortex-A57) or #853709 (Cortex-A72).
-	 *
-	 * Also, and in order to be able to deal with erratum #1319537 (A57)
-	 * and #1319367 (A72), we must ensure that all VM-related sysreg are
-	 * restored before we enable S2 translation.
-	 */
-	__sysreg32_restore_state(vcpu);
-	__sysreg_restore_state_nvhe(guest_ctxt);
-
-	__activate_vm(kern_hyp_va(vcpu->arch.hw_mmu));
-	__activate_traps(vcpu);
-
 	__timer_enable_traps(vcpu);
 }
 
 static void __kvm_vcpu_switch_to_host(struct kvm_vcpu *host_vcpu,
 				      struct kvm_vcpu *vcpu)
 {
-	struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt;
-
 	__timer_disable_traps(vcpu);
 
-	__deactivate_traps(host_vcpu, vcpu);
-	__deactivate_vm(vcpu);
-
-	__sysreg_restore_state_nvhe(&host_vcpu->arch.ctxt);
-
 	__pmu_switch_to_host();
 
 	/* Returning to host will clear PSR.I, remask PMR if needed */
@@ -228,6 +177,39 @@ static void __vcpu_restore_state(struct kvm_vcpu *vcpu, bool restore_debug)
 	else
 		__kvm_vcpu_switch_to_guest(running_vcpu, vcpu);
 
+	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
+		u64 val;
+
+		/*
+		 * Set the TCR and SCTLR registers to prevent any stage 1 or 2
+		 * page table walks or TLB allocations. A generous sprinkling
+		 * of isb() ensures that things happen in this exact order.
+		 */
+		val = read_sysreg_el1(SYS_TCR);
+		write_sysreg_el1(val | TCR_EPD1_MASK | TCR_EPD0_MASK, SYS_TCR);
+		isb();
+		val = read_sysreg_el1(SYS_SCTLR);
+		write_sysreg_el1(val | SCTLR_ELx_M, SYS_SCTLR);
+		isb();
+	}
+
+	/*
+	 * We must restore the 32-bit state before the sysregs, thanks to
+	 * erratum #852523 (Cortex-A57) or #853709 (Cortex-A72).
+	 *
+	 * Also, and in order to deal with the speculative AT errata, we must
+	 * ensure the S2 translation is restored before allowing page table
+	 * walks and TLB allocations when the sysregs are restored.
+	 */
+	__restore_stage2(vcpu);
+	__sysreg32_restore_state(vcpu);
+	__sysreg_restore_state_nvhe(&vcpu->arch.ctxt);
+
+	if (vcpu->arch.ctxt.is_host)
+		__deactivate_traps(vcpu, running_vcpu);
+	else
+		__activate_traps(vcpu);
+
 	__hyp_vgic_restore_state(vcpu);
 
 	/*
@@ -300,7 +282,7 @@ void __noreturn hyp_panic(void)
 	if (vcpu != host_vcpu) {
 		__timer_disable_traps(vcpu);
 		__deactivate_traps(host_vcpu, vcpu);
-		__deactivate_vm(vcpu);
+		__restore_stage2(host_vcpu);
 		__sysreg_restore_state_nvhe(&host_vcpu->arch.ctxt);
 	}
 
-- 
2.27.0.389.gc38d7665816-goog

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH 33/37] KVM: arm64: Remove __activate_vm wrapper
  2020-07-15 18:44 [PATCH 00/37] Transform the host into a vCPU Andrew Scull
                   ` (31 preceding siblings ...)
  2020-07-15 18:44 ` [PATCH 32/37] KVM: arm64: nVHE: Unify sysreg state restoration paths Andrew Scull
@ 2020-07-15 18:44 ` Andrew Scull
  2020-07-15 18:44 ` [PATCH 34/37] KVM: arm64: nVHE: Unify timer restore paths Andrew Scull
                   ` (3 subsequent siblings)
  36 siblings, 0 replies; 45+ messages in thread
From: Andrew Scull @ 2020-07-15 18:44 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, kernel-team

This wrapper serves no useful function and has a misleading name: it
simply calls __load_guest_stage2() and does not touch HCR_EL2.VM.

Signed-off-by: Andrew Scull <ascull@google.com>
---
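For context, the wrapper being removed is a pure passthrough, as the
first hunk below shows:

	static inline void __activate_vm(struct kvm_s2_mmu *mmu)
	{
		__load_guest_stage2(mmu);
	}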
 arch/arm64/kvm/hyp/include/hyp/switch.h |  5 -----
 arch/arm64/kvm/hyp/nvhe/switch.c        |  2 +-
 arch/arm64/kvm/hyp/vhe/switch.c         | 10 +++++-----
 3 files changed, 6 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 18a8ecc2e3a6..84fd6b0601b2 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -127,11 +127,6 @@ static inline void ___deactivate_traps(struct kvm_vcpu *vcpu)
 	}
 }
 
-static inline void __activate_vm(struct kvm_s2_mmu *mmu)
-{
-	__load_guest_stage2(mmu);
-}
-
 static inline bool __translate_far_to_hpfar(u64 far, u64 *hpfar)
 {
 	u64 par, tmp;
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index c87b0a709d35..fa90fc776374 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -59,7 +59,7 @@ static void __deactivate_traps(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *vcpu
 static void __restore_stage2(struct kvm_vcpu *vcpu)
 {
 	if (vcpu->arch.hcr_el2 & HCR_VM)
-		__activate_vm(kern_hyp_va(vcpu->arch.hw_mmu));
+		__load_guest_stage2(kern_hyp_va(vcpu->arch.hw_mmu));
 	else
 		write_sysreg(0, vttbr_el2);
 }
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 04ee01774ea2..bc372629e1c1 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -120,12 +120,12 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 	 * HCR_EL2.TGE.
 	 *
 	 * We have already configured the guest's stage 1 translation in
-	 * kvm_vcpu_load_sysregs_vhe above.  We must now call __activate_vm
-	 * before __activate_traps, because __activate_vm configures
-	 * stage 2 translation, and __activate_traps clear HCR_EL2.TGE
-	 * (among other things).
+	 * kvm_vcpu_load_sysregs_vhe above.  We must now call
+	 * __load_guest_stage2 before __activate_traps, because
+	 * __load_guest_stage2 configures stage 2 translation, and
+	 * __activate_traps clears HCR_EL2.TGE (among other things).
 	 */
-	__activate_vm(vcpu->arch.hw_mmu);
+	__load_guest_stage2(vcpu->arch.hw_mmu);
 	asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));
 	__activate_traps(vcpu);
 
-- 
2.27.0.389.gc38d7665816-goog

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH 34/37] KVM: arm64: nVHE: Unify timer restore paths
  2020-07-15 18:44 [PATCH 00/37] Transform the host into a vCPU Andrew Scull
                   ` (32 preceding siblings ...)
  2020-07-15 18:44 ` [PATCH 33/37] KVM: arm64: Remove __activate_vm wrapper Andrew Scull
@ 2020-07-15 18:44 ` Andrew Scull
  2020-07-15 18:44 ` [PATCH 35/37] KVM: arm64: nVHE: Unify PMU event restoration paths Andrew Scull
                   ` (2 subsequent siblings)
  36 siblings, 0 replies; 45+ messages in thread
From: Andrew Scull @ 2020-07-15 18:44 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, kernel-team

Different actions are taken depending on whether the vcpu is the host or
not, but this choice could easily be switched to another property of the
vcpu if required.

Signed-off-by: Andrew Scull <ascull@google.com>
---
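The unified helper leaves cnthctl_el2 configured as follows (a summary
of the bits used in the hunk below):

	host:  CNTHCTL_EL1PCTEN and CNTHCTL_EL1PCEN set
	       -> EL1 physical counter and timer accesses do not trap
	guest: CNTHCTL_EL1PCTEN set, CNTHCTL_EL1PCEN clear
	       -> physical counter reads are allowed, physical timer
	          accesses trap to hyp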
 arch/arm64/include/asm/kvm_hyp.h   |  3 +--
 arch/arm64/kvm/hyp/nvhe/switch.c   |  7 ++----
 arch/arm64/kvm/hyp/nvhe/timer-sr.c | 35 +++++++++++-------------------
 3 files changed, 16 insertions(+), 29 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index d4d366e0d78d..fdacb26ac475 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -81,8 +81,7 @@ void __vgic_v3_restore_aprs(struct vgic_v3_cpu_if *cpu_if);
 int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu);
 
 #ifdef __KVM_NVHE_HYPERVISOR__
-void __timer_enable_traps(struct kvm_vcpu *vcpu);
-void __timer_disable_traps(struct kvm_vcpu *vcpu);
+void __timer_restore_traps(struct kvm_vcpu *vcpu);
 #endif
 
 #ifdef __KVM_NVHE_HYPERVISOR__
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index fa90fc776374..05f1cf7ee9e7 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -131,15 +131,11 @@ static void __kvm_vcpu_switch_to_guest(struct kvm_vcpu *host_vcpu,
 	}
 
 	__pmu_switch_to_guest();
-
-	__timer_enable_traps(vcpu);
 }
 
 static void __kvm_vcpu_switch_to_host(struct kvm_vcpu *host_vcpu,
 				      struct kvm_vcpu *vcpu)
 {
-	__timer_disable_traps(vcpu);
-
 	__pmu_switch_to_host();
 
 	/* Returning to host will clear PSR.I, remask PMR if needed */
@@ -211,6 +207,7 @@ static void __vcpu_restore_state(struct kvm_vcpu *vcpu, bool restore_debug)
 		__activate_traps(vcpu);
 
 	__hyp_vgic_restore_state(vcpu);
+	__timer_restore_traps(vcpu);
 
 	/*
 	 * This must come after restoring the sysregs since SPE may make use of
@@ -280,7 +277,7 @@ void __noreturn hyp_panic(void)
 	unsigned long str_va;
 
 	if (vcpu != host_vcpu) {
-		__timer_disable_traps(vcpu);
+		__timer_restore_traps(host_vcpu);
 		__deactivate_traps(host_vcpu, vcpu);
 		__restore_stage2(host_vcpu);
 		__sysreg_restore_state_nvhe(&host_vcpu->arch.ctxt);
diff --git a/arch/arm64/kvm/hyp/nvhe/timer-sr.c b/arch/arm64/kvm/hyp/nvhe/timer-sr.c
index 9072e71693ba..914d2624467d 100644
--- a/arch/arm64/kvm/hyp/nvhe/timer-sr.c
+++ b/arch/arm64/kvm/hyp/nvhe/timer-sr.c
@@ -19,30 +19,21 @@ void __kvm_timer_set_cntvoff(u64 cntvoff)
  * Should only be called on non-VHE systems.
  * VHE systems use EL2 timers and configure EL1 timers in kvm_timer_init_vhe().
  */
-void __timer_disable_traps(struct kvm_vcpu *vcpu)
+void __timer_restore_traps(struct kvm_vcpu *vcpu)
 {
-	u64 val;
+	u64 val = read_sysreg(cnthctl_el2);
 
-	/* Allow physical timer/counter access for the host */
-	val = read_sysreg(cnthctl_el2);
-	val |= CNTHCTL_EL1PCTEN | CNTHCTL_EL1PCEN;
-	write_sysreg(val, cnthctl_el2);
-}
-
-/*
- * Should only be called on non-VHE systems.
- * VHE systems use EL2 timers and configure EL1 timers in kvm_timer_init_vhe().
- */
-void __timer_enable_traps(struct kvm_vcpu *vcpu)
-{
-	u64 val;
+	if (vcpu->arch.ctxt.is_host) {
+		/* Allow physical timer/counter access for the host */
+		val |= CNTHCTL_EL1PCTEN | CNTHCTL_EL1PCEN;
+	} else {
+		/*
+		 * Disallow physical timer access for the guest
+		 * Physical counter access is allowed
+		 */
+		val &= ~CNTHCTL_EL1PCEN;
+		val |= CNTHCTL_EL1PCTEN;
+	}
 
-	/*
-	 * Disallow physical timer access for the guest
-	 * Physical counter access is allowed
-	 */
-	val = read_sysreg(cnthctl_el2);
-	val &= ~CNTHCTL_EL1PCEN;
-	val |= CNTHCTL_EL1PCTEN;
 	write_sysreg(val, cnthctl_el2);
 }
-- 
2.27.0.389.gc38d7665816-goog

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH 35/37] KVM: arm64: nVHE: Unify PMU event restoration paths
  2020-07-15 18:44 [PATCH 00/37] Transform the host into a vCPU Andrew Scull
                   ` (33 preceding siblings ...)
  2020-07-15 18:44 ` [PATCH 34/37] KVM: arm64: nVHE: Unify timer restore paths Andrew Scull
@ 2020-07-15 18:44 ` Andrew Scull
  2020-07-15 18:44 ` [PATCH 36/37] KVM: arm64: nVHE: Unify GIC PMR " Andrew Scull
  2020-07-15 18:44 ` [PATCH 37/37] KVM: arm64: Separate save and restore of vcpu trap state Andrew Scull
  36 siblings, 0 replies; 45+ messages in thread
From: Andrew Scull @ 2020-07-15 18:44 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, kernel-team

Depending on whether the vcpu is the host or not, the appropriate PMU
event bits are cleared and set in a single unified restoration path.

Signed-off-by: Andrew Scull <ascull@google.com>
---
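Note that, unlike the old paths, the two writes are now unconditional;
writing a zero mask to pmcntenclr_el0 or pmcntenset_el0 affects no
counters, so the old "only write if non-zero" guards are not needed:

	write_sysreg(clr, pmcntenclr_el0);	/* disable the outgoing events */
	write_sysreg(set, pmcntenset_el0);	/* enable the incoming events */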
 arch/arm64/kvm/hyp/nvhe/switch.c | 42 ++++++++++++--------------------
 1 file changed, 15 insertions(+), 27 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 05f1cf7ee9e7..d37386b8ded8 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -88,32 +88,22 @@ static void __hyp_vgic_restore_state(struct kvm_vcpu *vcpu)
 	}
 }
 
-/**
- * Disable host events, enable guest events
- */
-static void __pmu_switch_to_guest(void)
+static void __pmu_restore(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu_events *pmu = __hyp_this_cpu_ptr(kvm_pmu_events);
+	u32 clr;
+	u32 set;
+
+	if (vcpu->arch.ctxt.is_host) {
+		clr = pmu->events_guest;
+		set = pmu->events_host;
+	} else {
+		clr = pmu->events_host;
+		set = pmu->events_guest;
+	}
 
-	if (pmu->events_host)
-		write_sysreg(pmu->events_host, pmcntenclr_el0);
-
-	if (pmu->events_guest)
-		write_sysreg(pmu->events_guest, pmcntenset_el0);
-}
-
-/**
- * Disable guest events, enable host events
- */
-static void __pmu_switch_to_host(void)
-{
-	struct kvm_pmu_events *pmu = __hyp_this_cpu_ptr(kvm_pmu_events);
-
-	if (pmu->events_guest)
-		write_sysreg(pmu->events_guest, pmcntenclr_el0);
-
-	if (pmu->events_host)
-		write_sysreg(pmu->events_host, pmcntenset_el0);
+	write_sysreg(clr, pmcntenclr_el0);
+	write_sysreg(set, pmcntenset_el0);
 }
 
 static void __kvm_vcpu_switch_to_guest(struct kvm_vcpu *host_vcpu,
@@ -129,15 +119,11 @@ static void __kvm_vcpu_switch_to_guest(struct kvm_vcpu *host_vcpu,
 		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
 		pmr_sync();
 	}
-
-	__pmu_switch_to_guest();
 }
 
 static void __kvm_vcpu_switch_to_host(struct kvm_vcpu *host_vcpu,
 				      struct kvm_vcpu *vcpu)
 {
-	__pmu_switch_to_host();
-
 	/* Returning to host will clear PSR.I, remask PMR if needed */
 	if (system_uses_irq_prio_masking())
 		gic_write_pmr(GIC_PRIO_IRQOFF);
@@ -219,6 +205,8 @@ static void __vcpu_restore_state(struct kvm_vcpu *vcpu, bool restore_debug)
 		__debug_restore_state(kern_hyp_va(vcpu->arch.debug_ptr),
 				      &vcpu->arch.ctxt);
 
+	__pmu_restore(vcpu);
+
 	*__hyp_this_cpu_ptr(kvm_hyp_running_vcpu) = vcpu;
 }
 
-- 
2.27.0.389.gc38d7665816-goog

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH 36/37] KVM: arm64: nVHE: Unify GIC PMR restoration paths
  2020-07-15 18:44 [PATCH 00/37] Transform the host into a vCPU Andrew Scull
                   ` (34 preceding siblings ...)
  2020-07-15 18:44 ` [PATCH 35/37] KVM: arm64: nVHE: Unify PMU event restoration paths Andrew Scull
@ 2020-07-15 18:44 ` Andrew Scull
  2020-07-15 18:44 ` [PATCH 37/37] KVM: arm64: Separate save and restore of vcpu trap state Andrew Scull
  36 siblings, 0 replies; 45+ messages in thread
From: Andrew Scull @ 2020-07-15 18:44 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, kernel-team

Different actions are taken depending on whether the vcpu is the host or
not, so the separate paths are merged into a single PMR restore helper.

Signed-off-by: Andrew Scull <ascull@google.com>
---
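The two cases collapse into a single helper (a sketch of the logic
introduced below):

	host:  gic_write_pmr(GIC_PRIO_IRQOFF);
	       /* PSR.I is cleared on the return to the host */
	guest: gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
	       pmr_sync();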
 arch/arm64/kvm/hyp/nvhe/switch.c | 50 ++++++++++++++------------------
 1 file changed, 22 insertions(+), 28 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index d37386b8ded8..260c5cbb6717 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -88,6 +88,26 @@ static void __hyp_vgic_restore_state(struct kvm_vcpu *vcpu)
 	}
 }
 
+static void __hyp_gic_pmr_restore(struct kvm_vcpu *vcpu)
+{
+	if (!system_uses_irq_prio_masking())
+		return;
+
+	if (vcpu->arch.ctxt.is_host) {
+		/* Returning to host will clear PSR.I, remask PMR if needed */
+		gic_write_pmr(GIC_PRIO_IRQOFF);
+	} else {
+		/*
+		 * Having IRQs masked via PMR when entering the guest means the
+		 * GIC will not signal the CPU of interrupts of lower priority,
+		 * and the only way to get out will be via guest exceptions.
+		 * Naturally, we want to avoid this.
+		 */
+		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
+		pmr_sync();
+	}
+}
+
 static void __pmu_restore(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu_events *pmu = __hyp_this_cpu_ptr(kvm_pmu_events);
@@ -106,29 +126,6 @@ static void __pmu_restore(struct kvm_vcpu *vcpu)
 	write_sysreg(set, pmcntenset_el0);
 }
 
-static void __kvm_vcpu_switch_to_guest(struct kvm_vcpu *host_vcpu,
-				       struct kvm_vcpu *vcpu)
-{
-	/*
-	 * Having IRQs masked via PMR when entering the guest means the GIC
-	 * will not signal the CPU of interrupts of lower priority, and the
-	 * only way to get out will be via guest exceptions.
-	 * Naturally, we want to avoid this.
-	 */
-	if (system_uses_irq_prio_masking()) {
-		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
-		pmr_sync();
-	}
-}
-
-static void __kvm_vcpu_switch_to_host(struct kvm_vcpu *host_vcpu,
-				      struct kvm_vcpu *vcpu)
-{
-	/* Returning to host will clear PSR.I, remask PMR if needed */
-	if (system_uses_irq_prio_masking())
-		gic_write_pmr(GIC_PRIO_IRQOFF);
-}
-
 static void __vcpu_save_state(struct kvm_vcpu *vcpu, bool save_debug)
 {
 	__sysreg_save_state_nvhe(&vcpu->arch.ctxt);
@@ -154,11 +151,6 @@ static void __vcpu_restore_state(struct kvm_vcpu *vcpu, bool restore_debug)
 	 */
 	running_vcpu = __hyp_this_cpu_read(kvm_hyp_running_vcpu);
 
-	if (vcpu->arch.ctxt.is_host)
-		__kvm_vcpu_switch_to_host(vcpu, running_vcpu);
-	else
-		__kvm_vcpu_switch_to_guest(running_vcpu, vcpu);
-
 	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
 		u64 val;
 
@@ -207,6 +199,8 @@ static void __vcpu_restore_state(struct kvm_vcpu *vcpu, bool restore_debug)
 
 	__pmu_restore(vcpu);
 
+	__hyp_gic_pmr_restore(vcpu);
+
 	*__hyp_this_cpu_ptr(kvm_hyp_running_vcpu) = vcpu;
 }
 
-- 
2.27.0.389.gc38d7665816-goog

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH 37/37] KVM: arm64: Separate save and restore of vcpu trap state
  2020-07-15 18:44 [PATCH 00/37] Transform the host into a vCPU Andrew Scull
                   ` (35 preceding siblings ...)
  2020-07-15 18:44 ` [PATCH 36/37] KVM: arm64: nVHE: Unify GIC PMR " Andrew Scull
@ 2020-07-15 18:44 ` Andrew Scull
  36 siblings, 0 replies; 45+ messages in thread
From: Andrew Scull @ 2020-07-15 18:44 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, kernel-team

Pending virtual aborts are preserved during the save phase, and
restoration enables or disables traps depending on whether the host or
a guest is being restored.

Signed-off-by: Andrew Scull <ascull@google.com>
---
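The resulting pairing mirrors the rest of the series (sketch):

	save:    __save_traps(vcpu);	/* preserve a pending virtual abort */
	restore: __restore_traps(vcpu);	/* __deactivate_traps() for the host,
					   __activate_traps() for guests */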
 arch/arm64/kvm/hyp/include/hyp/switch.h |  2 +-
 arch/arm64/kvm/hyp/nvhe/switch.c        | 28 +++++++++++--------------
 arch/arm64/kvm/hyp/vhe/switch.c         |  2 +-
 3 files changed, 14 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 84fd6b0601b2..65a29d029c53 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -113,7 +113,7 @@ static inline void ___activate_traps(struct kvm_vcpu *vcpu)
 		write_sysreg_s(vcpu->arch.vsesr_el2, SYS_VSESR_EL2);
 }
 
-static inline void ___deactivate_traps(struct kvm_vcpu *vcpu)
+static inline void __save_traps(struct kvm_vcpu *vcpu)
 {
 	/*
 	 * If we pended a virtual abort, preserve it until it gets
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 260c5cbb6717..0f7670dabf50 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -45,10 +45,8 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
 	write_sysreg(val, cptr_el2);
 }
 
-static void __deactivate_traps(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *vcpu)
+static void __deactivate_traps(struct kvm_vcpu *host_vcpu)
 {
-	___deactivate_traps(vcpu);
-
 	__deactivate_traps_common();
 
 	write_sysreg(host_vcpu->arch.mdcr_el2, mdcr_el2);
@@ -56,6 +54,14 @@ static void __deactivate_traps(struct kvm_vcpu *host_vcpu, struct kvm_vcpu *vcpu
 	write_sysreg(CPTR_EL2_DEFAULT, cptr_el2);
 }
 
+static void __restore_traps(struct kvm_vcpu *vcpu)
+{
+	if (vcpu->arch.ctxt.is_host)
+		__deactivate_traps(vcpu);
+	else
+		__activate_traps(vcpu);
+}
+
 static void __restore_stage2(struct kvm_vcpu *vcpu)
 {
 	if (vcpu->arch.hcr_el2 & HCR_VM)
@@ -134,6 +140,7 @@ static void __vcpu_save_state(struct kvm_vcpu *vcpu, bool save_debug)
 
 	__fpsimd_save_fpexc32(vcpu);
 
+	__save_traps(vcpu);
 	__debug_save_spe(vcpu);
 
 	if (save_debug)
@@ -143,14 +150,6 @@ static void __vcpu_save_state(struct kvm_vcpu *vcpu, bool save_debug)
 
 static void __vcpu_restore_state(struct kvm_vcpu *vcpu, bool restore_debug)
 {
-	struct kvm_vcpu *running_vcpu;
-
-	/*
-	 * Restoration is not yet pure so it still makes use of the previously
-	 * running vcpu.
-	 */
-	running_vcpu = __hyp_this_cpu_read(kvm_hyp_running_vcpu);
-
 	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
 		u64 val;
 
@@ -179,10 +178,7 @@ static void __vcpu_restore_state(struct kvm_vcpu *vcpu, bool restore_debug)
 	__sysreg32_restore_state(vcpu);
 	__sysreg_restore_state_nvhe(&vcpu->arch.ctxt);
 
-	if (vcpu->arch.ctxt.is_host)
-		__deactivate_traps(vcpu, running_vcpu);
-	else
-		__activate_traps(vcpu);
+	__restore_traps(vcpu);
 
 	__hyp_vgic_restore_state(vcpu);
 	__timer_restore_traps(vcpu);
@@ -260,7 +256,7 @@ void __noreturn hyp_panic(void)
 
 	if (vcpu != host_vcpu) {
 		__timer_restore_traps(host_vcpu);
-		__deactivate_traps(host_vcpu, vcpu);
+		__restore_traps(host_vcpu);
 		__restore_stage2(host_vcpu);
 		__sysreg_restore_state_nvhe(&host_vcpu->arch.ctxt);
 	}
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index bc372629e1c1..bc5939581f61 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -67,7 +67,7 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu)
 {
 	extern char vectors[];	/* kernel exception vectors */
 
-	___deactivate_traps(vcpu);
+	__save_traps(vcpu);
 
 	write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
 
-- 
2.27.0.389.gc38d7665816-goog

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* Re: [PATCH 06/37] KVM: arm64: Only check pending interrupts if it would trap
  2020-07-15 18:44 ` [PATCH 06/37] KVM: arm64: Only check pending interrupts if it would trap Andrew Scull
@ 2020-07-17 16:21   ` Marc Zyngier
  0 siblings, 0 replies; 45+ messages in thread
From: Marc Zyngier @ 2020-07-17 16:21 UTC (permalink / raw)
  To: Andrew Scull; +Cc: kernel-team, kvmarm

Hi Andrew,

On Wed, 15 Jul 2020 19:44:07 +0100,
Andrew Scull <ascull@google.com> wrote:
> 
> Allow entry to a vcpu that can handle interrupts if there is an
> interrupts pending. Entry will still be aborted if the vcpu cannot
> handle interrupts.

This is pretty confusing. All vcpus can handle interrupts; it's just
that there are multiple classes of interrupts (physical or virtual).
Instead, this should outline *where* physical interrupts are taken.

Something like:

  When entering a vcpu for which physical interrupts are not taken to
  EL2, don't bother evaluating ISR_EL1 to work out whether we should
  go back to EL2 early. Instead, just enter the guest without any
  further ado. This is done by checking HCR_EL2.IMO bit.

> 
> This allows use of __guest_enter to enter into the host.
> 
> Signed-off-by: Andrew Scull <ascull@google.com>
> ---
>  arch/arm64/kvm/hyp/entry.S | 10 +++++++---
>  1 file changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
> index ee32a7743389..6a641fcff4f7 100644
> --- a/arch/arm64/kvm/hyp/entry.S
> +++ b/arch/arm64/kvm/hyp/entry.S
> @@ -73,13 +73,17 @@ SYM_FUNC_START(__guest_enter)
>  	save_sp_el0	x1, x2
>  
>  	// Now the host state is stored if we have a pending RAS SError it must
> -	// affect the host. If any asynchronous exception is pending we defer
> -	// the guest entry. The DSB isn't necessary before v8.2 as any SError
> -	// would be fatal.
> +	// affect the host. If physical IRQ interrupts are going to be trapped
> +	// and there are already asynchronous exceptions pending then we defer
> +	// the entry. The DSB isn't necessary before v8.2 as any SError would
> +	// be fatal.
>  alternative_if ARM64_HAS_RAS_EXTN
>  	dsb	nshst
>  	isb
>  alternative_else_nop_endif
> +	mrs	x1, hcr_el2
> +	and	x1, x1, #HCR_IMO
> +	cbz	x1, 1f

Do we really want to take the overhead of the above DSB/ISB when on
the host? We're not even evaluating ISR_EL1, so what is the gain?

This also assumes that IMO/FMO/AMO are all set together, which
deserves to be documented.

Another thing is that you are also restoring registers that the host
vcpu expects to be corrupted (the caller-saved registers, X0-17).
You probably should just zero them instead if leaking data from EL2 is
your concern. Yes, this is a departure from SMCCC 1.1, but I think
this is a valid one, as EL2 isn't a fully independent piece of SW.
Same goes on the __guest_exit() path. PtrAuth is another concern (I'm
pretty sure this doesn't do what we want, but I haven't tried on a model).

I've hacked the following patch together, which allowed me to claw
back about 10% of the performance loss. I'm pretty sure there are
similar places where you have introduced extra overhead, and we should
hunt them down.

Thanks,

	M.

diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index 6c3a6b27a96c..2d1a71bd7baa 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -33,6 +33,10 @@ SYM_FUNC_START(__guest_enter)
 	// Save the hyp's sp_el0
 	save_sp_el0	x1, x2
 
+	mrs	x1, hcr_el2
+	and	x1, x1, #HCR_IMO
+	cbz	x1, 2f
+
 	// Now the hyp state is stored if we have a pending RAS SError it must
 	// affect the hyp. If physical IRQ interrupts are going to be trapped
 	// and there are already asynchronous exceptions pending then we defer
@@ -42,9 +46,6 @@ alternative_if ARM64_HAS_RAS_EXTN
 	dsb	nshst
 	isb
 alternative_else_nop_endif
-	mrs	x1, hcr_el2
-	and	x1, x1, #HCR_IMO
-	cbz	x1, 1f
 	mrs	x1, isr_el1
 	cbz	x1,  1f
 	mov	x0, #ARM_EXCEPTION_IRQ
@@ -81,6 +82,31 @@ alternative_else_nop_endif
 	eret
 	sb
 
+2:
+	add	x29, x0, #VCPU_CONTEXT
+
+	// Macro ptrauth_switch_to_guest format:
+	// 	ptrauth_switch_to_guest(guest cxt, tmp1, tmp2, tmp3)
+	// The below macro to restore guest keys is not implemented in C code
+	// as it may cause Pointer Authentication key signing mismatch errors
+	// when this feature is enabled for kernel code.
+	ptrauth_switch_to_guest x29, x0, x1, x2
+
+	// Restore the guest's sp_el0
+	restore_sp_el0 x29, x0
+
+	.irp n,4,5,6,7,8,9,10,11,12,13,14,15,16,17
+	mov	x\n, xzr
+	.endr
+
+	ldp	x0, x1,   [x29, #CPU_XREG_OFFSET(0)]
+	ldp	x2, x3,   [x29, #CPU_XREG_OFFSET(2)]
+
+	// Restore guest regs x18-x29, lr
+	restore_callee_saved_regs x29
+	eret
+	sb
+
 SYM_INNER_LABEL(__guest_exit, SYM_L_GLOBAL)
 	// x0: return code
 	// x1: vcpu
@@ -99,6 +125,11 @@ SYM_INNER_LABEL(__guest_exit, SYM_L_GLOBAL)
 
 	// Store the guest regs x0-x1 and x4-x17
 	stp	x2, x3,   [x1, #CPU_XREG_OFFSET(0)]
+
+	mrs	x2, hcr_el2
+	and	x2, x2, #HCR_IMO
+	cbz	x2, 1f
+
 	stp	x4, x5,   [x1, #CPU_XREG_OFFSET(4)]
 	stp	x6, x7,   [x1, #CPU_XREG_OFFSET(6)]
 	stp	x8, x9,   [x1, #CPU_XREG_OFFSET(8)]
@@ -107,6 +138,7 @@ SYM_INNER_LABEL(__guest_exit, SYM_L_GLOBAL)
 	stp	x14, x15, [x1, #CPU_XREG_OFFSET(14)]
 	stp	x16, x17, [x1, #CPU_XREG_OFFSET(16)]
 
+1:
 	// Store the guest regs x18-x29, lr
 	save_callee_saved_regs x1
 

-- 
Without deviation from the norm, progress is not possible.
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* Re: [PATCH 07/37] KVM: arm64: Separate SError detection from VAXorcism
  2020-07-15 18:44 ` [PATCH 07/37] KVM: arm64: Separate SError detection from VAXorcism Andrew Scull
@ 2020-07-18  9:00   ` Marc Zyngier
  2020-07-20 14:13     ` Andrew Scull
  2020-07-20 15:40   ` Andrew Scull
  1 sibling, 1 reply; 45+ messages in thread
From: Marc Zyngier @ 2020-07-18  9:00 UTC (permalink / raw)
  To: Andrew Scull; +Cc: kernel-team, kvmarm

Hi Andrew,

On Wed, 15 Jul 2020 19:44:08 +0100,
Andrew Scull <ascull@google.com> wrote:
> 
> When exiting a guest, just check whether there is an SError pending and
> set the bit in the exit code. The fixup then initiates the ceremony
> should it be required.
> 
> The separation allows for easier choices to be made as to whether the
> demonic consultation should proceed.

Such as?

> 
> Signed-off-by: Andrew Scull <ascull@google.com>
> ---
>  arch/arm64/include/asm/kvm_hyp.h        |  2 ++
>  arch/arm64/kvm/hyp/entry.S              | 27 +++++++++++++++++--------
>  arch/arm64/kvm/hyp/hyp-entry.S          |  1 -
>  arch/arm64/kvm/hyp/include/hyp/switch.h |  4 ++++
>  4 files changed, 25 insertions(+), 9 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
> index 07745d9c49fc..50a774812761 100644
> --- a/arch/arm64/include/asm/kvm_hyp.h
> +++ b/arch/arm64/include/asm/kvm_hyp.h
> @@ -91,6 +91,8 @@ void deactivate_traps_vhe_put(void);
>  
>  u64 __guest_enter(struct kvm_vcpu *vcpu, struct kvm_cpu_context *host_ctxt);
>  
> +void __vaxorcize_serror(void);

I think a VAXorcist reference in the comments is plenty. The code can
definitely stay architectural. Something like "__handle_SEI()" should
be good. I'm not *that* fun.

> +
>  void __noreturn hyp_panic(struct kvm_cpu_context *host_ctxt);
>  #ifdef __KVM_NVHE_HYPERVISOR__
>  void __noreturn __hyp_do_panic(unsigned long, ...);
> diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
> index 6a641fcff4f7..dc4e3e7e7407 100644
> --- a/arch/arm64/kvm/hyp/entry.S
> +++ b/arch/arm64/kvm/hyp/entry.S
> @@ -174,18 +174,31 @@ alternative_if ARM64_HAS_RAS_EXTN
>  	mrs_s	x2, SYS_DISR_EL1
>  	str	x2, [x1, #(VCPU_FAULT_DISR - VCPU_CONTEXT)]
>  	cbz	x2, 1f
> -	msr_s	SYS_DISR_EL1, xzr
>  	orr	x0, x0, #(1<<ARM_EXIT_WITH_SERROR_BIT)
> -1:	ret
> +	nop
> +1:
>  alternative_else
>  	dsb	sy		// Synchronize against in-flight ld/st
>  	isb			// Prevent an early read of side-effect free ISR
>  	mrs	x2, isr_el1
> -	tbnz	x2, #8, 2f	// ISR_EL1.A
> -	ret
> -	nop
> +	tbz	x2, #8, 2f	// ISR_EL1.A
> +	orr	x0, x0, #(1<<ARM_EXIT_WITH_SERROR_BIT)
>  2:
>  alternative_endif
> +
> +	ret
> +SYM_FUNC_END(__guest_enter)
> +
> +/*
> + * void __vaxorcize_serror(void);
> + */
> +SYM_FUNC_START(__vaxorcize_serror)
> +
> +alternative_if ARM64_HAS_RAS_EXTN
> +	msr_s	SYS_DISR_EL1, xzr
> +	ret
> +alternative_else_nop_endif
> +
>  	// We know we have a pending asynchronous abort, now is the
>  	// time to flush it out. From your VAXorcist book, page 666:
>  	// "Threaten me not, oh Evil one!  For I speak with
> @@ -193,7 +206,6 @@ alternative_endif
>  	mrs	x2, elr_el2
>  	mrs	x3, esr_el2
>  	mrs	x4, spsr_el2
> -	mov	x5, x0
>  
>  	msr	daifclr, #4	// Unmask aborts
>  
> @@ -217,6 +229,5 @@ abort_guest_exit_end:
>  	msr	elr_el2, x2
>  	msr	esr_el2, x3
>  	msr	spsr_el2, x4
> -	orr	x0, x0, x5
>  1:	ret
> -SYM_FUNC_END(__guest_enter)
> +SYM_FUNC_END(__vaxorcize_serror)
> diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
> index e727bee8e110..c441aabb8ab0 100644
> --- a/arch/arm64/kvm/hyp/hyp-entry.S
> +++ b/arch/arm64/kvm/hyp/hyp-entry.S
> @@ -177,7 +177,6 @@ el2_error:
>  	adr	x1, abort_guest_exit_end
>  	ccmp	x0, x1, #4, ne
>  	b.ne	__hyp_panic
> -	mov	x0, #(1 << ARM_EXIT_WITH_SERROR_BIT)
>  	eret
>  	sb
>  
> diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> index 0511af14dc81..14a774d1a35a 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> @@ -405,6 +405,10 @@ static inline bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu)
>   */
>  static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
>  {
> +	/* Flush guest SErrors. */
> +	if (ARM_SERROR_PENDING(*exit_code))
> +		__vaxorcize_serror();
> +
>  	if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ)
>  		vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR);
>  
> -- 
> 2.27.0.389.gc38d7665816-goog
> 
> 

I'm not against this patch, but I wonder whether it actually helps
with anything. It spreads the handling across multiple paths, making
it harder to read. Could you explain the rationale behind "it's
easier"?

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH 07/37] KVM: arm64: Separate SError detection from VAXorcism
  2020-07-18  9:00   ` Marc Zyngier
@ 2020-07-20 14:13     ` Andrew Scull
  2020-07-20 14:56       ` Marc Zyngier
  0 siblings, 1 reply; 45+ messages in thread
From: Andrew Scull @ 2020-07-20 14:13 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: kernel-team, kvmarm

On Sat, Jul 18, 2020 at 10:00:30AM +0100, Marc Zyngier wrote:
> Hi Andrew,
> 
> On Wed, 15 Jul 2020 19:44:08 +0100,
> Andrew Scull <ascull@google.com> wrote:
> > 
> > When exiting a guest, just check whether there is an SError pending and
> > set the bit in the exit code. The fixup then initiates the ceremony
> > should it be required.
> > 
> > The separation allows for easier choices to be made as to whether the
> > demonic consultation should proceed.
> 
> Such as?

It's used in the next patch to keep host SErrors pending, leaving them
for the host to handle when reentering the host vcpu. IIUC, this matches the
previous behaviour since hyp would mask SErrors.

We wanted to avoid the need to convert host SErrors into virtual ones
and I opted for this approach to keep as much of the logic/policy as
possible in C.

Let me know if you'd prefer a different split of the patches or if the
design goals should be different.
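
As a concrete, purely hypothetical example of such a choice, the fixup
could skip the flush for the host vcpu and leave its SErrors pending,
reusing this series' is_host flag:

	/* Hypothetical: only flush SErrors taken while running a guest. */
	if (ARM_SERROR_PENDING(*exit_code) && !vcpu->arch.ctxt.is_host)
		__vaxorcize_serror();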

> > 
> > Signed-off-by: Andrew Scull <ascull@google.com>
> > ---
> >  arch/arm64/include/asm/kvm_hyp.h        |  2 ++
> >  arch/arm64/kvm/hyp/entry.S              | 27 +++++++++++++++++--------
> >  arch/arm64/kvm/hyp/hyp-entry.S          |  1 -
> >  arch/arm64/kvm/hyp/include/hyp/switch.h |  4 ++++
> >  4 files changed, 25 insertions(+), 9 deletions(-)
> > 
> > diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
> > index 07745d9c49fc..50a774812761 100644
> > --- a/arch/arm64/include/asm/kvm_hyp.h
> > +++ b/arch/arm64/include/asm/kvm_hyp.h
> > @@ -91,6 +91,8 @@ void deactivate_traps_vhe_put(void);
> >  
> >  u64 __guest_enter(struct kvm_vcpu *vcpu, struct kvm_cpu_context *host_ctxt);
> >  
> > +void __vaxorcize_serror(void);
> 
> I think a VAXorsist reference in the comments is plenty. The code can
> definitely stay architectural. Something like "__handle_SEI()" should
> be good. I'm not *that* fun.

Fine... ;)

> > +
> >  void __noreturn hyp_panic(struct kvm_cpu_context *host_ctxt);
> >  #ifdef __KVM_NVHE_HYPERVISOR__
> >  void __noreturn __hyp_do_panic(unsigned long, ...);
> > diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
> > index 6a641fcff4f7..dc4e3e7e7407 100644
> > --- a/arch/arm64/kvm/hyp/entry.S
> > +++ b/arch/arm64/kvm/hyp/entry.S
> > @@ -174,18 +174,31 @@ alternative_if ARM64_HAS_RAS_EXTN
> >  	mrs_s	x2, SYS_DISR_EL1
> >  	str	x2, [x1, #(VCPU_FAULT_DISR - VCPU_CONTEXT)]
> >  	cbz	x2, 1f
> > -	msr_s	SYS_DISR_EL1, xzr
> >  	orr	x0, x0, #(1<<ARM_EXIT_WITH_SERROR_BIT)
> > -1:	ret
> > +	nop
> > +1:
> >  alternative_else
> >  	dsb	sy		// Synchronize against in-flight ld/st
> >  	isb			// Prevent an early read of side-effect free ISR
> >  	mrs	x2, isr_el1
> > -	tbnz	x2, #8, 2f	// ISR_EL1.A
> > -	ret
> > -	nop
> > +	tbz	x2, #8, 2f	// ISR_EL1.A
> > +	orr	x0, x0, #(1<<ARM_EXIT_WITH_SERROR_BIT)
> >  2:
> >  alternative_endif
> > +
> > +	ret
> > +SYM_FUNC_END(__guest_enter)
> > +
> > +/*
> > + * void __vaxorcize_serror(void);
> > + */
> > +SYM_FUNC_START(__vaxorcize_serror)
> > +
> > +alternative_if ARM64_HAS_RAS_EXTN
> > +	msr_s	SYS_DISR_EL1, xzr
> > +	ret
> > +alternative_else_nop_endif
> > +
> >  	// We know we have a pending asynchronous abort, now is the
> >  	// time to flush it out. From your VAXorcist book, page 666:
> >  	// "Threaten me not, oh Evil one!  For I speak with
> > @@ -193,7 +206,6 @@ alternative_endif
> >  	mrs	x2, elr_el2
> >  	mrs	x3, esr_el2
> >  	mrs	x4, spsr_el2
> > -	mov	x5, x0
> >  
> >  	msr	daifclr, #4	// Unmask aborts
> >  
> > @@ -217,6 +229,5 @@ abort_guest_exit_end:
> >  	msr	elr_el2, x2
> >  	msr	esr_el2, x3
> >  	msr	spsr_el2, x4
> > -	orr	x0, x0, x5
> >  1:	ret
> > -SYM_FUNC_END(__guest_enter)
> > +SYM_FUNC_END(__vaxorcize_serror)
> > diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
> > index e727bee8e110..c441aabb8ab0 100644
> > --- a/arch/arm64/kvm/hyp/hyp-entry.S
> > +++ b/arch/arm64/kvm/hyp/hyp-entry.S
> > @@ -177,7 +177,6 @@ el2_error:
> >  	adr	x1, abort_guest_exit_end
> >  	ccmp	x0, x1, #4, ne
> >  	b.ne	__hyp_panic
> > -	mov	x0, #(1 << ARM_EXIT_WITH_SERROR_BIT)
> >  	eret
> >  	sb
> >  
> > diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> > index 0511af14dc81..14a774d1a35a 100644
> > --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> > +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> > @@ -405,6 +405,10 @@ static inline bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu)
> >   */
> >  static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
> >  {
> > +	/* Flush guest SErrors. */
> > +	if (ARM_SERROR_PENDING(*exit_code))
> > +		__vaxorcize_serror();
> > +
> >  	if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ)
> >  		vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR);
> >  
> > -- 
> > 2.27.0.389.gc38d7665816-goog
> > 
> > 
> 
> I'm not against this patch, but I wonder whether it actually helps
> with anything. It spreads the handling across multiple paths, making
> it harder to read. Could you explain the rationale behind "it's
> easier"?

The earlier reply should cover most of this. I claim it's easier to make
choices as the predicate isn't stuck in assembly. Maybe others feel
differently and I should use less provocative language.
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH 07/37] KVM: arm64: Separate SError detection from VAXorcism
  2020-07-20 14:13     ` Andrew Scull
@ 2020-07-20 14:56       ` Marc Zyngier
  0 siblings, 0 replies; 45+ messages in thread
From: Marc Zyngier @ 2020-07-20 14:56 UTC (permalink / raw)
  To: Andrew Scull; +Cc: kernel-team, kvmarm

On 2020-07-20 15:13, Andrew Scull wrote:
> On Sat, Jul 18, 2020 at 10:00:30AM +0100, Marc Zyngier wrote:
>> Hi Andrew,
>> 
>> On Wed, 15 Jul 2020 19:44:08 +0100,
>> Andrew Scull <ascull@google.com> wrote:
>> >
>> > When exiting a guest, just check whether there is an SError pending and
>> > set the bit in the exit code. The fixup then initiates the ceremony
>> > should it be required.
>> >
>> > The separation allows for easier choices to be made as to whether the
>> > demonic consultation should proceed.
>> 
>> Such as?
> 
> It's used in the next patch to keep host SErrors pending and left for
> the host to handle when reentering the host vcpu. IIUC, this matches 
> the
> previous behaviour since hyp would mask SErrors.
> 
> We wanted to avoid the need to convert host SErrors into virtual ones
> and I opted for this approach to keep as much of the logic/policy as
> possible in C.

Right. That's the kind of information you should put in your commit
message, as it makes your intent much clearer.

Thanks,

         M.
-- 
Jazz is not dead. It just smells funny...
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH 07/37] KVM: arm64: Separate SError detection from VAXorcism
  2020-07-15 18:44 ` [PATCH 07/37] KVM: arm64: Separate SError detection from VAXorcism Andrew Scull
  2020-07-18  9:00   ` Marc Zyngier
@ 2020-07-20 15:40   ` Andrew Scull
  2020-07-20 15:57     ` Marc Zyngier
  1 sibling, 1 reply; 45+ messages in thread
From: Andrew Scull @ 2020-07-20 15:40 UTC (permalink / raw)
  To: kvmarm; +Cc: maz

On Wed, Jul 15, 2020 at 07:44:08PM +0100, Andrew Scull wrote:
> diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
> index e727bee8e110..c441aabb8ab0 100644
> --- a/arch/arm64/kvm/hyp/hyp-entry.S
> +++ b/arch/arm64/kvm/hyp/hyp-entry.S
> @@ -177,7 +177,6 @@ el2_error:
>  	adr	x1, abort_guest_exit_end
>  	ccmp	x0, x1, #4, ne
>  	b.ne	__hyp_panic
> -	mov	x0, #(1 << ARM_EXIT_WITH_SERROR_BIT)
>  	eret
>  	sb

Having looked at this again, I also meant to remove the hunk below. It
used to check that the SError had actually been taken through the
exception vector but I am removing that.

However, do I need to continue doing that check to make sure the SError
didn't get cancelled (if that is possible) or some other architectural
phenomenon happened that I haven't factored in to my thinking?

--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -210,13 +222,8 @@ abort_guest_exit_end:
 
         msr     daifset, #4     // Mask aborts
	  
-       // If the exception took place, restore the EL1 exception
-       // context so that we can report some information.
-       // Merge the exception code with the SError pending bit.
-       tbz     x0, #ARM_EXIT_WITH_SERROR_BIT, 1f
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH 07/37] KVM: arm64: Separate SError detection from VAXorcism
  2020-07-20 15:40   ` Andrew Scull
@ 2020-07-20 15:57     ` Marc Zyngier
  0 siblings, 0 replies; 45+ messages in thread
From: Marc Zyngier @ 2020-07-20 15:57 UTC (permalink / raw)
  To: Andrew Scull; +Cc: kvmarm

On 2020-07-20 16:40, Andrew Scull wrote:
> On Wed, Jul 15, 2020 at 07:44:08PM +0100, Andrew Scull wrote:
>> diff --git a/arch/arm64/kvm/hyp/hyp-entry.S 
>> b/arch/arm64/kvm/hyp/hyp-entry.S
>> index e727bee8e110..c441aabb8ab0 100644
>> --- a/arch/arm64/kvm/hyp/hyp-entry.S
>> +++ b/arch/arm64/kvm/hyp/hyp-entry.S
>> @@ -177,7 +177,6 @@ el2_error:
>>  	adr	x1, abort_guest_exit_end
>>  	ccmp	x0, x1, #4, ne
>>  	b.ne	__hyp_panic
>> -	mov	x0, #(1 << ARM_EXIT_WITH_SERROR_BIT)
>>  	eret
>>  	sb
> 
> Having looked at this again, I also meant to remove the hunk below. It
> used to check that the SError had actually been taken through the
> exception vector but I am removing that.
> 
> However, do I need to continue doing that check to make sure the SError
> didn't get cancelled (if that is possible) or some other architectural
> phenomenon happened that I haven't factored in to my thinking?
> 
> --- a/arch/arm64/kvm/hyp/entry.S
> +++ b/arch/arm64/kvm/hyp/entry.S
> @@ -210,13 +222,8 @@ abort_guest_exit_end:
> 
>          msr     daifset, #4     // Mask aborts
> 	
> -       // If the exception took place, restore the EL1 exception
> -       // context so that we can report some information.
> -       // Merge the exception code with the SError pending bit.
> -       tbz     x0, #ARM_EXIT_WITH_SERROR_BIT, 1f

You have now inverted the logic: you detect the SError before
handling it, and set the flag accordingly. Which means that:

- you always enter this function having set ARM_EXIT_WITH_SERROR_BIT
   in the error code (which isn't passed to this function).

- you always take an exception, because you *know* there is a pending
   SError. Taking the exception on a non-RAS system consumes it.

- If there is another SError pending after that, something bad happened
   at EL2, and we'll explode later on (on return to EL1).

So I think removing this hunk is the right thing to do.

         M.
-- 
Jazz is not dead. It just smells funny...
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 45+ messages in thread

* FW: Re: [PATCH 07/37] KVM: arm64: Separate SError detection from VAXorcism
  2020-07-20 14:56       ` Marc Zyngier
@ 2020-07-23  0:59         ` Renters Cancellation Requests
  0 siblings, 0 replies; 45+ messages in thread
From: Renters Cancellation Requests @ 2020-07-23  0:59 UTC (permalink / raw)
  To: Marc Zyngier, Andrew Scull; +Cc: kernel-team, kvmarm


------------------- Original Message -------------------
From: Marc Zyngier
Received: Mon Jul 20 2020 10:57:01 GMT-0400 (Eastern Daylight Time)
To: Andrew Scull
Cc: kernel-team@android.com; kvmarm@lists.cs.columbia.edu
Subject: Re: [PATCH 07/37] KVM: arm64: Separate SError detection from VAXorcism

On 2020-07-20 15:13, Andrew Scull wrote:
> On Sat, Jul 18, 2020 at 10:00:30AM +0100, Marc Zyngier wrote:
>> Hi Andrew,
>>
>> On Wed, 15 Jul 2020 19:44:08 +0100,
>> Andrew Scull <ascull@google.com> wrote:
>> >
>> > When exiting a guest, just check whether there is an SError pending and
>> > set the bit in the exit code. The fixup then initiates the ceremony
>> > should it be required.
>> >
>> > The separation allows for easier choices to be made as to whether the
>> > demonic consultation should proceed.
>>
>> Such as?
>
> It's used in the next patch to keep host SErrors pending, leaving them
> for the host to handle when reentering the host vcpu. IIUC, this
> matches the previous behaviour since hyp would mask SErrors.
> 
> We wanted to avoid the need to convert host SErrors into virtual ones,
> and I opted for this approach to keep as much of the logic/policy as
> possible in C.

Right. That's the kind of information you should put in your commit
message, as it makes your intent much clearer.

Thanks,

         M.
--
Jazz is not dead. It just smells funny...
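
As a sketch of what keeping that policy in C might look like, under the
assumption that detection only sets ARM_EXIT_WITH_SERROR_BIT in the exit
code (vcpu_is_host() and __consume_serror() are hypothetical names, not
taken from the series):

/* Decide in C what to do with an SError detected on exit. */
static bool fixup_serror(struct kvm_vcpu *vcpu, u64 *exit_code)
{
	if (!(*exit_code & (1UL << ARM_EXIT_WITH_SERROR_BIT)))
		return false;

	/* Host SErrors stay pending; the host takes them on reentry. */
	if (vcpu_is_host(vcpu))
		return false;

	/* A guest SError is consumed now so EL2 state stays clean. */
	__consume_serror();
	return true;
}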

end of thread, newest message: 2020-07-23  0:59 UTC

Thread overview: 45+ messages
2020-07-15 18:44 [PATCH 00/37] Transform the host into a vCPU Andrew Scull
2020-07-15 18:44 ` [PATCH 01/37] smccc: Make constants available to assembly Andrew Scull
2020-07-15 18:44 ` [PATCH 02/37] KVM: arm64: Move clearing of vcpu debug dirty bit Andrew Scull
2020-07-15 18:44 ` [PATCH 03/37] KVM: arm64: Track running vCPU outside of the CPU context Andrew Scull
2020-07-15 18:44 ` [PATCH 04/37] KVM: arm64: nVHE: Pass pointers consistently to hyp-init Andrew Scull
2020-07-15 18:44 ` [PATCH 05/37] KVM: arm64: nVHE: Break out of the hyp-init idmap Andrew Scull
2020-07-15 18:44 ` [PATCH 06/37] KVM: arm64: Only check pending interrupts if it would trap Andrew Scull
2020-07-17 16:21   ` Marc Zyngier
2020-07-15 18:44 ` [PATCH 07/37] KVM: arm64: Separate SError detection from VAXorcism Andrew Scull
2020-07-18  9:00   ` Marc Zyngier
2020-07-20 14:13     ` Andrew Scull
2020-07-20 14:56       ` Marc Zyngier
2020-07-23  0:59         ` FW: " Renters Cancellation Requests
2020-07-20 15:40   ` Andrew Scull
2020-07-20 15:57     ` Marc Zyngier
2020-07-15 18:44 ` [PATCH 08/37] KVM: arm64: nVHE: Introduce a hyp run loop for the host Andrew Scull
2020-07-15 18:44 ` [PATCH 09/37] smccc: Cast arguments to unsigned long Andrew Scull
2020-07-15 18:44 ` [PATCH 10/37] KVM: arm64: nVHE: Migrate hyp interface to SMCCC Andrew Scull
2020-07-15 18:44 ` [PATCH 11/37] KVM: arm64: nVHE: Migrate hyp-init " Andrew Scull
2020-07-15 18:44 ` [PATCH 12/37] KVM: arm64: nVHE: Fix pointers during SMCCC convertion Andrew Scull
2020-07-15 18:44 ` [PATCH 13/37] KVM: arm64: Rename workaround 2 helpers Andrew Scull
2020-07-15 18:44 ` [PATCH 14/37] KVM: arm64: nVHE: Use __kvm_vcpu_run for the host vcpu Andrew Scull
2020-07-15 18:44 ` [PATCH 15/37] KVM: arm64: Share some context save and restore macros Andrew Scull
2020-07-15 18:44 ` [PATCH 16/37] KVM: arm64: nVHE: Handle stub HVCs in the host loop Andrew Scull
2020-07-15 18:44 ` [PATCH 17/37] KVM: arm64: nVHE: Store host sysregs in host vcpu Andrew Scull
2020-07-15 18:44 ` [PATCH 18/37] KVM: arm64: nVHE: Access pmu_events directly in kvm_host_data Andrew Scull
2020-07-15 18:44 ` [PATCH 19/37] KVM: arm64: nVHE: Drop host_ctxt argument for context switching Andrew Scull
2020-07-15 18:44 ` [PATCH 20/37] KVM: arm64: nVHE: Use host vcpu context for host debug state Andrew Scull
2020-07-15 18:44 ` [PATCH 21/37] KVM: arm64: Move host debug state from vcpu to percpu Andrew Scull
2020-07-15 18:44 ` [PATCH 22/37] KVM: arm64: nVHE: Store host's mdcr_el2 and hcr_el2 in its vcpu Andrew Scull
2020-07-15 18:44 ` [PATCH 23/37] KVM: arm64: Skip __hyp_panic and go direct to hyp_panic Andrew Scull
2020-07-15 18:44 ` [PATCH 24/37] KVM: arm64: Break apart kvm_host_data Andrew Scull
2020-07-15 18:44 ` [PATCH 25/37] KVM: arm64: nVHE: Unify sysreg state saving paths Andrew Scull
2020-07-15 18:44 ` [PATCH 26/37] KVM: arm64: nVHE: Unify 32-bit sysreg " Andrew Scull
2020-07-15 18:44 ` [PATCH 27/37] KVM: arm64: nVHE: Unify vgic save and restore Andrew Scull
2020-07-15 18:44 ` [PATCH 28/37] KVM: arm64: nVHE: Unify fpexc32 saving paths Andrew Scull
2020-07-15 18:44 ` [PATCH 29/37] KVM: arm64: nVHE: Separate the save and restore of debug state Andrew Scull
2020-07-15 18:44 ` [PATCH 30/37] KVM: arm64: nVHE: Remove MMU assumption in speculative AT workaround Andrew Scull
2020-07-15 18:44 ` [PATCH 31/37] KVM: arm64: Move speculative AT ISBs into context Andrew Scull
2020-07-15 18:44 ` [PATCH 32/37] KVM: arm64: nVHE: Unify sysreg state restoration paths Andrew Scull
2020-07-15 18:44 ` [PATCH 33/37] KVM: arm64: Remove __activate_vm wrapper Andrew Scull
2020-07-15 18:44 ` [PATCH 34/37] KVM: arm64: nVHE: Unify timer restore paths Andrew Scull
2020-07-15 18:44 ` [PATCH 35/37] KVM: arm64: nVHE: Unify PMU event restoration paths Andrew Scull
2020-07-15 18:44 ` [PATCH 36/37] KVM: arm64: nVHE: Unify GIC PMR " Andrew Scull
2020-07-15 18:44 ` [PATCH 37/37] KVM: arm64: Separate save and restore of vcpu trap state Andrew Scull
