* [PATCH 00/10] FPSIMD restore bypass and protecting
From: Andrew Scull @ 2021-03-04 11:54 UTC (permalink / raw)
  To: kvmarm; +Cc: kernel-team, maz, catalin.marinas, will, Dave.Martin

This series builds towards protecting the FPSIMD state in protected
KVM. Most of the series is refactoring to create the separation between
host and hyp that this requires.

Tracking where a vcpu last loaded its FPSIMD state was needed anyway,
so I've also used it to avoid needlessly reloading a vcpu's FPSIMD
state when possible. I don't know whether this makes any measurable
performance difference, nor what a meaningful benchmark to measure it
against would be, so help and advice on this front would be
appreciated.

The last patch in the series is concerned with protecting a protected
VM's FPSIMD state. It will depend on knowing which vcpus are protected
and on having a stage 2 over the host, but it demonstrates the trapping
and lazy switching that I had in mind.
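
To illustrate, the lazy switching for protected vcpus ends up looking
roughly like the sketch below (simplified; the identifiers are those
introduced later in the series):

  /* Guest FP trap taken to EL2 for a protected vcpu: */
  if (__this_cpu_read(kvm_protected_vcpu_fpsimd) == vcpu &&
      vcpu->arch.fpsimd_cpu == smp_processor_id()) {
	/* vcpu state is still live in the regs: just stop trapping */
  } else {
	/* save the host regs to hyp-private memory, load the vcpu's */
  }
  /* the host's next FP access then traps to EL2 to restore its state */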

The series has, so far, only been lightly tested on QEMU with paranoia
running on the host and in the vcpu.

It applies atop 5.12-rc.

This is a similar, but much evolved, version of a series sent out last
year:
https://lore.kernel.org/r/20200713210505.2959828-1-ascull@google.com/

Andrew Scull (10):
  KVM: arm64: Leave KVM_ARM64_DEBUG_DIRTY updates to the host
  KVM: arm64: Synchronize vcpu FPSIMD in the host
  KVM: arm64: Unmap host task thread flags from hyp
  KVM: arm64: Support smp_processor_id() in nVHE hyp
  KVM: arm64: Track where vcpu FP state was last loaded
  KVM: arm64: Avoid needlessly reloading guest FP state
  KVM: arm64: Separate host and hyp vcpu FP flags
  KVM: arm64: Pass the arch run struct explicitly
  KVM: arm64: Use hyp-private run struct in protected mode
  RFC: KVM: arm64: Manage FPSIMD state at EL2 for protected vCPUs

 arch/arm64/include/asm/fpsimd.h           |   1 +
 arch/arm64/include/asm/kvm_host.h         |  46 +++++++---
 arch/arm64/include/asm/kvm_hyp.h          |   1 +
 arch/arm64/kernel/fpsimd.c                |  11 ++-
 arch/arm64/kvm/arm.c                      |   8 +-
 arch/arm64/kvm/debug.c                    |   2 +
 arch/arm64/kvm/fpsimd.c                   |  69 +++++++++++----
 arch/arm64/kvm/hyp/include/hyp/debug-sr.h |   2 -
 arch/arm64/kvm/hyp/include/hyp/switch.h   |  57 ++++++------
 arch/arm64/kvm/hyp/nvhe/hyp-main.c        |  24 ++++++
 arch/arm64/kvm/hyp/nvhe/hyp-smp.c         |   2 +
 arch/arm64/kvm/hyp/nvhe/switch.c          | 100 ++++++++++++++++++----
 arch/arm64/kvm/hyp/vhe/switch.c           |   8 +-
 13 files changed, 249 insertions(+), 82 deletions(-)

-- 
2.30.1.766.gb4fecdf3b7-goog


* [PATCH 01/10] KVM: arm64: Leave KVM_ARM64_DEBUG_DIRTY updates to the host
From: Andrew Scull @ 2021-03-04 11:54 UTC (permalink / raw)
  To: kvmarm; +Cc: kernel-team, maz, catalin.marinas, will, Dave.Martin

Move the clearing of KVM_ARM64_DEBUG_DIRTY from being one of the last
things hyp does before exiting to the host, to being one of the first
things the host does after hyp exits.

This means the host always manages the state of the bit and hyp simply
respects that in the context switch.

No functional change.
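
For reference, kvm_arm_clear_debug() is called from the run loop just
after returning from hyp, so the resulting ordering is roughly:

  ret = kvm_call_hyp_ret(__kvm_vcpu_run, vcpu);

  /* Back from the guest: the host owns the flag again and clears it. */
  kvm_arm_clear_debug(vcpu);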

Signed-off-by: Andrew Scull <ascull@google.com>
---
 arch/arm64/include/asm/kvm_host.h         | 2 +-
 arch/arm64/kvm/debug.c                    | 2 ++
 arch/arm64/kvm/hyp/include/hyp/debug-sr.h | 2 --
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 3d10e6527f7d..6b33f720ce9c 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -390,7 +390,7 @@ struct kvm_vcpu_arch {
 })
 
 /* vcpu_arch flags field values: */
-#define KVM_ARM64_DEBUG_DIRTY		(1 << 0)
+#define KVM_ARM64_DEBUG_DIRTY		(1 << 0) /* vcpu is using debug */
 #define KVM_ARM64_FP_ENABLED		(1 << 1) /* guest FP regs loaded */
 #define KVM_ARM64_FP_HOST		(1 << 2) /* host FP regs loaded */
 #define KVM_ARM64_HOST_SVE_IN_USE	(1 << 3) /* backup for host TIF_SVE */
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index 7a7e425616b5..e9932618a362 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -209,6 +209,8 @@ void kvm_arm_clear_debug(struct kvm_vcpu *vcpu)
 {
 	trace_kvm_arm_clear_debug(vcpu->guest_debug);
 
+	vcpu->arch.flags &= ~KVM_ARM64_DEBUG_DIRTY;
+
 	if (vcpu->guest_debug) {
 		restore_guest_debug_regs(vcpu);
 
diff --git a/arch/arm64/kvm/hyp/include/hyp/debug-sr.h b/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
index 4ebe9f558f3a..344c76a7af35 100644
--- a/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
+++ b/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
@@ -161,8 +161,6 @@ static inline void __debug_switch_to_host_common(struct kvm_vcpu *vcpu)
 
 	__debug_save_state(guest_dbg, guest_ctxt);
 	__debug_restore_state(host_dbg, host_ctxt);
-
-	vcpu->arch.flags &= ~KVM_ARM64_DEBUG_DIRTY;
 }
 
 #endif /* __ARM64_KVM_HYP_DEBUG_SR_H__ */
-- 
2.30.1.766.gb4fecdf3b7-goog


* [PATCH 02/10] KVM: arm64: Synchronize vcpu FPSIMD in the host
From: Andrew Scull @ 2021-03-04 11:54 UTC (permalink / raw)
  To: kvmarm; +Cc: kernel-team, maz, catalin.marinas, will, Dave.Martin

Check the task's FP state in the host and update the vcpu flags before
calling into hyp. This keeps the synchronization symmetrical around the
call into hyp.

kvm_arch_vcpu_ctxsync_fp() is renamed to kvm_arch_vcpu_sync_fp_after_hyp()
so that its name can pair with the new kvm_arch_vcpu_sync_fp_before_hyp().

If the system doesn't support FPSIMD, avoid setting any of the vcpu's
FPSIMD flags to match the previous behavior.
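
The resulting shape of the run loop is roughly:

  local_irq_disable();
  kvm_arch_vcpu_sync_fp_before_hyp(vcpu);	/* drop stale FP flags */
  ...
  ret = kvm_call_hyp_ret(__kvm_vcpu_run, vcpu);
  ...
  kvm_arch_vcpu_sync_fp_after_hyp(vcpu);	/* bind loaded guest state */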

Signed-off-by: Andrew Scull <ascull@google.com>
Cc: Dave Martin <Dave.Martin@arm.com>
---
 arch/arm64/include/asm/kvm_host.h       |  3 ++-
 arch/arm64/kvm/arm.c                    |  4 +++-
 arch/arm64/kvm/fpsimd.c                 | 26 ++++++++++++++++++++++++-
 arch/arm64/kvm/hyp/include/hyp/switch.h | 19 ------------------
 arch/arm64/kvm/hyp/nvhe/switch.c        |  3 +--
 arch/arm64/kvm/hyp/vhe/switch.c         |  3 +--
 6 files changed, 32 insertions(+), 26 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 6b33f720ce9c..f6a478d3a902 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -726,7 +726,8 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
 /* Guest/host FPSIMD coordination helpers */
 int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu);
-void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu);
+void kvm_arch_vcpu_sync_fp_before_hyp(struct kvm_vcpu *vcpu);
+void kvm_arch_vcpu_sync_fp_after_hyp(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu);
 
 static inline bool kvm_pmu_counter_deferred(struct perf_event_attr *attr)
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index fc4c95dd2d26..26ccc369cf11 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -738,6 +738,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 
 		local_irq_disable();
 
+		kvm_arch_vcpu_sync_fp_before_hyp(vcpu);
+
 		kvm_vgic_flush_hwstate(vcpu);
 
 		/*
@@ -825,7 +827,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 		if (static_branch_unlikely(&userspace_irqchip_in_use))
 			kvm_timer_sync_user(vcpu);
 
-		kvm_arch_vcpu_ctxsync_fp(vcpu);
+		kvm_arch_vcpu_sync_fp_after_hyp(vcpu);
 
 		/*
 		 * We may have taken a host interrupt in HYP mode (ie
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 3e081d556e81..0c5e79be34d5 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -63,8 +63,13 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
 	BUG_ON(!current->mm);
 
 	vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED |
+			      KVM_ARM64_FP_HOST |
 			      KVM_ARM64_HOST_SVE_IN_USE |
 			      KVM_ARM64_HOST_SVE_ENABLED);
+
+	if (!system_supports_fpsimd())
+		return;
+
 	vcpu->arch.flags |= KVM_ARM64_FP_HOST;
 
 	if (test_thread_flag(TIF_SVE))
@@ -74,13 +79,32 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
 		vcpu->arch.flags |= KVM_ARM64_HOST_SVE_ENABLED;
 }
 
+
+/*
+ * If TIF_FOREIGN_FPSTATE is set, the FPSIMD regs do not contain the state of
+ * current or the guest. However, the state will have been saved where it was
+ * needed. This means the guest's state will have to be loaded if it is needed,
+ * without saving the FPSIMD regs.
+ */
+void kvm_arch_vcpu_sync_fp_before_hyp(struct kvm_vcpu *vcpu)
+{
+	WARN_ON_ONCE(!irqs_disabled());
+
+	if (!system_supports_fpsimd())
+		return;
+
+	if (test_thread_flag(TIF_FOREIGN_FPSTATE))
+		vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED |
+				      KVM_ARM64_FP_HOST);
+}
+
 /*
  * If the guest FPSIMD state was loaded, update the host's context
  * tracking data mark the CPU FPSIMD regs as dirty and belonging to vcpu
  * so that they will be written back if the kernel clobbers them due to
  * kernel-mode NEON before re-entry into the guest.
  */
-void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu)
+void kvm_arch_vcpu_sync_fp_after_hyp(struct kvm_vcpu *vcpu)
 {
 	WARN_ON_ONCE(!irqs_disabled());
 
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 54f4860cd87c..8eb1f87f9119 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -28,31 +28,12 @@
 #include <asm/fpsimd.h>
 #include <asm/debug-monitors.h>
 #include <asm/processor.h>
-#include <asm/thread_info.h>
 
 extern const char __hyp_panic_string[];
 
 extern struct exception_table_entry __start___kvm_ex_table;
 extern struct exception_table_entry __stop___kvm_ex_table;
 
-/* Check whether the FP regs were dirtied while in the host-side run loop: */
-static inline bool update_fp_enabled(struct kvm_vcpu *vcpu)
-{
-	/*
-	 * When the system doesn't support FP/SIMD, we cannot rely on
-	 * the _TIF_FOREIGN_FPSTATE flag. However, we always inject an
-	 * abort on the very first access to FP and thus we should never
-	 * see KVM_ARM64_FP_ENABLED. For added safety, make sure we always
-	 * trap the accesses.
-	 */
-	if (!system_supports_fpsimd() ||
-	    vcpu->arch.host_thread_info->flags & _TIF_FOREIGN_FPSTATE)
-		vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED |
-				      KVM_ARM64_FP_HOST);
-
-	return !!(vcpu->arch.flags & KVM_ARM64_FP_ENABLED);
-}
-
 /* Save the 32-bit only FPSIMD system register state */
 static inline void __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu)
 {
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index f3d0e9eca56c..6fc1e0a5adaa 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -26,7 +26,6 @@
 #include <asm/fpsimd.h>
 #include <asm/debug-monitors.h>
 #include <asm/processor.h>
-#include <asm/thread_info.h>
 
 /* Non-VHE specific context */
 DEFINE_PER_CPU(struct kvm_host_data, kvm_host_data);
@@ -42,7 +41,7 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
 
 	val = CPTR_EL2_DEFAULT;
 	val |= CPTR_EL2_TTA | CPTR_EL2_TZ | CPTR_EL2_TAM;
-	if (!update_fp_enabled(vcpu)) {
+	if (!(vcpu->arch.flags & KVM_ARM64_FP_ENABLED)) {
 		val |= CPTR_EL2_TFP;
 		__activate_traps_fpsimd32(vcpu);
 	}
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index af8e940d0f03..f6f60a537b3e 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -25,7 +25,6 @@
 #include <asm/fpsimd.h>
 #include <asm/debug-monitors.h>
 #include <asm/processor.h>
-#include <asm/thread_info.h>
 
 const char __hyp_panic_string[] = "HYP panic:\nPS:%08llx PC:%016llx ESR:%08llx\nFAR:%016llx HPFAR:%016llx PAR:%016llx\nVCPU:%p\n";
 
@@ -55,7 +54,7 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
 
 	val |= CPTR_EL2_TAM;
 
-	if (update_fp_enabled(vcpu)) {
+	if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) {
 		if (vcpu_has_sve(vcpu))
 			val |= CPACR_EL1_ZEN;
 	} else {
-- 
2.30.1.766.gb4fecdf3b7-goog


* [PATCH 03/10] KVM: arm64: Unmap host task thread flags from hyp
From: Andrew Scull @ 2021-03-04 11:54 UTC (permalink / raw)
  To: kvmarm; +Cc: kernel-team, maz, catalin.marinas, will, Dave.Martin

Hyp no longer needs access to the host task's thread flags so remove the
corresponding hyp mapping.

Signed-off-by: Andrew Scull <ascull@google.com>
Cc: Dave Martin <Dave.Martin@arm.com>
---
 arch/arm64/include/asm/kvm_host.h |  2 --
 arch/arm64/kvm/fpsimd.c           | 11 +----------
 2 files changed, 1 insertion(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index f6a478d3a902..8a559fa2f237 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -26,7 +26,6 @@
 #include <asm/fpsimd.h>
 #include <asm/kvm.h>
 #include <asm/kvm_asm.h>
-#include <asm/thread_info.h>
 
 #define __KVM_HAVE_ARCH_INTC_INITIALIZED
 
@@ -307,7 +306,6 @@ struct kvm_vcpu_arch {
 	struct kvm_guest_debug_arch vcpu_debug_state;
 	struct kvm_guest_debug_arch external_debug_state;
 
-	struct thread_info *host_thread_info;	/* hyp VA */
 	struct user_fpsimd_state *host_fpsimd_state;	/* hyp VA */
 
 	struct {
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 0c5e79be34d5..3e5a02137643 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -27,22 +27,13 @@ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu)
 {
 	int ret;
 
-	struct thread_info *ti = &current->thread_info;
 	struct user_fpsimd_state *fpsimd = &current->thread.uw.fpsimd_state;
 
-	/*
-	 * Make sure the host task thread flags and fpsimd state are
-	 * visible to hyp:
-	 */
-	ret = create_hyp_mappings(ti, ti + 1, PAGE_HYP);
-	if (ret)
-		goto error;
-
+	/* Make sure the host task fpsimd state is visible to hyp: */
 	ret = create_hyp_mappings(fpsimd, fpsimd + 1, PAGE_HYP);
 	if (ret)
 		goto error;
 
-	vcpu->arch.host_thread_info = kern_hyp_va(ti);
 	vcpu->arch.host_fpsimd_state = kern_hyp_va(fpsimd);
 error:
 	return ret;
-- 
2.30.1.766.gb4fecdf3b7-goog


* [PATCH 04/10] KVM: arm64: Support smp_processor_id() in nVHE hyp
From: Andrew Scull @ 2021-03-04 11:54 UTC (permalink / raw)
  To: kvmarm; +Cc: kernel-team, maz, catalin.marinas, will, Dave.Martin

smp_processor_id() works off of the cpu_number per-cpu variable. Create
an nVHE hyp version of cpu_number and initialize it to the same value
as the host's when creating the hyp per-cpu regions.
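
For comparison, the host side is defined along these lines (roughly, in
arch/arm64/include/asm/smp.h), so a hyp copy of cpu_number lets the
same mechanism work at EL2:

  DECLARE_PER_CPU_READ_MOSTLY(int, cpu_number);
  #define raw_smp_processor_id() (*raw_cpu_ptr(&cpu_number))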

Signed-off-by: Andrew Scull <ascull@google.com>
---
 arch/arm64/kvm/arm.c              | 2 ++
 arch/arm64/kvm/hyp/nvhe/hyp-smp.c | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 26ccc369cf11..e3edea8379f3 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -54,6 +54,7 @@ DECLARE_KVM_HYP_PER_CPU(unsigned long, kvm_hyp_vector);
 static DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
 unsigned long kvm_arm_hyp_percpu_base[NR_CPUS];
 DECLARE_KVM_NVHE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
+DECLARE_KVM_NVHE_PER_CPU(int, cpu_number);
 
 /* The VMID used in the VTTBR */
 static atomic64_t kvm_vmid_gen = ATOMIC64_INIT(1);
@@ -1740,6 +1741,7 @@ static int init_hyp_mode(void)
 		page_addr = page_address(page);
 		memcpy(page_addr, CHOOSE_NVHE_SYM(__per_cpu_start), nvhe_percpu_size());
 		kvm_arm_hyp_percpu_base[cpu] = (unsigned long)page_addr;
+		*per_cpu_ptr_nvhe_sym(cpu_number, cpu) = cpu;
 	}
 
 	/*
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-smp.c b/arch/arm64/kvm/hyp/nvhe/hyp-smp.c
index 879559057dee..86f952b1de18 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-smp.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-smp.c
@@ -8,6 +8,8 @@
 #include <asm/kvm_hyp.h>
 #include <asm/kvm_mmu.h>
 
+DEFINE_PER_CPU_READ_MOSTLY(int, cpu_number);
+
 /*
  * nVHE copy of data structures tracking available CPU cores.
  * Only entries for CPUs that were online at KVM init are populated.
-- 
2.30.1.766.gb4fecdf3b7-goog


* [PATCH 05/10] KVM: arm64: Track where vcpu FP state was last loaded
From: Andrew Scull @ 2021-03-04 11:54 UTC (permalink / raw)
  To: kvmarm; +Cc: kernel-team, maz, catalin.marinas, will, Dave.Martin

Keep track of the cpu that a vcpu's FP state was last loaded onto. This
information is needed in order to tell whether a vcpu's latest FP state
is already loaded on a cpu to avoid unnecessary reloading.

The method follows the pattern used by thread_struct whereby an
fpsimd_cpu field is added and updated when the state is loaded.
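
The check this enables has the form sketched below; note that
fpsimd_is_bound_to_cpu() only arrives in the next patch:

  /* The vcpu's FP state is live in this cpu's regs only if both hold: */
  if (fpsimd_is_bound_to_cpu(&vcpu->arch.ctxt.fp_regs) &&
      vcpu->arch.fpsimd_cpu == smp_processor_id())
	/* skip the reload */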

Signed-off-by: Andrew Scull <ascull@google.com>
Cc: Dave Martin <Dave.Martin@arm.com>
---
 arch/arm64/include/asm/kvm_host.h       | 1 +
 arch/arm64/kvm/arm.c                    | 2 ++
 arch/arm64/kvm/hyp/include/hyp/switch.h | 2 ++
 3 files changed, 5 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 8a559fa2f237..a01194371ae5 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -306,6 +306,7 @@ struct kvm_vcpu_arch {
 	struct kvm_guest_debug_arch vcpu_debug_state;
 	struct kvm_guest_debug_arch external_debug_state;
 
+	int fpsimd_cpu;
 	struct user_fpsimd_state *host_fpsimd_state;	/* hyp VA */
 
 	struct {
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index e3edea8379f3..87141c8d63e6 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -313,6 +313,8 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 
 	vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
 
+	vcpu->arch.fpsimd_cpu = NR_CPUS;
+
 	/* Set up the timer */
 	kvm_timer_vcpu_init(vcpu);
 
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 8eb1f87f9119..1afee8557ddf 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -259,6 +259,8 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
 	if (!(read_sysreg(hcr_el2) & HCR_RW))
 		write_sysreg(__vcpu_sys_reg(vcpu, FPEXC32_EL2), fpexc32_el2);
 
+	vcpu->arch.fpsimd_cpu = smp_processor_id();
+
 	vcpu->arch.flags |= KVM_ARM64_FP_ENABLED;
 
 	return true;
-- 
2.30.1.766.gb4fecdf3b7-goog


* [PATCH 06/10] KVM: arm64: Avoid needlessly reloading guest FP state
From: Andrew Scull @ 2021-03-04 11:54 UTC (permalink / raw)
  To: kvmarm; +Cc: kernel-team, maz, catalin.marinas, will, Dave.Martin

When returning to a user task from a vcpu, keep track of the vcpu's
state being left in the registers so that the state can be reinstated
on the next entry if the registers still contain the vcpu's latest
state.

This avoids the need to trap and restore in the case that the vcpu's
registers are still in place. The state must still be saved when
switching away from the vcpu, to allow the vcpu to move to another core
or the task to load its own state.

Signed-off-by: Andrew Scull <ascull@google.com>
Cc: Dave Martin <Dave.Martin@arm.com>
---
 arch/arm64/include/asm/fpsimd.h |  1 +
 arch/arm64/kernel/fpsimd.c      | 11 +++++++++--
 arch/arm64/kvm/fpsimd.c         | 18 ++++++++++++++++--
 3 files changed, 26 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
index bec5f14b622a..fc0b932211f0 100644
--- a/arch/arm64/include/asm/fpsimd.h
+++ b/arch/arm64/include/asm/fpsimd.h
@@ -48,6 +48,7 @@ extern void fpsimd_update_current_state(struct user_fpsimd_state const *state);
 extern void fpsimd_bind_task_to_cpu(void);
 extern void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *state,
 				     void *sve_state, unsigned int sve_vl);
+extern bool fpsimd_is_bound_to_cpu(struct user_fpsimd_state *state);
 
 extern void fpsimd_flush_task_state(struct task_struct *target);
 extern void fpsimd_save_and_flush_cpu_state(void);
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 062b21f30f94..683675b5d198 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -1009,8 +1009,7 @@ void fpsimd_thread_switch(struct task_struct *next)
 	 * state.  For kernel threads, FPSIMD registers are never loaded
 	 * and wrong_task and wrong_cpu will always be true.
 	 */
-	wrong_task = __this_cpu_read(fpsimd_last_state.st) !=
-					&next->thread.uw.fpsimd_state;
+	wrong_task = !fpsimd_is_bound_to_cpu(&next->thread.uw.fpsimd_state);
 	wrong_cpu = next->thread.fpsimd_cpu != smp_processor_id();
 
 	update_tsk_thread_flag(next, TIF_FOREIGN_FPSTATE,
@@ -1137,6 +1136,14 @@ void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *st, void *sve_state,
 	last->sve_vl = sve_vl;
 }
 
+bool fpsimd_is_bound_to_cpu(struct user_fpsimd_state *st)
+{
+	WARN_ON(!system_supports_fpsimd());
+	WARN_ON(!in_softirq() && !irqs_disabled());
+
+	return __this_cpu_read(fpsimd_last_state.st) == st;
+}
+
 /*
  * Load the userland FPSIMD state of 'current' from memory, but only if the
  * FPSIMD state already held in the registers is /not/ the most recent FPSIMD
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 3e5a02137643..dcc5bfad5408 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -51,6 +51,8 @@ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu)
  */
 void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
 {
+	unsigned long flags;
+
 	BUG_ON(!current->mm);
 
 	vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED |
@@ -61,13 +63,25 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
 	if (!system_supports_fpsimd())
 		return;
 
-	vcpu->arch.flags |= KVM_ARM64_FP_HOST;
+	local_irq_save(flags);
+
+	if (fpsimd_is_bound_to_cpu(&vcpu->arch.ctxt.fp_regs) &&
+	    vcpu->arch.fpsimd_cpu == smp_processor_id()) {
+		clear_thread_flag(TIF_FOREIGN_FPSTATE);
+		update_thread_flag(TIF_SVE, vcpu_has_sve(vcpu));
+
+		vcpu->arch.flags |= KVM_ARM64_FP_ENABLED;
+	} else {
+		vcpu->arch.flags |= KVM_ARM64_FP_HOST;
+	}
 
 	if (test_thread_flag(TIF_SVE))
 		vcpu->arch.flags |= KVM_ARM64_HOST_SVE_IN_USE;
 
 	if (read_sysreg(cpacr_el1) & CPACR_EL1_ZEN_EL0EN)
 		vcpu->arch.flags |= KVM_ARM64_HOST_SVE_ENABLED;
+
+	local_irq_restore(flags);
 }
 
 
@@ -124,7 +138,7 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
 	local_irq_save(flags);
 
 	if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) {
-		fpsimd_save_and_flush_cpu_state();
+		fpsimd_thread_switch(current);
 
 		if (guest_has_sve)
 			__vcpu_sys_reg(vcpu, ZCR_EL1) = read_sysreg_s(SYS_ZCR_EL12);
-- 
2.30.1.766.gb4fecdf3b7-goog


* [PATCH 07/10] KVM: arm64: Separate host and hyp vcpu FP flags
From: Andrew Scull @ 2021-03-04 11:54 UTC (permalink / raw)
  To: kvmarm; +Cc: kernel-team, maz, catalin.marinas, will, Dave.Martin

The FP flags on the vcpu are used as arguments to hyp and to track the
state as hyp runs. In protected mode, this sort of state needs to be
safe from meddling by the host. Begin the separation with the FP flags.

Since protected mode is only available for nVHE and nVHE does not yet
support SVE, the SVE flags are left untouched.

Signed-off-by: Andrew Scull <ascull@google.com>
---
 arch/arm64/include/asm/kvm_host.h       | 33 ++++++++++++++++++-------
 arch/arm64/kvm/fpsimd.c                 | 24 +++++++++---------
 arch/arm64/kvm/hyp/include/hyp/switch.h |  6 ++---
 arch/arm64/kvm/hyp/nvhe/switch.c        |  4 +--
 arch/arm64/kvm/hyp/vhe/switch.c         |  4 +--
 5 files changed, 44 insertions(+), 27 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index a01194371ae5..8c5242d4ed73 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -268,6 +268,16 @@ struct vcpu_reset_state {
 	bool		reset;
 };
 
+/*
+ * State that affects the behaviour of hyp when running a vcpu. In protected
+ * mode, the hypervisor will have a private copy of this state so that the host
+ * cannot interfere with the hyp while it is running.
+ */
+struct kvm_vcpu_arch_run {
+	/* Miscellaneous vcpu run state flags */
+	u64 flags;
+};
+
 struct kvm_vcpu_arch {
 	struct kvm_cpu_context ctxt;
 	void *sve_state;
@@ -289,6 +299,9 @@ struct kvm_vcpu_arch {
 	/* Miscellaneous vcpu state flags */
 	u64 flags;
 
+	/* State to manage running of the vcpu by hyp */
+	struct kvm_vcpu_arch_run run;
+
 	/*
 	 * We maintain more than a single set of debug registers to support
 	 * debugging the guest from the host and to maintain separate host and
@@ -390,15 +403,17 @@ struct kvm_vcpu_arch {
 
 /* vcpu_arch flags field values: */
 #define KVM_ARM64_DEBUG_DIRTY		(1 << 0) /* vcpu is using debug */
-#define KVM_ARM64_FP_ENABLED		(1 << 1) /* guest FP regs loaded */
-#define KVM_ARM64_FP_HOST		(1 << 2) /* host FP regs loaded */
-#define KVM_ARM64_HOST_SVE_IN_USE	(1 << 3) /* backup for host TIF_SVE */
-#define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
-#define KVM_ARM64_GUEST_HAS_SVE		(1 << 5) /* SVE exposed to guest */
-#define KVM_ARM64_VCPU_SVE_FINALIZED	(1 << 6) /* SVE config completed */
-#define KVM_ARM64_GUEST_HAS_PTRAUTH	(1 << 7) /* PTRAUTH exposed to guest */
-#define KVM_ARM64_PENDING_EXCEPTION	(1 << 8) /* Exception pending */
-#define KVM_ARM64_EXCEPT_MASK		(7 << 9) /* Target EL/MODE */
+#define KVM_ARM64_HOST_SVE_IN_USE	(1 << 1) /* backup for host TIF_SVE */
+#define KVM_ARM64_HOST_SVE_ENABLED	(1 << 2) /* SVE enabled for EL0 */
+#define KVM_ARM64_GUEST_HAS_SVE		(1 << 3) /* SVE exposed to guest */
+#define KVM_ARM64_VCPU_SVE_FINALIZED	(1 << 4) /* SVE config completed */
+#define KVM_ARM64_GUEST_HAS_PTRAUTH	(1 << 5) /* PTRAUTH exposed to guest */
+#define KVM_ARM64_PENDING_EXCEPTION	(1 << 6) /* Exception pending */
+#define KVM_ARM64_EXCEPT_MASK		(7 << 7) /* Target EL/MODE */
+
+/* vcpu_arch_run flags field values: */
+#define KVM_ARM64_RUN_FP_ENABLED	(1 << 0) /* guest FP regs loaded */
+#define KVM_ARM64_RUN_FP_HOST		(1 << 1) /* host FP regs loaded */
 
 /*
  * When KVM_ARM64_PENDING_EXCEPTION is set, KVM_ARM64_EXCEPT_MASK can
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index dcc5bfad5408..74a4b55d1b37 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -43,8 +43,9 @@ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu)
  * Prepare vcpu for saving the host's FPSIMD state and loading the guest's.
  * The actual loading is done by the FPSIMD access trap taken to hyp.
  *
- * Here, we just set the correct metadata to indicate that the FPSIMD
- * state in the cpu regs (if any) belongs to current on the host.
+ * Here, we just set the correct metadata to indicate that the FPSIMD state in
+ * the cpu regs (if any) belongs to current on the host and will need to be
+ * saved before replacing it.
  *
  * TIF_SVE is backed up here, since it may get clobbered with guest state.
  * This flag is restored by kvm_arch_vcpu_put_fp(vcpu).
@@ -55,9 +56,10 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
 
 	BUG_ON(!current->mm);
 
-	vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED |
-			      KVM_ARM64_FP_HOST |
-			      KVM_ARM64_HOST_SVE_IN_USE |
+	vcpu->arch.run.flags &= ~(KVM_ARM64_RUN_FP_ENABLED |
+				  KVM_ARM64_RUN_FP_HOST);
+
+	vcpu->arch.flags &= ~(KVM_ARM64_HOST_SVE_IN_USE |
 			      KVM_ARM64_HOST_SVE_ENABLED);
 
 	if (!system_supports_fpsimd())
@@ -70,9 +72,9 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
 		clear_thread_flag(TIF_FOREIGN_FPSTATE);
 		update_thread_flag(TIF_SVE, vcpu_has_sve(vcpu));
 
-		vcpu->arch.flags |= KVM_ARM64_FP_ENABLED;
+		vcpu->arch.run.flags |= KVM_ARM64_RUN_FP_ENABLED;
 	} else {
-		vcpu->arch.flags |= KVM_ARM64_FP_HOST;
+		vcpu->arch.run.flags |= KVM_ARM64_RUN_FP_HOST;
 	}
 
 	if (test_thread_flag(TIF_SVE))
@@ -99,8 +101,8 @@ void kvm_arch_vcpu_sync_fp_before_hyp(struct kvm_vcpu *vcpu)
 		return;
 
 	if (test_thread_flag(TIF_FOREIGN_FPSTATE))
-		vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED |
-				      KVM_ARM64_FP_HOST);
+		vcpu->arch.run.flags &= ~(KVM_ARM64_RUN_FP_ENABLED |
+					  KVM_ARM64_RUN_FP_HOST);
 }
 
 /*
@@ -113,7 +115,7 @@ void kvm_arch_vcpu_sync_fp_after_hyp(struct kvm_vcpu *vcpu)
 {
 	WARN_ON_ONCE(!irqs_disabled());
 
-	if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) {
+	if (vcpu->arch.run.flags & KVM_ARM64_RUN_FP_ENABLED) {
 		fpsimd_bind_state_to_cpu(&vcpu->arch.ctxt.fp_regs,
 					 vcpu->arch.sve_state,
 					 vcpu->arch.sve_max_vl);
@@ -137,7 +139,7 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
 
 	local_irq_save(flags);
 
-	if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) {
+	if (vcpu->arch.run.flags & KVM_ARM64_RUN_FP_ENABLED) {
 		fpsimd_thread_switch(current);
 
 		if (guest_has_sve)
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 1afee8557ddf..3f299c7d42cd 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -227,7 +227,7 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
 
 	isb();
 
-	if (vcpu->arch.flags & KVM_ARM64_FP_HOST) {
+	if (vcpu->arch.run.flags & KVM_ARM64_RUN_FP_HOST) {
 		/*
 		 * In the SVE case, VHE is assumed: it is enforced by
 		 * Kconfig and kvm_arch_init().
@@ -243,7 +243,7 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
 			__fpsimd_save_state(vcpu->arch.host_fpsimd_state);
 		}
 
-		vcpu->arch.flags &= ~KVM_ARM64_FP_HOST;
+		vcpu->arch.run.flags &= ~KVM_ARM64_RUN_FP_HOST;
 	}
 
 	if (sve_guest) {
@@ -261,7 +261,7 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
 
 	vcpu->arch.fpsimd_cpu = smp_processor_id();
 
-	vcpu->arch.flags |= KVM_ARM64_FP_ENABLED;
+	vcpu->arch.run.flags |= KVM_ARM64_RUN_FP_ENABLED;
 
 	return true;
 }
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 6fc1e0a5adaa..f0a32c993ac4 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -41,7 +41,7 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
 
 	val = CPTR_EL2_DEFAULT;
 	val |= CPTR_EL2_TTA | CPTR_EL2_TZ | CPTR_EL2_TAM;
-	if (!(vcpu->arch.flags & KVM_ARM64_FP_ENABLED)) {
+	if (!(vcpu->arch.run.flags & KVM_ARM64_RUN_FP_ENABLED)) {
 		val |= CPTR_EL2_TFP;
 		__activate_traps_fpsimd32(vcpu);
 	}
@@ -230,7 +230,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 
 	__sysreg_restore_state_nvhe(host_ctxt);
 
-	if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED)
+	if (vcpu->arch.run.flags & KVM_ARM64_RUN_FP_ENABLED)
 		__fpsimd_save_fpexc32(vcpu);
 
 	/*
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index f6f60a537b3e..5bb6a2cf574d 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -54,7 +54,7 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
 
 	val |= CPTR_EL2_TAM;
 
-	if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) {
+	if (vcpu->arch.run.flags & KVM_ARM64_RUN_FP_ENABLED) {
 		if (vcpu_has_sve(vcpu))
 			val |= CPACR_EL1_ZEN;
 	} else {
@@ -151,7 +151,7 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 
 	sysreg_restore_host_state_vhe(host_ctxt);
 
-	if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED)
+	if (vcpu->arch.run.flags & KVM_ARM64_RUN_FP_ENABLED)
 		__fpsimd_save_fpexc32(vcpu);
 
 	__debug_switch_to_host(vcpu);
-- 
2.30.1.766.gb4fecdf3b7-goog


* [PATCH 08/10] KVM: arm64: Pass the arch run struct explicitly
From: Andrew Scull @ 2021-03-04 11:54 UTC (permalink / raw)
  To: kvmarm; +Cc: kernel-team, maz, catalin.marinas, will, Dave.Martin

Rather than accessing struct kvm_vcpu_arch_run via the vcpu, pass it
explicitly as an argument where needed. This will allow a hyp-private
copy of the struct to be swapped in when running in protected mode.

Signed-off-by: Andrew Scull <ascull@google.com>
---
 arch/arm64/kvm/hyp/include/hyp/switch.h | 15 +++++++++------
 arch/arm64/kvm/hyp/nvhe/switch.c        |  8 ++++----
 arch/arm64/kvm/hyp/vhe/switch.c         |  2 +-
 3 files changed, 14 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 3f299c7d42cd..53120cccd2a5 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -178,7 +178,8 @@ static inline bool __populate_fault_info(struct kvm_vcpu *vcpu)
 }
 
 /* Check for an FPSIMD/SVE trap and handle as appropriate */
-static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
+static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu,
+				       struct kvm_vcpu_arch_run *run)
 {
 	bool vhe, sve_guest, sve_host;
 	u8 esr_ec;
@@ -227,7 +228,7 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
 
 	isb();
 
-	if (vcpu->arch.run.flags & KVM_ARM64_RUN_FP_HOST) {
+	if (run->flags & KVM_ARM64_RUN_FP_HOST) {
 		/*
 		 * In the SVE case, VHE is assumed: it is enforced by
 		 * Kconfig and kvm_arch_init().
@@ -243,7 +244,7 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
 			__fpsimd_save_state(vcpu->arch.host_fpsimd_state);
 		}
 
-		vcpu->arch.run.flags &= ~KVM_ARM64_RUN_FP_HOST;
+		run->flags &= ~KVM_ARM64_RUN_FP_HOST;
 	}
 
 	if (sve_guest) {
@@ -261,7 +262,7 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
 
 	vcpu->arch.fpsimd_cpu = smp_processor_id();
 
-	vcpu->arch.run.flags |= KVM_ARM64_RUN_FP_ENABLED;
+	run->flags |= KVM_ARM64_RUN_FP_ENABLED;
 
 	return true;
 }
@@ -389,7 +390,9 @@ static inline bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu)
  * the guest, false when we should restore the host state and return to the
  * main run loop.
  */
-static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
+static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu,
+				    struct kvm_vcpu_arch_run *run,
+				    u64 *exit_code)
 {
 	if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ)
 		vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR);
@@ -430,7 +433,7 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 	 * undefined instruction exception to the guest.
 	 * Similarly for trapped SVE accesses.
 	 */
-	if (__hyp_handle_fpsimd(vcpu))
+	if (__hyp_handle_fpsimd(vcpu, run))
 		goto guest;
 
 	if (__hyp_handle_ptrauth(vcpu))
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index f0a32c993ac4..076c2200324f 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -32,7 +32,7 @@ DEFINE_PER_CPU(struct kvm_host_data, kvm_host_data);
 DEFINE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
 DEFINE_PER_CPU(unsigned long, kvm_hyp_vector);
 
-static void __activate_traps(struct kvm_vcpu *vcpu)
+static void __activate_traps(struct kvm_vcpu *vcpu, struct kvm_vcpu_arch_run *run)
 {
 	u64 val;
 
@@ -41,7 +41,7 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
 
 	val = CPTR_EL2_DEFAULT;
 	val |= CPTR_EL2_TTA | CPTR_EL2_TZ | CPTR_EL2_TAM;
-	if (!(vcpu->arch.run.flags & KVM_ARM64_RUN_FP_ENABLED)) {
+	if (!(run->flags & KVM_ARM64_RUN_FP_ENABLED)) {
 		val |= CPTR_EL2_TFP;
 		__activate_traps_fpsimd32(vcpu);
 	}
@@ -206,7 +206,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	__sysreg_restore_state_nvhe(guest_ctxt);
 
 	__load_guest_stage2(kern_hyp_va(vcpu->arch.hw_mmu));
-	__activate_traps(vcpu);
+	__activate_traps(vcpu, &vcpu->arch.run);
 
 	__hyp_vgic_restore_state(vcpu);
 	__timer_enable_traps(vcpu);
@@ -218,7 +218,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 		exit_code = __guest_enter(vcpu);
 
 		/* And we're baaack! */
-	} while (fixup_guest_exit(vcpu, &exit_code));
+	} while (fixup_guest_exit(vcpu, &vcpu->arch.run, &exit_code));
 
 	__sysreg_save_state_nvhe(guest_ctxt);
 	__sysreg32_save_state(vcpu);
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 5bb6a2cf574d..ff3ce150d636 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -143,7 +143,7 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 		exit_code = __guest_enter(vcpu);
 
 		/* And we're baaack! */
-	} while (fixup_guest_exit(vcpu, &exit_code));
+	} while (fixup_guest_exit(vcpu, &vcpu->arch.run, &exit_code));
 
 	sysreg_save_guest_state_vhe(guest_ctxt);
 
-- 
2.30.1.766.gb4fecdf3b7-goog


* [PATCH 09/10] KVM: arm64: Use hyp-private run struct in protected mode
From: Andrew Scull @ 2021-03-04 11:54 UTC (permalink / raw)
  To: kvmarm; +Cc: kernel-team, maz, catalin.marinas, will, Dave.Martin

The run struct affects how hyp handles the guest's state so it needs to
be kept safe from the host in protected mode. Copy the relevant values
into hyp-private memory while running a vcpu to achieve this.

In the traditional, non-protected mode, there's no need to protect the
values from the host, so the run struct in host memory is used
directly.

Signed-off-by: Andrew Scull <ascull@google.com>
---
 arch/arm64/kvm/hyp/nvhe/switch.c | 33 +++++++++++++++++++++++++++++---
 1 file changed, 30 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 076c2200324f..a0fbaf0ee309 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -165,9 +165,26 @@ static void __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt)
 		write_sysreg(pmu->events_host, pmcntenset_el0);
 }
 
+/* Snapshot state from the host into private memory and sanitize it. */
+void __sync_vcpu_before_run(struct kvm_vcpu *vcpu, struct kvm_vcpu_arch_run *run)
+{
+	run->flags = vcpu->arch.run.flags;
+
+	/* Clear host state to make misuse apparent. */
+	vcpu->arch.run.flags = 0;
+}
+
+/* Sanitize the run state before writing it back to the host. */
+void __sync_vcpu_after_run(struct kvm_vcpu *vcpu, struct kvm_vcpu_arch_run *run)
+{
+	vcpu->arch.run.flags = run->flags;
+}
+
 /* Switch to the guest for legacy non-VHE systems */
 int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 {
+	struct kvm_vcpu_arch_run protected_run;
+	struct kvm_vcpu_arch_run *run;
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_cpu_context *guest_ctxt;
 	bool pmu_switch_needed;
@@ -184,6 +201,13 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 		pmr_sync();
 	}
 
+	if (is_protected_kvm_enabled()) {
+		run = &protected_run;
+		__sync_vcpu_before_run(vcpu, run);
+	} else {
+		run = &vcpu->arch.run;
+	}
+
 	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
 	host_ctxt->__hyp_running_vcpu = vcpu;
 	guest_ctxt = &vcpu->arch.ctxt;
@@ -206,7 +230,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	__sysreg_restore_state_nvhe(guest_ctxt);
 
 	__load_guest_stage2(kern_hyp_va(vcpu->arch.hw_mmu));
-	__activate_traps(vcpu, &vcpu->arch.run);
+	__activate_traps(vcpu, run);
 
 	__hyp_vgic_restore_state(vcpu);
 	__timer_enable_traps(vcpu);
@@ -218,7 +242,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 		exit_code = __guest_enter(vcpu);
 
 		/* And we're baaack! */
-	} while (fixup_guest_exit(vcpu, &vcpu->arch.run, &exit_code));
+	} while (fixup_guest_exit(vcpu, run, &exit_code));
 
 	__sysreg_save_state_nvhe(guest_ctxt);
 	__sysreg32_save_state(vcpu);
@@ -230,7 +254,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 
 	__sysreg_restore_state_nvhe(host_ctxt);
 
-	if (vcpu->arch.run.flags & KVM_ARM64_RUN_FP_ENABLED)
+	if (run->flags & KVM_ARM64_RUN_FP_ENABLED)
 		__fpsimd_save_fpexc32(vcpu);
 
 	/*
@@ -248,6 +272,9 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 
 	host_ctxt->__hyp_running_vcpu = NULL;
 
+	if (is_protected_kvm_enabled())
+		__sync_vcpu_after_run(vcpu, run);
+
 	return exit_code;
 }
 
-- 
2.30.1.766.gb4fecdf3b7-goog


* [PATCH 10/10] RFC: KVM: arm64: Manage FPSIMD state at EL2 for protected vCPUs
From: Andrew Scull @ 2021-03-04 11:54 UTC (permalink / raw)
  To: kvmarm; +Cc: kernel-team, maz, catalin.marinas, will, Dave.Martin

A protected VM's FPSIMD state must not be exposed to the host. Since
the FPSIMD state is switched lazily, hyp must take precautions to
prevent leaks. Do this by trapping FP accesses to EL2 to lazily save a
protected guest's state and lazily restore the host's state.

This is a little ahead of its time since it requires knowing which
vcpus are protected (see the TODO).
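
As a simplified sketch, the ownership tracking added here boils down
to:

  /*
   * kvm_protected_vcpu_fpsimd == NULL: the regs hold host state
   * kvm_protected_vcpu_fpsimd == vcpu: the regs hold that protected
   *                                    vcpu's state
   */

  /* Host FP trap taken to EL2: restore the host state saved by hyp. */
  if (__this_cpu_read(kvm_protected_vcpu_fpsimd) != NULL) {
	__fpsimd_restore_state(&host_data->fpsimd_state);
	__this_cpu_write(kvm_protected_vcpu_fpsimd, NULL);
  }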

Signed-off-by: Andrew Scull <ascull@google.com>
---
 arch/arm64/include/asm/kvm_host.h       |  5 ++
 arch/arm64/include/asm/kvm_hyp.h        |  1 +
 arch/arm64/kvm/hyp/include/hyp/switch.h | 21 +++++++-
 arch/arm64/kvm/hyp/nvhe/hyp-main.c      | 24 ++++++++++
 arch/arm64/kvm/hyp/nvhe/switch.c        | 64 +++++++++++++++++++++----
 arch/arm64/kvm/hyp/vhe/switch.c         |  1 +
 6 files changed, 105 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 8c5242d4ed73..5e39e1d7b41b 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -236,6 +236,8 @@ struct kvm_pmu_events {
 
 struct kvm_host_data {
 	struct kvm_cpu_context host_ctxt;
+	uint64_t fpexc32_el2;
+	struct user_fpsimd_state fpsimd_state;
 	struct kvm_pmu_events pmu_events;
 };
 
@@ -274,6 +276,9 @@ struct vcpu_reset_state {
  * cannot interfere with the hyp while it is running.
  */
 struct kvm_vcpu_arch_run {
+	/* Whether the vcpu is running as part of a protected vm */
+	bool protected;
+
 	/* Miscellaneous vcpu run state flags */
 	u64 flags;
 };
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index c0450828378b..35f5c939a222 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -13,6 +13,7 @@
 #include <asm/sysreg.h>
 
 DECLARE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
+DECLARE_PER_CPU(struct kvm_vcpu *, kvm_protected_vcpu_fpsimd);
 DECLARE_PER_CPU(unsigned long, kvm_hyp_vector);
 DECLARE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
 
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 53120cccd2a5..f387e8aa25df 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -181,6 +181,7 @@ static inline bool __populate_fault_info(struct kvm_vcpu *vcpu)
 static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu,
 				       struct kvm_vcpu_arch_run *run)
 {
+	struct kvm_host_data *host_data;
 	bool vhe, sve_guest, sve_host;
 	u8 esr_ec;
 
@@ -228,12 +229,27 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu,
 
 	isb();
 
+
+	if (run->protected) {
+		/* A protected vcpu's state might already be in registers. */
+		if (__this_cpu_read(kvm_protected_vcpu_fpsimd) == vcpu &&
+		    vcpu->arch.fpsimd_cpu == smp_processor_id()) {
+			goto out;
+		}
+
+		host_data = this_cpu_ptr(&kvm_host_data);
+	}
+
 	if (run->flags & KVM_ARM64_RUN_FP_HOST) {
 		/*
 		 * In the SVE case, VHE is assumed: it is enforced by
 		 * Kconfig and kvm_arch_init().
 		 */
-		if (sve_host) {
+		if (run->protected) {
+			if (cpus_have_const_cap(ARM64_HAS_32BIT_EL1))
+				host_data->fpexc32_el2 = read_sysreg(fpexc32_el2);
+			__fpsimd_save_state(&host_data->fpsimd_state);
+		} else if (sve_host) {
 			struct thread_struct *thread = container_of(
 				vcpu->arch.host_fpsimd_state,
 				struct thread_struct, uw.fpsimd_state);
@@ -260,8 +276,11 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu,
 	if (!(read_sysreg(hcr_el2) & HCR_RW))
 		write_sysreg(__vcpu_sys_reg(vcpu, FPEXC32_EL2), fpexc32_el2);
 
+	__this_cpu_write(kvm_protected_vcpu_fpsimd, run->protected ? vcpu : NULL);
+
 	vcpu->arch.fpsimd_cpu = smp_processor_id();
 
+out:
 	run->flags |= KVM_ARM64_RUN_FP_ENABLED;
 
 	return true;
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index f012f8665ecc..bb77578c79d0 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -166,6 +166,27 @@ static void handle_host_smc(struct kvm_cpu_context *host_ctxt)
 	kvm_skip_host_instr();
 }
 
+static void handle_host_fpsimd(struct kvm_cpu_context *host_ctxt)
+{
+	struct kvm_host_data *host_data;
+
+	write_sysreg(read_sysreg(cptr_el2) & ~(u64)CPTR_EL2_TFP, cptr_el2);
+
+	/*
+	 * An FPSIMD trap from the host means the host's state has been saved
+	 * by hyp and needs to be restored.
+	 */
+	if (__this_cpu_read(kvm_protected_vcpu_fpsimd) == NULL)
+		return;
+
+	host_data = this_cpu_ptr(&kvm_host_data);
+	if (cpus_have_const_cap(ARM64_HAS_32BIT_EL1))
+		write_sysreg(host_data->fpexc32_el2, fpexc32_el2);
+	__fpsimd_restore_state(&host_data->fpsimd_state);
+
+	__this_cpu_write(kvm_protected_vcpu_fpsimd, NULL);
+}
+
 void handle_trap(struct kvm_cpu_context *host_ctxt)
 {
 	u64 esr = read_sysreg_el2(SYS_ESR);
@@ -177,6 +198,9 @@ void handle_trap(struct kvm_cpu_context *host_ctxt)
 	case ESR_ELx_EC_SMC64:
 		handle_host_smc(host_ctxt);
 		break;
+	case ESR_ELx_EC_FP_ASIMD:
+		handle_host_fpsimd(host_ctxt);
+		break;
 	default:
 		hyp_panic();
 	}
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index a0fbaf0ee309..5723baea14f1 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -30,8 +30,11 @@
 /* Non-VHE specific context */
 DEFINE_PER_CPU(struct kvm_host_data, kvm_host_data);
 DEFINE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
+DEFINE_PER_CPU(struct kvm_vcpu *, kvm_protected_vcpu_fpsimd);
 DEFINE_PER_CPU(unsigned long, kvm_hyp_vector);
 
+static DEFINE_PER_CPU(struct kvm_vcpu_arch_run *, kvm_hyp_vcpu_run);
+
 static void __activate_traps(struct kvm_vcpu *vcpu, struct kvm_vcpu_arch_run *run)
 {
 	u64 val;
@@ -64,10 +67,12 @@ static void __activate_traps(struct kvm_vcpu *vcpu, struct kvm_vcpu_arch_run *ru
 	}
 }
 
-static void __deactivate_traps(struct kvm_vcpu *vcpu)
+static void __deactivate_traps(struct kvm_vcpu *vcpu, struct kvm_vcpu_arch_run *run)
 {
 	extern char __kvm_hyp_host_vector[];
 	u64 mdcr_el2;
+	u64 hcr_el2;
+	u64 cptr_el2;
 
 	___deactivate_traps(vcpu);
 
@@ -95,12 +100,18 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu)
 	mdcr_el2 &= MDCR_EL2_HPMN_MASK;
 	mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
 
-	write_sysreg(mdcr_el2, mdcr_el2);
 	if (is_protected_kvm_enabled())
-		write_sysreg(HCR_HOST_NVHE_PROTECTED_FLAGS, hcr_el2);
+		hcr_el2 = HCR_HOST_NVHE_PROTECTED_FLAGS;
 	else
-		write_sysreg(HCR_HOST_NVHE_FLAGS, hcr_el2);
-	write_sysreg(CPTR_EL2_DEFAULT, cptr_el2);
+		hcr_el2 = HCR_HOST_NVHE_FLAGS;
+
+	cptr_el2 = CPTR_EL2_DEFAULT;
+	if (run->protected)
+		cptr_el2 |= CPTR_EL2_TFP;
+
+	write_sysreg(mdcr_el2, mdcr_el2);
+	write_sysreg(hcr_el2, hcr_el2);
+	write_sysreg(cptr_el2, cptr_el2);
 	write_sysreg(__kvm_hyp_host_vector, vbar_el2);
 }
 
@@ -172,11 +183,36 @@ void __sync_vcpu_before_run(struct kvm_vcpu *vcpu, struct kvm_vcpu_arch_run *run
 
 	/* Clear host state to make misuse apparent. */
 	vcpu->arch.run.flags = 0;
+
+	if (run->protected) {
+		/*
+		 * For protected vCPUs, always initially disable FPSIMD so we
+		 * can avoid saving the state if it isn't used, but if it is
+		 * used, only save the state for the host if the host state is
+		 * loaded.
+		 */
+		run->flags &= ~(KVM_ARM64_RUN_FP_ENABLED |
+				KVM_ARM64_RUN_FP_HOST);
+		if (__this_cpu_read(kvm_protected_vcpu_fpsimd) == NULL)
+			run->flags |= KVM_ARM64_RUN_FP_HOST;
+	} else {
+		/*
+		 * For non-protected vCPUs on a system that can also host
+		 * protected vCPUs, ensure protected vCPU FPSIMD state isn't
+		 * used by another vCPU or saved as the host state.
+		 */
+		if (__this_cpu_read(kvm_protected_vcpu_fpsimd) != NULL)
+			run->flags &= ~(KVM_ARM64_RUN_FP_ENABLED |
+					KVM_ARM64_RUN_FP_HOST);
+	}
 }
 
 /* Sanitize the run state before writing it back to the host. */
 void __sync_vcpu_after_run(struct kvm_vcpu *vcpu, struct kvm_vcpu_arch_run *run)
 {
+	if (run->protected)
+		return;
+
 	vcpu->arch.run.flags = run->flags;
 }
 
@@ -203,10 +239,13 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 
 	if (is_protected_kvm_enabled()) {
 		run = &protected_run;
+		/* TODO: safely check vcpu and set run->protected accordingly. */
+		run->protected = true;
 		__sync_vcpu_before_run(vcpu, run);
 	} else {
 		run = &vcpu->arch.run;
 	}
+	__this_cpu_write(kvm_hyp_vcpu_run, run);
 
 	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
 	host_ctxt->__hyp_running_vcpu = vcpu;
@@ -249,14 +288,17 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	__timer_disable_traps(vcpu);
 	__hyp_vgic_save_state(vcpu);
 
-	__deactivate_traps(vcpu);
+	if (run->flags & KVM_ARM64_RUN_FP_ENABLED) {
+		__fpsimd_save_fpexc32(vcpu);
+		if (run->protected)
+			__fpsimd_save_state(&vcpu->arch.ctxt.fp_regs);
+	}
+
+	__deactivate_traps(vcpu, run);
 	__load_host_stage2();
 
 	__sysreg_restore_state_nvhe(host_ctxt);
 
-	if (run->flags & KVM_ARM64_RUN_FP_ENABLED)
-		__fpsimd_save_fpexc32(vcpu);
-
 	/*
 	 * This must come after restoring the host sysregs, since a non-VHE
 	 * system may enable SPE here and make use of the TTBRs.
@@ -284,15 +326,17 @@ void __noreturn hyp_panic(void)
 	u64 elr = read_sysreg_el2(SYS_ELR);
 	u64 par = read_sysreg_par();
 	bool restore_host = true;
+	struct kvm_vcpu_arch_run *run;
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_vcpu *vcpu;
 
+	run = __this_cpu_read(kvm_hyp_vcpu_run);
 	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
 	vcpu = host_ctxt->__hyp_running_vcpu;
 
 	if (vcpu) {
 		__timer_disable_traps(vcpu);
-		__deactivate_traps(vcpu);
+		__deactivate_traps(vcpu, run);
 		__load_host_stage2();
 		__sysreg_restore_state_nvhe(host_ctxt);
 	}
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index ff3ce150d636..c1279d65d287 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -31,6 +31,7 @@ const char __hyp_panic_string[] = "HYP panic:\nPS:%08llx PC:%016llx ESR:%08llx\n
 /* VHE specific context */
 DEFINE_PER_CPU(struct kvm_host_data, kvm_host_data);
 DEFINE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
+DEFINE_PER_CPU(struct kvm_vcpu *, kvm_protected_vcpu_fpsimd);
 DEFINE_PER_CPU(unsigned long, kvm_hyp_vector);
 
 static void __activate_traps(struct kvm_vcpu *vcpu)
-- 
2.30.1.766.gb4fecdf3b7-goog


* Re: [PATCH 04/10] KVM: arm64: Support smp_processor_id() in nVHE hyp
From: Quentin Perret @ 2021-03-11 10:35 UTC (permalink / raw)
  To: Andrew Scull; +Cc: kernel-team, maz, Dave.Martin, catalin.marinas, will, kvmarm

On Thursday 04 Mar 2021 at 11:54:47 (+0000), 'Andrew Scull' via kernel-team wrote:
> smp_processor_id() works off of the cpu_number per-cpu variable. Create
> an nVHE hyp version of cpu_number and initialize it to the same value as
> the host when creating the hyp per-cpu regions.
> 
> Signed-off-by: Andrew Scull <ascull@google.com>
> ---
>  arch/arm64/kvm/arm.c              | 2 ++
>  arch/arm64/kvm/hyp/nvhe/hyp-smp.c | 2 ++
>  2 files changed, 4 insertions(+)
> 
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 26ccc369cf11..e3edea8379f3 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -54,6 +54,7 @@ DECLARE_KVM_HYP_PER_CPU(unsigned long, kvm_hyp_vector);
>  static DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
>  unsigned long kvm_arm_hyp_percpu_base[NR_CPUS];
>  DECLARE_KVM_NVHE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
> +DECLARE_KVM_NVHE_PER_CPU(int, cpu_number);
>  
>  /* The VMID used in the VTTBR */
>  static atomic64_t kvm_vmid_gen = ATOMIC64_INIT(1);
> @@ -1740,6 +1741,7 @@ static int init_hyp_mode(void)
>  		page_addr = page_address(page);
>  		memcpy(page_addr, CHOOSE_NVHE_SYM(__per_cpu_start), nvhe_percpu_size());
>  		kvm_arm_hyp_percpu_base[cpu] = (unsigned long)page_addr;
> +		*per_cpu_ptr_nvhe_sym(cpu_number, cpu) = cpu;
>  	}
>  
>  	/*
> diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-smp.c b/arch/arm64/kvm/hyp/nvhe/hyp-smp.c
> index 879559057dee..86f952b1de18 100644
> --- a/arch/arm64/kvm/hyp/nvhe/hyp-smp.c
> +++ b/arch/arm64/kvm/hyp/nvhe/hyp-smp.c
> @@ -8,6 +8,8 @@
>  #include <asm/kvm_hyp.h>
>  #include <asm/kvm_mmu.h>
>  
> +DEFINE_PER_CPU_READ_MOSTLY(int, cpu_number);

Is smp_processor_id() going to work at EL2 with CONFIG_DEBUG_PREEMPT=y?
If not, then maybe we should have our own hyp_smp_processor_id() macro?
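
Something along these lines, perhaps (strawman, completely untested;
cpu_number being the hyp copy this patch introduces):

	/* Hyp-only helper; hyp code never migrates between CPUs. */
	#define hyp_smp_processor_id()	__this_cpu_read(cpu_number)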

Thanks,
Quentin

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 05/10] KVM: arm64: Track where vcpu FP state was last loaded
  2021-03-04 11:54 ` [PATCH 05/10] KVM: arm64: Track where vcpu FP state was last loaded Andrew Scull
@ 2021-03-11 10:37   ` Quentin Perret
  2021-03-11 10:40     ` Quentin Perret
  0 siblings, 1 reply; 16+ messages in thread
From: Quentin Perret @ 2021-03-11 10:37 UTC (permalink / raw)
  To: Andrew Scull; +Cc: kernel-team, maz, Dave.Martin, catalin.marinas, will, kvmarm

On Thursday 04 Mar 2021 at 11:54:48 (+0000), 'Andrew Scull' via kernel-team wrote:
> Keep track of the cpu that a vcpu's FP state was last loaded onto. This
> information is needed in order to tell whether a vcpu's latest FP state
> is already loaded on a cpu to avoid unnecessary reloading.
> 
> The method follows the pattern used by thread_struct whereby an
> fpsimd_cpu field is added and updated when the state is loaded.
> 
> Signed-off-by: Andrew Scull <ascull@google.com>
> Cc: Dave Martin <Dave.Martin@arm.com>
> ---
>  arch/arm64/include/asm/kvm_host.h       | 1 +
>  arch/arm64/kvm/arm.c                    | 2 ++
>  arch/arm64/kvm/hyp/include/hyp/switch.h | 2 ++
>  3 files changed, 5 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 8a559fa2f237..a01194371ae5 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -306,6 +306,7 @@ struct kvm_vcpu_arch {
>  	struct kvm_guest_debug_arch vcpu_debug_state;
>  	struct kvm_guest_debug_arch external_debug_state;
>  
> +	int fpsimd_cpu;
>  	struct user_fpsimd_state *host_fpsimd_state;	/* hyp VA */
>  
>  	struct {
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index e3edea8379f3..87141c8d63e6 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -313,6 +313,8 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
>  
>  	vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
>  
> +	vcpu->arch.fpsimd_cpu = NR_CPUS;

Is this supposed to be an invalid CPU number? If so, then NR_CPUS + 1 ?

Thanks,
Quentin

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 05/10] KVM: arm64: Track where vcpu FP state was last loaded
  2021-03-11 10:37   ` Quentin Perret
@ 2021-03-11 10:40     ` Quentin Perret
  0 siblings, 0 replies; 16+ messages in thread
From: Quentin Perret @ 2021-03-11 10:40 UTC (permalink / raw)
  To: Andrew Scull; +Cc: kernel-team, maz, Dave.Martin, catalin.marinas, will, kvmarm

On Thursday 11 Mar 2021 at 10:37:28 (+0000), Quentin Perret wrote:
> On Thursday 04 Mar 2021 at 11:54:48 (+0000), 'Andrew Scull' via kernel-team wrote:
> > Keep track of the cpu that a vcpu's FP state was last loaded onto. This
> > information is needed in order to tell whether a vcpu's latest FP state
> > is already loaded on a cpu to avoid unnecessary reloading.
> > 
> > The method follows the pattern used by thread_struct whereby an
> > fpsimd_cpu field is added and updated when the state is loaded.
> > 
> > Signed-off-by: Andrew Scull <ascull@google.com>
> > Cc: Dave Martin <Dave.Martin@arm.com>
> > ---
> >  arch/arm64/include/asm/kvm_host.h       | 1 +
> >  arch/arm64/kvm/arm.c                    | 2 ++
> >  arch/arm64/kvm/hyp/include/hyp/switch.h | 2 ++
> >  3 files changed, 5 insertions(+)
> > 
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index 8a559fa2f237..a01194371ae5 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -306,6 +306,7 @@ struct kvm_vcpu_arch {
> >  	struct kvm_guest_debug_arch vcpu_debug_state;
> >  	struct kvm_guest_debug_arch external_debug_state;
> >  
> > +	int fpsimd_cpu;
> >  	struct user_fpsimd_state *host_fpsimd_state;	/* hyp VA */
> >  
> >  	struct {
> > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > index e3edea8379f3..87141c8d63e6 100644
> > --- a/arch/arm64/kvm/arm.c
> > +++ b/arch/arm64/kvm/arm.c
> > @@ -313,6 +313,8 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
> >  
> >  	vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
> >  
> > +	vcpu->arch.fpsimd_cpu = NR_CPUS;
> 
> Is this supposed to be an invalid CPU number? If so, then NR_CPUS + 1 ?

Obviously not, forget me: valid CPU ids only run from 0 to NR_CPUS - 1,
so NR_CPUS can never match smp_processor_id().
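
For the record, the thread_struct pattern the commit message refers to
treats the state as live only when both sides agree, roughly like this
(sketch of the idiom only, not the actual patch; the caller runs with
preemption disabled):

	static bool vcpu_fpsimd_live_on_this_cpu(struct kvm_vcpu *vcpu)
	{
		/* Same CPU, and no other FP state loaded here since. */
		return vcpu->arch.fpsimd_cpu == smp_processor_id() &&
		       __this_cpu_read(fpsimd_last_state.st) ==
				&vcpu->arch.ctxt.fp_regs;
	}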

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 04/10] KVM: arm64: Support smp_processor_id() in nVHE hyp
  2021-03-11 10:35   ` Quentin Perret
@ 2021-03-12 11:20     ` Andrew Scull
  2021-03-12 11:27       ` Andrew Scull
  0 siblings, 1 reply; 16+ messages in thread
From: Andrew Scull @ 2021-03-12 11:20 UTC (permalink / raw)
  To: Quentin Perret
  Cc: kernel-team, Marc Zyngier, Dave Martin, Catalin Marinas,
	Will Deacon, kvmarm

On Thu, 11 Mar 2021 at 10:35, Quentin Perret <qperret@google.com> wrote:
>
> On Thursday 04 Mar 2021 at 11:54:47 (+0000), 'Andrew Scull' via kernel-team wrote:
> > smp_processor_id() works off of the cpu_number per-cpu variable. Create
> > an nVHE hyp version of cpu_number and initialize it to the same value as
> > the host when creating the hyp per-cpu regions.
> >
> > Signed-off-by: Andrew Scull <ascull@google.com>
> > ---
> >  arch/arm64/kvm/arm.c              | 2 ++
> >  arch/arm64/kvm/hyp/nvhe/hyp-smp.c | 2 ++
> >  2 files changed, 4 insertions(+)
> >
> > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > index 26ccc369cf11..e3edea8379f3 100644
> > --- a/arch/arm64/kvm/arm.c
> > +++ b/arch/arm64/kvm/arm.c
> > @@ -54,6 +54,7 @@ DECLARE_KVM_HYP_PER_CPU(unsigned long, kvm_hyp_vector);
> >  static DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
> >  unsigned long kvm_arm_hyp_percpu_base[NR_CPUS];
> >  DECLARE_KVM_NVHE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
> > +DECLARE_KVM_NVHE_PER_CPU(int, cpu_number);
> >
> >  /* The VMID used in the VTTBR */
> >  static atomic64_t kvm_vmid_gen = ATOMIC64_INIT(1);
> > @@ -1740,6 +1741,7 @@ static int init_hyp_mode(void)
> >               page_addr = page_address(page);
> >               memcpy(page_addr, CHOOSE_NVHE_SYM(__per_cpu_start), nvhe_percpu_size());
> >               kvm_arm_hyp_percpu_base[cpu] = (unsigned long)page_addr;
> > +             *per_cpu_ptr_nvhe_sym(cpu_number, cpu) = cpu;
> >       }
> >
> >       /*
> > diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-smp.c b/arch/arm64/kvm/hyp/nvhe/hyp-smp.c
> > index 879559057dee..86f952b1de18 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/hyp-smp.c
> > +++ b/arch/arm64/kvm/hyp/nvhe/hyp-smp.c
> > @@ -8,6 +8,8 @@
> >  #include <asm/kvm_hyp.h>
> >  #include <asm/kvm_mmu.h>
> >
> > +DEFINE_PER_CPU_READ_MOSTLY(int, cpu_number);
>
> Is smp_processor_id() going to work at EL2 with CONFIG_DEBUG_PREEMPT=y?
> If not, then maybe we should have our own hyp_smp_processor_id() macro?

It's not, preempt_count() won't work, at a minimum. I got far too
drawn into the other branch of that #ifdef.

We only use __smp_processor_id() in hyp, but that might not play too
nicely with VHE and forgetting the leading underscores will just lead
to nVHE issues that might not be caught in the build.

So you might be right that this is a case where we need to break from
standard APIs. And we can define `raw_smp_processor_id()` to something
that will give a compile time error when used in hyp to try and
prevent accidental misuse.
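
E.g. something like this, where the BUILD_BUG() only fires if the
macro actually gets expanded (strawman, untested):

	/* Hyp code must use its own helper instead. */
	#ifdef __KVM_NVHE_HYPERVISOR__
	#undef raw_smp_processor_id
	#define raw_smp_processor_id()	({ BUILD_BUG(); 0; })
	#endif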

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 04/10] KVM: arm64: Support smp_processor_id() in nVHE hyp
  2021-03-12 11:20     ` Andrew Scull
@ 2021-03-12 11:27       ` Andrew Scull
  0 siblings, 0 replies; 16+ messages in thread
From: Andrew Scull @ 2021-03-12 11:27 UTC (permalink / raw)
  To: Quentin Perret
  Cc: kernel-team, Marc Zyngier, Dave Martin, Catalin Marinas,
	Will Deacon, kvmarm

On Fri, 12 Mar 2021 at 11:20, Andrew Scull <ascull@google.com> wrote:
>
> On Thu, 11 Mar 2021 at 10:35, Quentin Perret <qperret@google.com> wrote:
> >
> > On Thursday 04 Mar 2021 at 11:54:47 (+0000), 'Andrew Scull' via kernel-team wrote:
> > > smp_processor_id() works off of the cpu_number per-cpu variable. Create
> > > an nVHE hyp version of cpu_number and initialize it to the same value as
> > > the host when creating the hyp per-cpu regions.
> > >
> > > Signed-off-by: Andrew Scull <ascull@google.com>
> > > ---
> > >  arch/arm64/kvm/arm.c              | 2 ++
> > >  arch/arm64/kvm/hyp/nvhe/hyp-smp.c | 2 ++
> > >  2 files changed, 4 insertions(+)
> > >
> > > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > > index 26ccc369cf11..e3edea8379f3 100644
> > > --- a/arch/arm64/kvm/arm.c
> > > +++ b/arch/arm64/kvm/arm.c
> > > @@ -54,6 +54,7 @@ DECLARE_KVM_HYP_PER_CPU(unsigned long, kvm_hyp_vector);
> > >  static DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
> > >  unsigned long kvm_arm_hyp_percpu_base[NR_CPUS];
> > >  DECLARE_KVM_NVHE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
> > > +DECLARE_KVM_NVHE_PER_CPU(int, cpu_number);
> > >
> > >  /* The VMID used in the VTTBR */
> > >  static atomic64_t kvm_vmid_gen = ATOMIC64_INIT(1);
> > > @@ -1740,6 +1741,7 @@ static int init_hyp_mode(void)
> > >               page_addr = page_address(page);
> > >               memcpy(page_addr, CHOOSE_NVHE_SYM(__per_cpu_start), nvhe_percpu_size());
> > >               kvm_arm_hyp_percpu_base[cpu] = (unsigned long)page_addr;
> > > +             *per_cpu_ptr_nvhe_sym(cpu_number, cpu) = cpu;
> > >       }
> > >
> > >       /*
> > > diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-smp.c b/arch/arm64/kvm/hyp/nvhe/hyp-smp.c
> > > index 879559057dee..86f952b1de18 100644
> > > --- a/arch/arm64/kvm/hyp/nvhe/hyp-smp.c
> > > +++ b/arch/arm64/kvm/hyp/nvhe/hyp-smp.c
> > > @@ -8,6 +8,8 @@
> > >  #include <asm/kvm_hyp.h>
> > >  #include <asm/kvm_mmu.h>
> > >
> > > +DEFINE_PER_CPU_READ_MOSTLY(int, cpu_number);
> >
> > Is smp_processor_id() going to work at EL2 with CONFIG_DEBUG_PREEMPT=y?
> > If not, then maybe we should have our own hyp_smp_processor_id() macro?
>
> It's not, preempt_count() won't work, at a minimum. I got far too
> drawn into the other branch of that #ifdef.
>
> We only use __smp_processor_id() in hyp, but that might not play too
> nicely with VHE and forgetting the leading underscores will just lead
> to nVHE issues that might not be caught in the build.
>
> So you might be right that this is a case where we need to break from
> standard APIs. And we can define `raw_smp_processor_id()` to something
> that will give a compile time error when used in hyp to try and
> prevent accidental misuse.

Having just read the build error again:

    :236: undefined reference to `__kvm_nvhe_debug_smp_processor_id'

Another option could be to define `debug_smp_processor_id()` for nvhe,
which is easy since preemption is always disabled, IIRC.
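
Maybe something like this in one of the hyp source files (sketch,
assuming preemption really is always disabled at EL2):

	/* lib/smp_processor_id.c isn't linked into hyp, so provide the
	 * debug variant directly; without preemption there is nothing
	 * to warn about.
	 */
	unsigned int debug_smp_processor_id(void)
	{
		return __smp_processor_id();
	}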

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2021-03-12 11:27 UTC | newest]

Thread overview: 16+ messages
2021-03-04 11:54 [PATCH 00/10] FPSIMD restore bypass and protecting Andrew Scull
2021-03-04 11:54 ` [PATCH 01/10] KVM: arm64: Leave KVM_ARM64_DEBUG_DIRTY updates to the host Andrew Scull
2021-03-04 11:54 ` [PATCH 02/10] KVM: arm64: Synchronize vcpu FPSIMD in " Andrew Scull
2021-03-04 11:54 ` [PATCH 03/10] KVM: arm64: Unmap host task thread flags from hyp Andrew Scull
2021-03-04 11:54 ` [PATCH 04/10] KVM: arm64: Support smp_processor_id() in nVHE hyp Andrew Scull
2021-03-11 10:35   ` Quentin Perret
2021-03-12 11:20     ` Andrew Scull
2021-03-12 11:27       ` Andrew Scull
2021-03-04 11:54 ` [PATCH 05/10] KVM: arm64: Track where vcpu FP state was last loaded Andrew Scull
2021-03-11 10:37   ` Quentin Perret
2021-03-11 10:40     ` Quentin Perret
2021-03-04 11:54 ` [PATCH 06/10] KVM: arm64: Avoid needlessly reloading guest FP state Andrew Scull
2021-03-04 11:54 ` [PATCH 07/10] KVM: arm64: Separate host and hyp vcpu FP flags Andrew Scull
2021-03-04 11:54 ` [PATCH 08/10] KVM: arm64: Pass the arch run struct explicitly Andrew Scull
2021-03-04 11:54 ` [PATCH 09/10] KVM: arm64: Use hyp-private run struct in protected mode Andrew Scull
2021-03-04 11:54 ` [PATCH 10/10] RFC: KVM: arm64: Manage FPSIMD state at EL2 for protected vCPUs Andrew Scull
