* [PATCH 0/2] Manage vcpu flags from the host
@ 2020-07-10  9:57 Andrew Scull
  2020-07-10  9:57 ` [PATCH 1/2] KVM: arm64: Leave KVM_ARM64_DEBUG_DIRTY updates to " Andrew Scull
  2020-07-10  9:57 ` [PATCH 2/2] KVM: arm64: Leave vcpu FPSIMD synchronization in host Andrew Scull
  0 siblings, 2 replies; 6+ messages in thread
From: Andrew Scull @ 2020-07-10  9:57 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, kernel-team, dave.martin

This is a cleanup; no functional change is intended.

The aim is to keep management of the flags in the host and out of hyp. I
find this makes it easier to understand how the flags are used.

This follows a previous thread with Dave in which it was explained
that the intent is to synchronize the flags at clearly defined
places: <20200707145713.287710-1-ascull@google.com>

Andrew Scull (2):
  KVM: arm64: Leave KVM_ARM64_DEBUG_DIRTY updates to the host
  KVM: arm64: Leave vcpu FPSIMD synchronization in host

 arch/arm64/include/asm/kvm_host.h         |  7 ++-
 arch/arm64/kvm/arm.c                      |  4 +-
 arch/arm64/kvm/debug.c                    |  2 +
 arch/arm64/kvm/fpsimd.c                   | 57 ++++++++++++++++-------
 arch/arm64/kvm/hyp/include/hyp/debug-sr.h |  2 -
 arch/arm64/kvm/hyp/include/hyp/switch.h   | 19 --------
 arch/arm64/kvm/hyp/nvhe/switch.c          |  3 +-
 arch/arm64/kvm/hyp/vhe/switch.c           |  3 +-
 8 files changed, 51 insertions(+), 46 deletions(-)

-- 
2.27.0.383.g050319c2ae-goog

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


* [PATCH 1/2] KVM: arm64: Leave KVM_ARM64_DEBUG_DIRTY updates to the host
  2020-07-10  9:57 [PATCH 0/2] Manage vcpu flags from the host Andrew Scull
@ 2020-07-10  9:57 ` Andrew Scull
  2020-07-10  9:57 ` [PATCH 2/2] KVM: arm64: Leave vcpu FPSIMD synchronization in host Andrew Scull
  1 sibling, 0 replies; 6+ messages in thread
From: Andrew Scull @ 2020-07-10  9:57 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, kernel-team, dave.martin

Move the clearing of KVM_ARM64_DEBUG_DIRTY from being one of the last
things hyp does before exiting to the host to being one of the first
things the host does after hyp exits.

This means the host always manages the state of the bit and hyp simply
respects that in the context switch.

No functional change.
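The ownership split described above can be illustrated with a toy model (the types and helper names below are invented for illustration and are not the kernel's actual API): the host alone sets and clears the bit, while hyp only ever reads it during the context switch.

```c
#include <assert.h>
#include <stdbool.h>

#define KVM_ARM64_DEBUG_DIRTY	(1u << 0)

struct toy_vcpu {
	unsigned int flags;
};

/* Host side: sets the bit when the vcpu starts using debug. */
static void host_setup_debug(struct toy_vcpu *vcpu, bool using_debug)
{
	if (using_debug)
		vcpu->flags |= KVM_ARM64_DEBUG_DIRTY;
}

/* Host side: after hyp exits, the host (not hyp) clears the bit. */
static void host_clear_debug(struct toy_vcpu *vcpu)
{
	vcpu->flags &= ~KVM_ARM64_DEBUG_DIRTY;
}

/* Hyp side: only ever reads the flag to decide whether to switch
 * the debug registers; it never modifies it. */
static bool hyp_should_switch_debug(const struct toy_vcpu *vcpu)
{
	return vcpu->flags & KVM_ARM64_DEBUG_DIRTY;
}
```

With this split, hyp never writes vcpu->arch.flags for the debug case, which is what lets the patch drop the clearing from __debug_switch_to_host_common().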

Signed-off-by: Andrew Scull <ascull@google.com>
---
 arch/arm64/include/asm/kvm_host.h         | 2 +-
 arch/arm64/kvm/debug.c                    | 2 ++
 arch/arm64/kvm/hyp/include/hyp/debug-sr.h | 2 --
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index e1a32c0707bb..b06f24b5f443 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -404,7 +404,7 @@ struct kvm_vcpu_arch {
 })
 
 /* vcpu_arch flags field values: */
-#define KVM_ARM64_DEBUG_DIRTY		(1 << 0)
+#define KVM_ARM64_DEBUG_DIRTY		(1 << 0) /* vcpu is using debug */
 #define KVM_ARM64_FP_ENABLED		(1 << 1) /* guest FP regs loaded */
 #define KVM_ARM64_FP_HOST		(1 << 2) /* host FP regs loaded */
 #define KVM_ARM64_HOST_SVE_IN_USE	(1 << 3) /* backup for host TIF_SVE */
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index 7a7e425616b5..e9932618a362 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -209,6 +209,8 @@ void kvm_arm_clear_debug(struct kvm_vcpu *vcpu)
 {
 	trace_kvm_arm_clear_debug(vcpu->guest_debug);
 
+	vcpu->arch.flags &= ~KVM_ARM64_DEBUG_DIRTY;
+
 	if (vcpu->guest_debug) {
 		restore_guest_debug_regs(vcpu);
 
diff --git a/arch/arm64/kvm/hyp/include/hyp/debug-sr.h b/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
index 0297dc63988c..50ca5d048017 100644
--- a/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
+++ b/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
@@ -161,8 +161,6 @@ static inline void __debug_switch_to_host_common(struct kvm_vcpu *vcpu)
 
 	__debug_save_state(guest_dbg, guest_ctxt);
 	__debug_restore_state(host_dbg, host_ctxt);
-
-	vcpu->arch.flags &= ~KVM_ARM64_DEBUG_DIRTY;
 }
 
 #endif /* __ARM64_KVM_HYP_DEBUG_SR_H__ */
-- 
2.27.0.383.g050319c2ae-goog



* [PATCH 2/2] KVM: arm64: Leave vcpu FPSIMD synchronization in host
  2020-07-10  9:57 [PATCH 0/2] Manage vcpu flags from the host Andrew Scull
  2020-07-10  9:57 ` [PATCH 1/2] KVM: arm64: Leave KVM_ARM64_DEBUG_DIRTY updates to " Andrew Scull
@ 2020-07-10  9:57 ` Andrew Scull
  2020-07-13 16:04   ` Dave Martin
  1 sibling, 1 reply; 6+ messages in thread
From: Andrew Scull @ 2020-07-10  9:57 UTC (permalink / raw)
  To: kvmarm; +Cc: maz, kernel-team, dave.martin

The task state can be checked by the host and the vcpu flags updated
before calling into hyp. This more neatly separates the concerns and
removes the need to map the task flags to EL2.

Hyp acts on the state provided to it by the host and updates it when
switching to the vcpu state.

No functional change.
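A minimal model of the handshake this patch establishes (toy flags and helpers, assumed purely for illustration): the host prepares the FP flags before calling into hyp, and hyp only consumes them.

```c
#include <assert.h>
#include <stdbool.h>

#define FP_ENABLED	(1u << 1)	/* guest FP regs loaded */
#define FP_HOST		(1u << 2)	/* host FP regs loaded */

struct toy_vcpu {
	unsigned int flags;
};

/* Host, on vcpu_load: assume any loaded FP state is the host task's. */
static void host_load_fp(struct toy_vcpu *vcpu)
{
	vcpu->flags &= ~(FP_ENABLED | FP_HOST);
	vcpu->flags |= FP_HOST;
}

/* Host, with IRQs off just before entering hyp: if the loaded state
 * turned out to be transient (TIF_FOREIGN_FPSTATE), neither flag can
 * be trusted any more. */
static void host_sync_fp_before_run(struct toy_vcpu *vcpu, bool foreign_fpstate)
{
	if (foreign_fpstate)
		vcpu->flags &= ~(FP_ENABLED | FP_HOST);
}

/* Hyp: traps guest FP accesses unless the host said the guest's
 * state is already loaded; it no longer inspects the task flags. */
static bool hyp_trap_fp(const struct toy_vcpu *vcpu)
{
	return !(vcpu->flags & FP_ENABLED);
}
```

In the real series FP_ENABLED is only set by the FP access trap taken to hyp, so after vcpu_load the guest's FP accesses still trap until that first trap fires.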

Signed-off-by: Andrew Scull <ascull@google.com>
---
 arch/arm64/include/asm/kvm_host.h       |  5 +--
 arch/arm64/kvm/arm.c                    |  4 +-
 arch/arm64/kvm/fpsimd.c                 | 57 ++++++++++++++++++-------
 arch/arm64/kvm/hyp/include/hyp/switch.h | 19 ---------
 arch/arm64/kvm/hyp/nvhe/switch.c        |  3 +-
 arch/arm64/kvm/hyp/vhe/switch.c         |  3 +-
 6 files changed, 48 insertions(+), 43 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index b06f24b5f443..ca1621eeb9d9 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -24,7 +24,6 @@
 #include <asm/fpsimd.h>
 #include <asm/kvm.h>
 #include <asm/kvm_asm.h>
-#include <asm/thread_info.h>
 
 #define __KVM_HAVE_ARCH_INTC_INITIALIZED
 
@@ -320,7 +319,6 @@ struct kvm_vcpu_arch {
 	struct kvm_guest_debug_arch vcpu_debug_state;
 	struct kvm_guest_debug_arch external_debug_state;
 
-	struct thread_info *host_thread_info;	/* hyp VA */
 	struct user_fpsimd_state *host_fpsimd_state;	/* hyp VA */
 
 	struct {
@@ -616,7 +614,8 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
 /* Guest/host FPSIMD coordination helpers */
 int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu);
-void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu);
+void kvm_arch_vcpu_enter_ctxsync_fp(struct kvm_vcpu *vcpu);
+void kvm_arch_vcpu_exit_ctxsync_fp(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu);
 
 static inline bool kvm_pmu_counter_deferred(struct perf_event_attr *attr)
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 98f05bdac3c1..c7a711ca840e 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -682,6 +682,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 
 		local_irq_disable();
 
+		kvm_arch_vcpu_enter_ctxsync_fp(vcpu);
+
 		kvm_vgic_flush_hwstate(vcpu);
 
 		/*
@@ -769,7 +771,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 		if (static_branch_unlikely(&userspace_irqchip_in_use))
 			kvm_timer_sync_user(vcpu);
 
-		kvm_arch_vcpu_ctxsync_fp(vcpu);
+		kvm_arch_vcpu_exit_ctxsync_fp(vcpu);
 
 		/*
 		 * We may have taken a host interrupt in HYP mode (ie
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 3e081d556e81..aec91f43df62 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -27,22 +27,13 @@ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu)
 {
 	int ret;
 
-	struct thread_info *ti = &current->thread_info;
 	struct user_fpsimd_state *fpsimd = &current->thread.uw.fpsimd_state;
 
-	/*
-	 * Make sure the host task thread flags and fpsimd state are
-	 * visible to hyp:
-	 */
-	ret = create_hyp_mappings(ti, ti + 1, PAGE_HYP);
-	if (ret)
-		goto error;
-
+	/* Make sure the host task fpsimd state is visible to hyp: */
 	ret = create_hyp_mappings(fpsimd, fpsimd + 1, PAGE_HYP);
 	if (ret)
 		goto error;
 
-	vcpu->arch.host_thread_info = kern_hyp_va(ti);
 	vcpu->arch.host_fpsimd_state = kern_hyp_va(fpsimd);
 error:
 	return ret;
@@ -52,7 +43,7 @@ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu)
  * Prepare vcpu for saving the host's FPSIMD state and loading the guest's.
  * The actual loading is done by the FPSIMD access trap taken to hyp.
  *
- * Here, we just set the correct metadata to indicate that the FPSIMD
+ * Here, we just set the correct metadata to indicate whether the FPSIMD
  * state in the cpu regs (if any) belongs to current on the host.
  *
  * TIF_SVE is backed up here, since it may get clobbered with guest state.
@@ -63,15 +54,46 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
 	BUG_ON(!current->mm);
 
 	vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED |
+			      KVM_ARM64_FP_HOST |
 			      KVM_ARM64_HOST_SVE_IN_USE |
 			      KVM_ARM64_HOST_SVE_ENABLED);
+
+	if (!system_supports_fpsimd())
+		return;
+
+	/*
+	 * Having just come from the user task, if any FP state is loaded it
+	 * will be that of the task. Make a note of this but, just before
+	 * entering the vcpu, it will be double checked that the loaded FP
+	 * state isn't transient because things could change between now and
+	 * then.
+	 */
 	vcpu->arch.flags |= KVM_ARM64_FP_HOST;
 
-	if (test_thread_flag(TIF_SVE))
-		vcpu->arch.flags |= KVM_ARM64_HOST_SVE_IN_USE;
+	if (system_supports_sve()) {
+		if (test_thread_flag(TIF_SVE))
+			vcpu->arch.flags |= KVM_ARM64_HOST_SVE_IN_USE;
 
-	if (read_sysreg(cpacr_el1) & CPACR_EL1_ZEN_EL0EN)
-		vcpu->arch.flags |= KVM_ARM64_HOST_SVE_ENABLED;
+		if (read_sysreg(cpacr_el1) & CPACR_EL1_ZEN_EL0EN)
+			vcpu->arch.flags |= KVM_ARM64_HOST_SVE_ENABLED;
+	}
+}
+
+void kvm_arch_vcpu_enter_ctxsync_fp(struct kvm_vcpu *vcpu)
+{
+	WARN_ON_ONCE(!irqs_disabled());
+
+	if (!system_supports_fpsimd())
+		return;
+
+	/*
+	 * If the CPU's FP state is transient, there is no need to save the
+	 * current state. Without further information, it must also be assumed
+	 * that the vcpu's state is not loaded.
+	 */
+	if (test_thread_flag(TIF_FOREIGN_FPSTATE))
+		vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED |
+				      KVM_ARM64_FP_HOST);
 }
 
 /*
@@ -80,7 +102,7 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
  * so that they will be written back if the kernel clobbers them due to
  * kernel-mode NEON before re-entry into the guest.
  */
-void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu)
+void kvm_arch_vcpu_exit_ctxsync_fp(struct kvm_vcpu *vcpu)
 {
 	WARN_ON_ONCE(!irqs_disabled());
 
@@ -106,6 +128,9 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
 	bool host_has_sve = system_supports_sve();
 	bool guest_has_sve = vcpu_has_sve(vcpu);
 
+	if (!system_supports_fpsimd())
+		return;
+
 	local_irq_save(flags);
 
 	if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) {
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 0511af14dc81..65cde758abad 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -25,28 +25,9 @@
 #include <asm/fpsimd.h>
 #include <asm/debug-monitors.h>
 #include <asm/processor.h>
-#include <asm/thread_info.h>
 
 extern const char __hyp_panic_string[];
 
-/* Check whether the FP regs were dirtied while in the host-side run loop: */
-static inline bool update_fp_enabled(struct kvm_vcpu *vcpu)
-{
-	/*
-	 * When the system doesn't support FP/SIMD, we cannot rely on
-	 * the _TIF_FOREIGN_FPSTATE flag. However, we always inject an
-	 * abort on the very first access to FP and thus we should never
-	 * see KVM_ARM64_FP_ENABLED. For added safety, make sure we always
-	 * trap the accesses.
-	 */
-	if (!system_supports_fpsimd() ||
-	    vcpu->arch.host_thread_info->flags & _TIF_FOREIGN_FPSTATE)
-		vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED |
-				      KVM_ARM64_FP_HOST);
-
-	return !!(vcpu->arch.flags & KVM_ARM64_FP_ENABLED);
-}
-
 /* Save the 32-bit only FPSIMD system register state */
 static inline void __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu)
 {
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 341be2f2f312..3b7306003917 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -25,7 +25,6 @@
 #include <asm/fpsimd.h>
 #include <asm/debug-monitors.h>
 #include <asm/processor.h>
-#include <asm/thread_info.h>
 
 static void __activate_traps(struct kvm_vcpu *vcpu)
 {
@@ -36,7 +35,7 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
 
 	val = CPTR_EL2_DEFAULT;
 	val |= CPTR_EL2_TTA | CPTR_EL2_TZ | CPTR_EL2_TAM;
-	if (!update_fp_enabled(vcpu)) {
+	if (!(vcpu->arch.flags & KVM_ARM64_FP_ENABLED)) {
 		val |= CPTR_EL2_TFP;
 		__activate_traps_fpsimd32(vcpu);
 	}
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index c52d714e0d75..0c08c9123ce5 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -24,7 +24,6 @@
 #include <asm/fpsimd.h>
 #include <asm/debug-monitors.h>
 #include <asm/processor.h>
-#include <asm/thread_info.h>
 
 const char __hyp_panic_string[] = "HYP panic:\nPS:%08llx PC:%016llx ESR:%08llx\nFAR:%016llx HPFAR:%016llx PAR:%016llx\nVCPU:%p\n";
 
@@ -49,7 +48,7 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
 
 	val |= CPTR_EL2_TAM;
 
-	if (update_fp_enabled(vcpu)) {
+	if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) {
 		if (vcpu_has_sve(vcpu))
 			val |= CPACR_EL1_ZEN;
 	} else {
-- 
2.27.0.383.g050319c2ae-goog



* Re: [PATCH 2/2] KVM: arm64: Leave vcpu FPSIMD synchronization in host
  2020-07-10  9:57 ` [PATCH 2/2] KVM: arm64: Leave vcpu FPSIMD synchronization in host Andrew Scull
@ 2020-07-13 16:04   ` Dave Martin
  2020-07-13 20:42     ` Andrew Scull
  0 siblings, 1 reply; 6+ messages in thread
From: Dave Martin @ 2020-07-13 16:04 UTC (permalink / raw)
  To: Andrew Scull; +Cc: maz, kernel-team, kvmarm

On Fri, Jul 10, 2020 at 10:57:54AM +0100, Andrew Scull wrote:
> The task state can be checked by the host and the vcpu flags updated
> before calling into hyp. This more neatly separates the concerns and
> removes the need to map the task flags to EL2.
> 
> Hyp acts on the state provided to it by the host and updates it when
> switching to the vcpu state.

Can this patch be split up?  We have a few overlapping changes here.

i.e., renaming and refactoring of hooks; moving some code around; and a
couple of other changes that are not directly related (noted below).

Overall this looks like a decent cleanup however.  It was always a bit
nasty to have to map the thread flags into Hyp.



Side question: currently we do fpsimd_save_and_flush_cpu_state() in
kvm_arch_vcpu_put_fp().  Can we remove the flush so that the vcpu state
lingers in the CPU regs and can be reclaimed when switching back to the
KVM thread?

This could remove some overhead when the KVM run loop is preempted by a
kernel thread and subsequently resumed without passing through userspace.

We would need to flush this state when anything else tries to change the
vcpu FP regs, which is one reason I skipped this previously: it would
require a bit of refactoring of fpsimd_flush_task_state() so that a non-
task context can also be flushed.

(This isn't directly related to this series.)



Additional minor comments below.

> 
> No functional change.
> 
> Signed-off-by: Andrew Scull <ascull@google.com>
> ---
>  arch/arm64/include/asm/kvm_host.h       |  5 +--
>  arch/arm64/kvm/arm.c                    |  4 +-
>  arch/arm64/kvm/fpsimd.c                 | 57 ++++++++++++++++++-------
>  arch/arm64/kvm/hyp/include/hyp/switch.h | 19 ---------
>  arch/arm64/kvm/hyp/nvhe/switch.c        |  3 +-
>  arch/arm64/kvm/hyp/vhe/switch.c         |  3 +-
>  6 files changed, 48 insertions(+), 43 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index b06f24b5f443..ca1621eeb9d9 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -24,7 +24,6 @@
>  #include <asm/fpsimd.h>
>  #include <asm/kvm.h>
>  #include <asm/kvm_asm.h>
> -#include <asm/thread_info.h>
>  
>  #define __KVM_HAVE_ARCH_INTC_INITIALIZED
>  
> @@ -320,7 +319,6 @@ struct kvm_vcpu_arch {
>  	struct kvm_guest_debug_arch vcpu_debug_state;
>  	struct kvm_guest_debug_arch external_debug_state;
>  
> -	struct thread_info *host_thread_info;	/* hyp VA */
>  	struct user_fpsimd_state *host_fpsimd_state;	/* hyp VA */
>  
>  	struct {
> @@ -616,7 +614,8 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
>  /* Guest/host FPSIMD coordination helpers */
>  int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
>  void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu);
> -void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu);
> +void kvm_arch_vcpu_enter_ctxsync_fp(struct kvm_vcpu *vcpu);
> +void kvm_arch_vcpu_exit_ctxsync_fp(struct kvm_vcpu *vcpu);

I find these names a bit confusing.

Maybe

	kvm_arch_vcpu_ctxsync_fp_before_guest_enter()
	kvm_arch_vcpu_ctxsync_fp_after_guest_exit()

(we could probably drop the "ctx" to make these slightly shorter).

>  void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu);
>  
>  static inline bool kvm_pmu_counter_deferred(struct perf_event_attr *attr)
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 98f05bdac3c1..c7a711ca840e 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -682,6 +682,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
>  
>  		local_irq_disable();
>  
> +		kvm_arch_vcpu_enter_ctxsync_fp(vcpu);
> +
>  		kvm_vgic_flush_hwstate(vcpu);
>  
>  		/*
> @@ -769,7 +771,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
>  		if (static_branch_unlikely(&userspace_irqchip_in_use))
>  			kvm_timer_sync_user(vcpu);
>  
> -		kvm_arch_vcpu_ctxsync_fp(vcpu);
> +		kvm_arch_vcpu_exit_ctxsync_fp(vcpu);
>  
>  		/*
>  		 * We may have taken a host interrupt in HYP mode (ie
> diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
> index 3e081d556e81..aec91f43df62 100644
> --- a/arch/arm64/kvm/fpsimd.c
> +++ b/arch/arm64/kvm/fpsimd.c
> @@ -27,22 +27,13 @@ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu)
>  {
>  	int ret;
>  
> -	struct thread_info *ti = &current->thread_info;
>  	struct user_fpsimd_state *fpsimd = &current->thread.uw.fpsimd_state;
>  
> -	/*
> -	 * Make sure the host task thread flags and fpsimd state are
> -	 * visible to hyp:
> -	 */
> -	ret = create_hyp_mappings(ti, ti + 1, PAGE_HYP);
> -	if (ret)
> -		goto error;
> -
> +	/* Make sure the host task fpsimd state is visible to hyp: */
>  	ret = create_hyp_mappings(fpsimd, fpsimd + 1, PAGE_HYP);
>  	if (ret)
>  		goto error;
>  
> -	vcpu->arch.host_thread_info = kern_hyp_va(ti);
>  	vcpu->arch.host_fpsimd_state = kern_hyp_va(fpsimd);
>  error:
>  	return ret;
> @@ -52,7 +43,7 @@ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu)
>   * Prepare vcpu for saving the host's FPSIMD state and loading the guest's.
>   * The actual loading is done by the FPSIMD access trap taken to hyp.
>   *
> - * Here, we just set the correct metadata to indicate that the FPSIMD
> + * Here, we just set the correct metadata to indicate whether the FPSIMD
>   * state in the cpu regs (if any) belongs to current on the host.
>   *
>   * TIF_SVE is backed up here, since it may get clobbered with guest state.
> @@ -63,15 +54,46 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
>  	BUG_ON(!current->mm);
>  
>  	vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED |
> +			      KVM_ARM64_FP_HOST |
>  			      KVM_ARM64_HOST_SVE_IN_USE |
>  			      KVM_ARM64_HOST_SVE_ENABLED);
> +
> +	if (!system_supports_fpsimd())
> +		return;
> +
> +	/*
> +	 * Having just come from the user task, if any FP state is loaded it
> +	 * will be that of the task. Make a note of this but, just before
> +	 * entering the vcpu, it will be double checked that the loaded FP
> +	 * state isn't transient because things could change between now and
> +	 * then.
> +	 */
>  	vcpu->arch.flags |= KVM_ARM64_FP_HOST;
>  
> -	if (test_thread_flag(TIF_SVE))
> -		vcpu->arch.flags |= KVM_ARM64_HOST_SVE_IN_USE;
> +	if (system_supports_sve()) {

This is a change and should be noted.

Looks reasonable, though.

To ensure that we aren't breaking assumptions here, double-check that we
also have system_supports_sve() in the appropriate places in the Hyp
code.   We almost certainly do, but it doesn't hurt to review it.

> +		if (test_thread_flag(TIF_SVE))
> +			vcpu->arch.flags |= KVM_ARM64_HOST_SVE_IN_USE;
>  
> -	if (read_sysreg(cpacr_el1) & CPACR_EL1_ZEN_EL0EN)
> -		vcpu->arch.flags |= KVM_ARM64_HOST_SVE_ENABLED;
> +		if (read_sysreg(cpacr_el1) & CPACR_EL1_ZEN_EL0EN)
> +			vcpu->arch.flags |= KVM_ARM64_HOST_SVE_ENABLED;
> +	}
> +}
> +
> +void kvm_arch_vcpu_enter_ctxsync_fp(struct kvm_vcpu *vcpu)
> +{
> +	WARN_ON_ONCE(!irqs_disabled());
> +
> +	if (!system_supports_fpsimd())
> +		return;
> +
> +	/*
> +	 * If the CPU's FP state is transient, there is no need to save the
> +	 * current state. Without further information, it must also be assumed
> +	 * that the vcpu's state is not loaded.
> +	 */
> +	if (test_thread_flag(TIF_FOREIGN_FPSTATE))
> +		vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED |
> +				      KVM_ARM64_FP_HOST);
>  }
>  
>  /*
> @@ -80,7 +102,7 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
>   * so that they will be written back if the kernel clobbers them due to
>   * kernel-mode NEON before re-entry into the guest.
>   */
> -void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu)
> +void kvm_arch_vcpu_exit_ctxsync_fp(struct kvm_vcpu *vcpu)
>  {
>  	WARN_ON_ONCE(!irqs_disabled());
>  
> @@ -106,6 +128,9 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
>  	bool host_has_sve = system_supports_sve();
>  	bool guest_has_sve = vcpu_has_sve(vcpu);
>  
> +	if (!system_supports_fpsimd())
> +		return;
> +

Is this a bugfix, an optimisation, or what?

[...]

Cheers
---Dave


* Re: [PATCH 2/2] KVM: arm64: Leave vcpu FPSIMD synchronization in host
  2020-07-13 16:04   ` Dave Martin
@ 2020-07-13 20:42     ` Andrew Scull
  2020-07-14 16:17       ` Dave Martin
  0 siblings, 1 reply; 6+ messages in thread
From: Andrew Scull @ 2020-07-13 20:42 UTC (permalink / raw)
  To: Dave Martin; +Cc: maz, kernel-team, kvmarm

On Mon, Jul 13, 2020 at 05:04:21PM +0100, Dave Martin wrote:
> On Fri, Jul 10, 2020 at 10:57:54AM +0100, Andrew Scull wrote:
> > The task state can be checked by the host and the vcpu flags updated
> > before calling into hyp. This more neatly separates the concerns and
> > removes the need to map the task flags to EL2.
> > 
> > Hyp acts on the state provided to it by the host and updates it when
> > switching to the vcpu state.
> 
> Can this patch be split up?  We have a few overlapping changes here.
> 
> i.e., renaming and refactoring of hooks; moving some code around; and a
> couple of other changes that are not directly related (noted below).

Indeed it can, into at least 3.

> Overall this looks like a decent cleanup however.  It was always a bit
> nasty to have to map the thread flags into Hyp.

Glad to hear it; I'll get it into better shape.

> Side question: currently we do fpsimd_save_and_flush_cpu_state() in
> kvm_arch_vcpu_put_fp().  Can we remove the flush so that the vcpu state
> lingers in the CPU regs and can be reclaimed when switching back to the
> KVM thread?
> 
> This could remove some overhead when the KVM run loop is preempted by a
> kernel thread and subsequently resumed without passing through userspace.
> 
> We would need to flush this state when anything else tries to change the
> vcpu FP regs, which is one reason I skipped this previously: it would
> require a bit of refactoring of fpsimd_flush_task_state() so that a non-
> task context can also be flushed.
> 
> (This isn't directly related to this series.)

I don't plan to address this at the moment, but I do believe there are
opportunities to reduce the need for saves and restores. If the flush
is removed, a check similar to the one done for tasks could also apply
to vCPUs: if the last FPSIMD state this CPU held was the vCPU's, and
the vCPU last ran on this CPU, then the vCPU's FPSIMD state is already
loaded.
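A rough sketch of that check, under the stated assumption that per-CPU last-state tracking exists (the names below are hypothetical, and the per-CPU record is modelled as a plain global for a single CPU):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical per-CPU record of whose FP state last occupied the
 * registers (a plain global here, standing in for a per-CPU variable). */
static const void *fpsimd_last_state;

struct toy_vcpu {
	char fp_regs[16];	/* stand-in for the vcpu FP register file */
	int last_cpu;
};

/* Record that this vcpu's FP state was loaded on this CPU. */
static void note_vcpu_fp_loaded(struct toy_vcpu *vcpu, int cpu)
{
	fpsimd_last_state = vcpu->fp_regs;
	vcpu->last_cpu = cpu;
}

/* True if this vcpu's FP state is still live in this CPU's registers,
 * so switching back to the KVM thread would need no reload. */
static bool vcpu_fp_still_loaded(const struct toy_vcpu *vcpu, int cpu)
{
	return fpsimd_last_state == vcpu->fp_regs && vcpu->last_cpu == cpu;
}
```

Anything that clobbers the registers (kernel-mode NEON, another task or vCPU being loaded) would have to invalidate the per-CPU record, which is exactly the flushing question raised above.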

> Additional minor comments below.
> 
> > 
> > No functional change.
> > 
> > Signed-off-by: Andrew Scull <ascull@google.com>
> > ---
> >  arch/arm64/include/asm/kvm_host.h       |  5 +--
> >  arch/arm64/kvm/arm.c                    |  4 +-
> >  arch/arm64/kvm/fpsimd.c                 | 57 ++++++++++++++++++-------
> >  arch/arm64/kvm/hyp/include/hyp/switch.h | 19 ---------
> >  arch/arm64/kvm/hyp/nvhe/switch.c        |  3 +-
> >  arch/arm64/kvm/hyp/vhe/switch.c         |  3 +-
> >  6 files changed, 48 insertions(+), 43 deletions(-)
> > 
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index b06f24b5f443..ca1621eeb9d9 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -24,7 +24,6 @@
> >  #include <asm/fpsimd.h>
> >  #include <asm/kvm.h>
> >  #include <asm/kvm_asm.h>
> > -#include <asm/thread_info.h>
> >  
> >  #define __KVM_HAVE_ARCH_INTC_INITIALIZED
> >  
> > @@ -320,7 +319,6 @@ struct kvm_vcpu_arch {
> >  	struct kvm_guest_debug_arch vcpu_debug_state;
> >  	struct kvm_guest_debug_arch external_debug_state;
> >  
> > -	struct thread_info *host_thread_info;	/* hyp VA */
> >  	struct user_fpsimd_state *host_fpsimd_state;	/* hyp VA */
> >  
> >  	struct {
> > @@ -616,7 +614,8 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
> >  /* Guest/host FPSIMD coordination helpers */
> >  int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
> >  void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu);
> > -void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu);
> > +void kvm_arch_vcpu_enter_ctxsync_fp(struct kvm_vcpu *vcpu);
> > +void kvm_arch_vcpu_exit_ctxsync_fp(struct kvm_vcpu *vcpu);
> 
> I find these names a bit confusing.
> 
> Maybe
> 
> 	kvm_arch_vcpu_ctxsync_fp_before_guest_enter()
> 	kvm_arch_vcpu_ctxsync_fp_after_guest_exit()
> 
> (we could probably drop the "ctx" to make these slightly shorter).

Changed to kvm_arch_vcpu_sync_fp_{before,after}_run()

> >  void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu);
> >  
> >  static inline bool kvm_pmu_counter_deferred(struct perf_event_attr *attr)
> > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > index 98f05bdac3c1..c7a711ca840e 100644
> > --- a/arch/arm64/kvm/arm.c
> > +++ b/arch/arm64/kvm/arm.c
> > @@ -682,6 +682,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
> >  
> >  		local_irq_disable();
> >  
> > +		kvm_arch_vcpu_enter_ctxsync_fp(vcpu);
> > +
> >  		kvm_vgic_flush_hwstate(vcpu);
> >  
> >  		/*
> > @@ -769,7 +771,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
> >  		if (static_branch_unlikely(&userspace_irqchip_in_use))
> >  			kvm_timer_sync_user(vcpu);
> >  
> > -		kvm_arch_vcpu_ctxsync_fp(vcpu);
> > +		kvm_arch_vcpu_exit_ctxsync_fp(vcpu);
> >  
> >  		/*
> >  		 * We may have taken a host interrupt in HYP mode (ie
> > diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
> > index 3e081d556e81..aec91f43df62 100644
> > --- a/arch/arm64/kvm/fpsimd.c
> > +++ b/arch/arm64/kvm/fpsimd.c
> > @@ -27,22 +27,13 @@ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu)
> >  {
> >  	int ret;
> >  
> > -	struct thread_info *ti = &current->thread_info;
> >  	struct user_fpsimd_state *fpsimd = &current->thread.uw.fpsimd_state;
> >  
> > -	/*
> > -	 * Make sure the host task thread flags and fpsimd state are
> > -	 * visible to hyp:
> > -	 */
> > -	ret = create_hyp_mappings(ti, ti + 1, PAGE_HYP);
> > -	if (ret)
> > -		goto error;
> > -
> > +	/* Make sure the host task fpsimd state is visible to hyp: */
> >  	ret = create_hyp_mappings(fpsimd, fpsimd + 1, PAGE_HYP);
> >  	if (ret)
> >  		goto error;
> >  
> > -	vcpu->arch.host_thread_info = kern_hyp_va(ti);
> >  	vcpu->arch.host_fpsimd_state = kern_hyp_va(fpsimd);
> >  error:
> >  	return ret;
> > @@ -52,7 +43,7 @@ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu)
> >   * Prepare vcpu for saving the host's FPSIMD state and loading the guest's.
> >   * The actual loading is done by the FPSIMD access trap taken to hyp.
> >   *
> > - * Here, we just set the correct metadata to indicate that the FPSIMD
> > + * Here, we just set the correct metadata to indicate whether the FPSIMD
> >   * state in the cpu regs (if any) belongs to current on the host.
> >   *
> >   * TIF_SVE is backed up here, since it may get clobbered with guest state.
> > @@ -63,15 +54,46 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
> >  	BUG_ON(!current->mm);
> >  
> >  	vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED |
> > +			      KVM_ARM64_FP_HOST |
> >  			      KVM_ARM64_HOST_SVE_IN_USE |
> >  			      KVM_ARM64_HOST_SVE_ENABLED);
> > +
> > +	if (!system_supports_fpsimd())
> > +		return;
> > +
> > +	/*
> > +	 * Having just come from the user task, if any FP state is loaded it
> > +	 * will be that of the task. Make a note of this but, just before
> > +	 * entering the vcpu, it will be double checked that the loaded FP
> > +	 * state isn't transient because things could change between now and
> > +	 * then.
> > +	 */
> >  	vcpu->arch.flags |= KVM_ARM64_FP_HOST;
> >  
> > -	if (test_thread_flag(TIF_SVE))
> > -		vcpu->arch.flags |= KVM_ARM64_HOST_SVE_IN_USE;
> > +	if (system_supports_sve()) {
> 
> This is a change and should be noted.
 
Ack

> Looks reasonable, though.
> 
> To ensure that we aren't breaking assumptions here, double-check that we
> also have system_supports_sve() in the appropriate places in the Hyp
> code.   We almost certainly do, but it doesn't hurt to review it.

Yes, hyp gates the fp trap handling on system_supports_fpsimd() and
further gates SVE on system_supports_sve().

> > +		if (test_thread_flag(TIF_SVE))
> > +			vcpu->arch.flags |= KVM_ARM64_HOST_SVE_IN_USE;
> >  
> > -	if (read_sysreg(cpacr_el1) & CPACR_EL1_ZEN_EL0EN)
> > -		vcpu->arch.flags |= KVM_ARM64_HOST_SVE_ENABLED;
> > +		if (read_sysreg(cpacr_el1) & CPACR_EL1_ZEN_EL0EN)
> > +			vcpu->arch.flags |= KVM_ARM64_HOST_SVE_ENABLED;
> > +	}
> > +}
> > +
> > +void kvm_arch_vcpu_enter_ctxsync_fp(struct kvm_vcpu *vcpu)
> > +{
> > +	WARN_ON_ONCE(!irqs_disabled());
> > +
> > +	if (!system_supports_fpsimd())
> > +		return;
> > +
> > +	/*
> > +	 * If the CPU's FP state is transient, there is no need to save the
> > +	 * current state. Without further information, it must also be assumed
> > +	 * that the vcpu's state is not loaded.
> > +	 */
> > +	if (test_thread_flag(TIF_FOREIGN_FPSTATE))
> > +		vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED |
> > +				      KVM_ARM64_FP_HOST);
> >  }
> >  
> >  /*
> > @@ -80,7 +102,7 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
> >   * so that they will be written back if the kernel clobbers them due to
> >   * kernel-mode NEON before re-entry into the guest.
> >   */
> > -void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu)
> > +void kvm_arch_vcpu_exit_ctxsync_fp(struct kvm_vcpu *vcpu)
> >  {
> >  	WARN_ON_ONCE(!irqs_disabled());
> >  
> > @@ -106,6 +128,9 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
> >  	bool host_has_sve = system_supports_sve();
> >  	bool guest_has_sve = vcpu_has_sve(vcpu);
> >  
> > +	if (!system_supports_fpsimd())
> > +		return;
> > +
> 
> Is this a bugfix, an optimisation, or what?

This was out of concern over the comment in the hyp switching code that
said, roughly, that some flags can't be trusted if FPSIMD isn't
supported. However, I'm convinced that KVM_ARM64_FP_ENABLED will not be
set if !system_supports_fpsimd(), so you're right to question this.

* Re: [PATCH 2/2] KVM: arm64: Leave vcpu FPSIMD synchronization in host
  2020-07-13 20:42     ` Andrew Scull
@ 2020-07-14 16:17       ` Dave Martin
  0 siblings, 0 replies; 6+ messages in thread
From: Dave Martin @ 2020-07-14 16:17 UTC (permalink / raw)
  To: Andrew Scull; +Cc: maz, kernel-team, kvmarm

On Mon, Jul 13, 2020 at 09:42:04PM +0100, Andrew Scull wrote:
> On Mon, Jul 13, 2020 at 05:04:21PM +0100, Dave Martin wrote:
> > On Fri, Jul 10, 2020 at 10:57:54AM +0100, Andrew Scull wrote:
> > > The task state can be checked by the host and the vcpu flags updated
> > > before calling into hyp. This more neatly separates the concerns and
> > > removes the need to map the task flags to EL2.
> > > 
> > > Hyp acts on the state provided to it by the host and updates it when
> > > switching to the vcpu state.
> > 
> > Can this patch be split up?  We have a few overlapping changes here.
> > 
> > i.e., renaming and refactoring of hooks; moving some code around; and a
> > couple of other changes that are not directly related (noted below).
> 
> Indeed it can, into at least 3.
> 
> > Overall this looks like a decent cleanup however.  It was always a bit
> > nasty to have to map the thread flags into Hyp.
> 
> Glad to hear. I'll have to get it into better shape.
> 
> > Side question: currently we do fpsimd_save_and_flush_cpu_state() in
> > kvm_arch_vcpu_put_fp().  Can we remove the flush so that the vcpu state
> > lingers in the CPU regs and can be reclaimed when switching back to the
> > KVM thread?
> > 
> > This could remove some overhead when the KVM run loop is preempted by a
> > kernel thread and subsequently resumed without passing through userspace.
> > 
> > We would need to flush this state when anything else tries to change the
> > vcpu FP regs, which is one reason I skipped this previously: it would
> > require a bit of refactoring of fpsimd_flush_task_state() so that a non-
> > task context can also be flushed.
> > 
> > (This isn't directly related to this series.)
> 
> I don't plan to address this at the moment, but I do believe there are
> opportunities to reduce the number of saves and restores needed. If the
> flush is removed, a similar check to the one done for tasks could also
> apply to vCPUs, i.e. if the last FPSIMD state this CPU held was the
> vCPU's and the vCPU last ran on this CPU, then the vCPU's FPSIMD state
> is already loaded.

Sounds reasonable.

As you observe, this is mostly a case of refactoring the code a bit and
making the vcpu context slightly less of a special case.

(And if you don't do it, no worries -- I just couldn't resist getting it
done "for free" ;)
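A minimal sketch of that reuse check, under the assumption that per-CPU
ownership tracking is extended to vcpus (all names here -- fp_owner,
last_fp_cpu -- are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

struct vcpu {
	int last_fp_cpu;	/* CPU that last held this vcpu's FP state */
};

/* Simulated per-CPU record of whose state is in the FP registers. */
#define NR_CPUS 4
static const void *fp_owner[NR_CPUS];

/*
 * Returns nonzero when the vcpu's FPSIMD state must be reloaded.
 * Mirrors the check the task code makes with fpsimd_last_state: if this
 * CPU's regs still hold this vcpu's state and the vcpu has not run
 * elsewhere since, no restore is needed.
 */
static int vcpu_fp_reload_needed(const struct vcpu *vcpu, int this_cpu)
{
	return !(fp_owner[this_cpu] == vcpu && vcpu->last_fp_cpu == this_cpu);
}
```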

> 
> > Additional minor comments below.
> > 
> > > 
> > > No functional change.
> > > 
> > > Signed-off-by: Andrew Scull <ascull@google.com>
> > > ---
> > >  arch/arm64/include/asm/kvm_host.h       |  5 +--
> > >  arch/arm64/kvm/arm.c                    |  4 +-
> > >  arch/arm64/kvm/fpsimd.c                 | 57 ++++++++++++++++++-------
> > >  arch/arm64/kvm/hyp/include/hyp/switch.h | 19 ---------
> > >  arch/arm64/kvm/hyp/nvhe/switch.c        |  3 +-
> > >  arch/arm64/kvm/hyp/vhe/switch.c         |  3 +-
> > >  6 files changed, 48 insertions(+), 43 deletions(-)
> > > 
> > > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > > index b06f24b5f443..ca1621eeb9d9 100644
> > > --- a/arch/arm64/include/asm/kvm_host.h
> > > +++ b/arch/arm64/include/asm/kvm_host.h
> > > @@ -24,7 +24,6 @@
> > >  #include <asm/fpsimd.h>
> > >  #include <asm/kvm.h>
> > >  #include <asm/kvm_asm.h>
> > > -#include <asm/thread_info.h>
> > >  
> > >  #define __KVM_HAVE_ARCH_INTC_INITIALIZED
> > >  
> > > @@ -320,7 +319,6 @@ struct kvm_vcpu_arch {
> > >  	struct kvm_guest_debug_arch vcpu_debug_state;
> > >  	struct kvm_guest_debug_arch external_debug_state;
> > >  
> > > -	struct thread_info *host_thread_info;	/* hyp VA */
> > >  	struct user_fpsimd_state *host_fpsimd_state;	/* hyp VA */
> > >  
> > >  	struct {
> > > @@ -616,7 +614,8 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
> > >  /* Guest/host FPSIMD coordination helpers */
> > >  int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
> > >  void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu);
> > > -void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu);
> > > +void kvm_arch_vcpu_enter_ctxsync_fp(struct kvm_vcpu *vcpu);
> > > +void kvm_arch_vcpu_exit_ctxsync_fp(struct kvm_vcpu *vcpu);
> > 
> > I find these names a bit confusing.
> > 
> > Maybe
> > 
> > 	kvm_arch_vcpu_ctxsync_fp_before_guest_enter()
> > 	kvm_arch_vcpu_ctxsync_fp_after_guest_exit()
> > 
> > (we could probably drop the "ctx" to make these slightly shorter).
> 
> Changed to kvm_arch_vcpu_sync_fp_{before,after}_run()

Fair enough.

> > >  void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu);
> > >  
> > >  static inline bool kvm_pmu_counter_deferred(struct perf_event_attr *attr)
> > > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > > index 98f05bdac3c1..c7a711ca840e 100644
> > > --- a/arch/arm64/kvm/arm.c
> > > +++ b/arch/arm64/kvm/arm.c
> > > @@ -682,6 +682,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
> > >  
> > >  		local_irq_disable();
> > >  
> > > +		kvm_arch_vcpu_enter_ctxsync_fp(vcpu);
> > > +
> > >  		kvm_vgic_flush_hwstate(vcpu);
> > >  
> > >  		/*
> > > @@ -769,7 +771,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
> > >  		if (static_branch_unlikely(&userspace_irqchip_in_use))
> > >  			kvm_timer_sync_user(vcpu);
> > >  
> > > -		kvm_arch_vcpu_ctxsync_fp(vcpu);
> > > +		kvm_arch_vcpu_exit_ctxsync_fp(vcpu);
> > >  
> > >  		/*
> > >  		 * We may have taken a host interrupt in HYP mode (ie
> > > diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
> > > index 3e081d556e81..aec91f43df62 100644
> > > --- a/arch/arm64/kvm/fpsimd.c
> > > +++ b/arch/arm64/kvm/fpsimd.c
> > > @@ -27,22 +27,13 @@ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu)
> > >  {
> > >  	int ret;
> > >  
> > > -	struct thread_info *ti = &current->thread_info;
> > >  	struct user_fpsimd_state *fpsimd = &current->thread.uw.fpsimd_state;
> > >  
> > > -	/*
> > > -	 * Make sure the host task thread flags and fpsimd state are
> > > -	 * visible to hyp:
> > > -	 */
> > > -	ret = create_hyp_mappings(ti, ti + 1, PAGE_HYP);
> > > -	if (ret)
> > > -		goto error;
> > > -
> > > +	/* Make sure the host task fpsimd state is visible to hyp: */
> > >  	ret = create_hyp_mappings(fpsimd, fpsimd + 1, PAGE_HYP);
> > >  	if (ret)
> > >  		goto error;
> > >  
> > > -	vcpu->arch.host_thread_info = kern_hyp_va(ti);
> > >  	vcpu->arch.host_fpsimd_state = kern_hyp_va(fpsimd);
> > >  error:
> > >  	return ret;
> > > @@ -52,7 +43,7 @@ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu)
> > >   * Prepare vcpu for saving the host's FPSIMD state and loading the guest's.
> > >   * The actual loading is done by the FPSIMD access trap taken to hyp.
> > >   *
> > > - * Here, we just set the correct metadata to indicate that the FPSIMD
> > > + * Here, we just set the correct metadata to indicate whether the FPSIMD
> > >   * state in the cpu regs (if any) belongs to current on the host.
> > >   *
> > >   * TIF_SVE is backed up here, since it may get clobbered with guest state.
> > > @@ -63,15 +54,46 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
> > >  	BUG_ON(!current->mm);
> > >  
> > >  	vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED |
> > > +			      KVM_ARM64_FP_HOST |
> > >  			      KVM_ARM64_HOST_SVE_IN_USE |
> > >  			      KVM_ARM64_HOST_SVE_ENABLED);
> > > +
> > > +	if (!system_supports_fpsimd())
> > > +		return;
> > > +
> > > +	/*
> > > +	 * Having just come from the user task, if any FP state is loaded it
> > > +	 * will be that of the task. Make a note of this but, just before
> > > +	 * entering the vcpu, it will be double checked that the loaded FP
> > > +	 * state isn't transient because things could change between now and
> > > +	 * then.
> > > +	 */
> > >  	vcpu->arch.flags |= KVM_ARM64_FP_HOST;
> > >  
> > > -	if (test_thread_flag(TIF_SVE))
> > > -		vcpu->arch.flags |= KVM_ARM64_HOST_SVE_IN_USE;
> > > +	if (system_supports_sve()) {
> > 
> > This is a change and should be noted.
>  
> Ack
> 
> > Looks reasonable, though.
> > 
> > To ensure that we aren't breaking assumptions here, double-check that we
> > also have system_supports_sve() in the appropriate places in the Hyp
> > code.   We almost certainly do, but it doesn't hurt to review it.
> 
> Yes, hyp gates the fp trap handling on system_supports_fpsimd() and
> further gates SVE on system_supports_sve().

OK, that's reassuring.

> > > +		if (test_thread_flag(TIF_SVE))
> > > +			vcpu->arch.flags |= KVM_ARM64_HOST_SVE_IN_USE;
> > >  
> > > -	if (read_sysreg(cpacr_el1) & CPACR_EL1_ZEN_EL0EN)
> > > -		vcpu->arch.flags |= KVM_ARM64_HOST_SVE_ENABLED;
> > > +		if (read_sysreg(cpacr_el1) & CPACR_EL1_ZEN_EL0EN)
> > > +			vcpu->arch.flags |= KVM_ARM64_HOST_SVE_ENABLED;
> > > +	}
> > > +}
> > > +
> > > +void kvm_arch_vcpu_enter_ctxsync_fp(struct kvm_vcpu *vcpu)
> > > +{
> > > +	WARN_ON_ONCE(!irqs_disabled());
> > > +
> > > +	if (!system_supports_fpsimd())
> > > +		return;
> > > +
> > > +	/*
> > > +	 * If the CPU's FP state is transient, there is no need to save the
> > > +	 * current state. Without further information, it must also be assumed
> > > +	 * that the vcpu's state is not loaded.
> > > +	 */
> > > +	if (test_thread_flag(TIF_FOREIGN_FPSTATE))
> > > +		vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED |
> > > +				      KVM_ARM64_FP_HOST);
> > >  }
> > >  
> > >  /*
> > > @@ -80,7 +102,7 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
> > >   * so that they will be written back if the kernel clobbers them due to
> > >   * kernel-mode NEON before re-entry into the guest.
> > >   */
> > > -void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu)
> > > +void kvm_arch_vcpu_exit_ctxsync_fp(struct kvm_vcpu *vcpu)
> > >  {
> > >  	WARN_ON_ONCE(!irqs_disabled());
> > >  
> > > @@ -106,6 +128,9 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
> > >  	bool host_has_sve = system_supports_sve();
> > >  	bool guest_has_sve = vcpu_has_sve(vcpu);
> > >  
> > > +	if (!system_supports_fpsimd())
> > > +		return;
> > > +
> > 
> > Is this a bugfix, an optimisation, or what?
> 
> This came from concern over the comment in the hyp switching code that
> said, roughly, that some flags can't be trusted if FPSIMD isn't
> supported. However, I'm convinced that FP_ENABLED will not be set if
> !system_supports_fpsimd(), so you're right to question this.

If we can satisfy ourselves that the important flags never get set, it
might be better to drop it.

If there's significant doubt, then I wonder whether it would be worth
adding a WARN() instead.
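
A rough sketch of what that WARN() variant might look like (the flag
values and the WARN helper are simulated here so the behaviour can be
exercised in userspace; only the early-return shape comes from the
patch):

```c
#include <assert.h>

#define KVM_ARM64_FP_ENABLED	(1UL << 1)	/* illustrative values */
#define KVM_ARM64_FP_HOST	(1UL << 2)

/* Count warnings instead of printing, so the behaviour is checkable. */
static int warn_count;
#define WARN_ON(cond) do { if (cond) warn_count++; } while (0)

/*
 * Early-return path of kvm_arch_vcpu_put_fp(): if FPSIMD is not
 * supported, none of the FP flags should ever have been set, so warn
 * rather than silently skip the save.
 */
static void put_fp_early_check(unsigned long flags, int supports_fpsimd)
{
	if (!supports_fpsimd) {
		WARN_ON(flags & (KVM_ARM64_FP_ENABLED | KVM_ARM64_FP_HOST));
		return;
	}
	/* ... normal host/guest FP state save path ... */
}
```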

Cheers
---Dave
