* [PATCH] kvm: arm/arm64: fold VHE entry/exit work into kvm_vcpu_run_vhe()
@ 2020-02-10 11:47 Mark Rutland
2020-02-17 14:41 ` Marc Zyngier
From: Mark Rutland @ 2020-02-10 11:47 UTC
To: maz; +Cc: Catalin Marinas, linux-arm-kernel, Will Deacon, kvmarm
With VHE, running a vCPU always requires the sequence:
1. kvm_arm_vhe_guest_enter();
2. kvm_vcpu_run_vhe();
3. kvm_arm_vhe_guest_exit()
... and as we invoke this from the shared arm/arm64 KVM code, 32-bit arm
has to provide stubs for all three functions.
To simplify the common code, and make it easier to make further
modifications to the arm64-specific portions in the near future, let's
fold kvm_arm_vhe_guest_enter() and kvm_arm_vhe_guest_exit() into
kvm_vcpu_run_vhe().
The 32-bit stubs for kvm_arm_vhe_guest_enter() and
kvm_arm_vhe_guest_exit() are removed, as they are no longer used. The
32-bit stub for kvm_vcpu_run_vhe() is left as-is.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: kvmarm@lists.cs.columbia.edu
---
arch/arm/include/asm/kvm_host.h | 3 ---
arch/arm64/include/asm/kvm_host.h | 32 --------------------------------
arch/arm64/kvm/hyp/switch.c | 39 +++++++++++++++++++++++++++++++++++++--
virt/kvm/arm/arm.c | 2 --
4 files changed, 37 insertions(+), 39 deletions(-)
Hi Marc,
This is a preparatory patch for the arm64 entry rework I'm doing: I need to
move the DAIF manipulation into the same function so that I can replace that
with a common DAIF+PMR save/restore sequence.
If it's possible to queue this patch for v5.6-rc*, it would help to minimize
conflicts in the v5.7 merge window, and would be much appreciated.
I've tested this on a ThunderX2 machine atop v5.6-rc1 defconfig, booting the
host and running a test VM.
Thanks,
Mark.
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index c3314b286a61..a827b4d60d38 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -392,9 +392,6 @@ static inline void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu) {}
static inline void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu) {}
static inline void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu) {}
-static inline void kvm_arm_vhe_guest_enter(void) {}
-static inline void kvm_arm_vhe_guest_exit(void) {}
-
#define KVM_BP_HARDEN_UNKNOWN -1
#define KVM_BP_HARDEN_WA_NEEDED 0
#define KVM_BP_HARDEN_NOT_REQUIRED 1
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index d87aa609d2b6..57fd46acd058 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -626,38 +626,6 @@ static inline void kvm_set_pmu_events(u32 set, struct perf_event_attr *attr) {}
static inline void kvm_clr_pmu_events(u32 clr) {}
#endif
-static inline void kvm_arm_vhe_guest_enter(void)
-{
- local_daif_mask();
-
- /*
- * Having IRQs masked via PMR when entering the guest means the GIC
- * will not signal the CPU of interrupts of lower priority, and the
- * only way to get out will be via guest exceptions.
- * Naturally, we want to avoid this.
- *
- * local_daif_mask() already sets GIC_PRIO_PSR_I_SET, we just need a
- * dsb to ensure the redistributor forwards EL2 IRQs to the CPU.
- */
- pmr_sync();
-}
-
-static inline void kvm_arm_vhe_guest_exit(void)
-{
- /*
- * local_daif_restore() takes care to properly restore PSTATE.DAIF
- * and the GIC PMR if the host is using IRQ priorities.
- */
- local_daif_restore(DAIF_PROCCTX_NOIRQ);
-
- /*
- * When we exit from the guest we change a number of CPU configuration
- * parameters, such as traps. Make sure these changes take effect
- * before running the host or additional guests.
- */
- isb();
-}
-
#define KVM_BP_HARDEN_UNKNOWN -1
#define KVM_BP_HARDEN_WA_NEEDED 0
#define KVM_BP_HARDEN_NOT_REQUIRED 1
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index dfe8dd172512..925086b46136 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -625,7 +625,7 @@ static void __hyp_text __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt)
}
/* Switch to the guest for VHE systems running in EL2 */
-int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
+static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
{
struct kvm_cpu_context *host_ctxt;
struct kvm_cpu_context *guest_ctxt;
@@ -678,7 +678,42 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
return exit_code;
}
-NOKPROBE_SYMBOL(kvm_vcpu_run_vhe);
+NOKPROBE_SYMBOL(__kvm_vcpu_run_vhe);
+
+int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
+{
+ int ret;
+
+ local_daif_mask();
+
+ /*
+ * Having IRQs masked via PMR when entering the guest means the GIC
+ * will not signal the CPU of interrupts of lower priority, and the
+ * only way to get out will be via guest exceptions.
+ * Naturally, we want to avoid this.
+ *
+ * local_daif_mask() already sets GIC_PRIO_PSR_I_SET, we just need a
+ * dsb to ensure the redistributor forwards EL2 IRQs to the CPU.
+ */
+ pmr_sync();
+
+ ret = __kvm_vcpu_run_vhe(vcpu);
+
+ /*
+ * local_daif_restore() takes care to properly restore PSTATE.DAIF
+ * and the GIC PMR if the host is using IRQ priorities.
+ */
+ local_daif_restore(DAIF_PROCCTX_NOIRQ);
+
+ /*
+ * When we exit from the guest we change a number of CPU configuration
+ * parameters, such as traps. Make sure these changes take effect
+ * before running the host or additional guests.
+ */
+ isb();
+
+ return ret;
+}
/* Switch to the guest for legacy non-VHE systems */
int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index d65a0faa46d8..eda7b624eab8 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -742,9 +742,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
guest_enter_irqoff();
if (has_vhe()) {
- kvm_arm_vhe_guest_enter();
ret = kvm_vcpu_run_vhe(vcpu);
- kvm_arm_vhe_guest_exit();
} else {
ret = kvm_call_hyp_ret(__kvm_vcpu_run_nvhe, vcpu);
}
--
2.11.0
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
* Re: [PATCH] kvm: arm/arm64: fold VHE entry/exit work into kvm_vcpu_run_vhe()
2020-02-10 11:47 [PATCH] kvm: arm/arm64: fold VHE entry/exit work into kvm_vcpu_run_vhe() Mark Rutland
@ 2020-02-17 14:41 ` Marc Zyngier
From: Marc Zyngier @ 2020-02-17 14:41 UTC
To: Mark Rutland; +Cc: Catalin Marinas, linux-arm-kernel, Will Deacon, kvmarm
Hi Mark,
On 2020-02-10 11:47, Mark Rutland wrote:
> [...]
Looks good to me, I've taken it as part of the (very small) set of fixes
queued for -rc3.
Thanks,
M.
--
Jazz is not dead. It just smells funny...