linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/3] KVM: arm64: Proposed host stage-2 improvements
@ 2021-03-22 16:48 Marc Zyngier
  2021-03-22 16:48 ` [PATCH 1/3] KVM: arm64: Constrain KVM's own __flush_dcache_area to protected mode Marc Zyngier
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Marc Zyngier @ 2021-03-22 16:48 UTC (permalink / raw)
  To: Quentin Perret
  Cc: catalin.marinas, james.morse, julien.thierry.kdev,
	suzuki.poulose, android-kvm, seanjc, mate.toth-pal, linux-kernel,
	linux-arm-kernel, kernel-team, kvmarm, tabba, ardb, mark.rutland,
	dbrazdil

Hi all,

Since Quentin's series is pretty close to final, I thought that instead
of asking for additional rework, I'd have a go at it myself. These
patches try to bring some simplifications to the cpufeature
duplication that has been introduced between EL1 and EL2.

This whole infrastructure exists for a single reason: making the
*sanitised* versions of ID_AA64MMFR{0,1}_EL1 available to EL2. On top
of that, the read_ctr macro gets in the way as it needs direct access
to arm64_ftr_reg_ctrel0 to cope with ARM64_MISMATCHED_CACHE_TYPE.

This series tackles the latter point first by taking advantage of the
fact that with pKVM enabled, late CPUs aren't allowed to boot, and
thus that we know the final CTR_EL0 value before KVM starts, no matter
whether there is a mismatch or not. We can thus specialise read_ctr to
do the right thing without requiring access to the EL1 data structure.

Once that's sorted, we can easily simplify the whole infrastructure to
only snapshot the two u64 values we need before enabling protected mode.
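
Concretely, the snapshot amounts to nothing more than two assignments
performed at EL1 before the hypervisor is initialised (lifted from
patch 3):

	kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
	kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);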

Tested on a Synquacer system.

	M.

Marc Zyngier (3):
  KVM: arm64: Constrain KVM's own __flush_dcache_area to protected
    mode
  KVM: arm64: Generate final CTR_EL0 value when running in Protected
    mode
  KVM: arm64: Drop the CPU_FTR_REG_HYP_COPY infrastructure

 arch/arm64/include/asm/assembler.h      |  9 +++++++++
 arch/arm64/include/asm/cpufeature.h     |  1 -
 arch/arm64/include/asm/kvm_cpufeature.h | 26 -------------------------
 arch/arm64/include/asm/kvm_host.h       |  4 ----
 arch/arm64/include/asm/kvm_hyp.h        |  3 +++
 arch/arm64/kernel/cpufeature.c          | 13 -------------
 arch/arm64/kernel/image-vars.h          |  1 +
 arch/arm64/kvm/hyp/nvhe/cache.S         |  4 ++++
 arch/arm64/kvm/hyp/nvhe/hyp-smp.c       |  6 ++----
 arch/arm64/kvm/hyp/nvhe/mem_protect.c   |  5 ++---
 arch/arm64/kvm/sys_regs.c               | 23 ++--------------------
 arch/arm64/kvm/va_layout.c              |  7 +++++++
 12 files changed, 30 insertions(+), 72 deletions(-)
 delete mode 100644 arch/arm64/include/asm/kvm_cpufeature.h

-- 
2.29.2


^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH 1/3] KVM: arm64: Constrain KVM's own __flush_dcache_area to protected mode
  2021-03-22 16:48 [PATCH 0/3] KVM: arm64: Proposed host stage-2 improvements Marc Zyngier
@ 2021-03-22 16:48 ` Marc Zyngier
  2021-03-22 16:48 ` [PATCH 2/3] KVM: arm64: Generate final CTR_EL0 value when running in Protected mode Marc Zyngier
  2021-03-22 16:48 ` [PATCH 3/3] KVM: arm64: Drop the CPU_FTR_REG_HYP_COPY infrastructure Marc Zyngier
  2 siblings, 0 replies; 7+ messages in thread
From: Marc Zyngier @ 2021-03-22 16:48 UTC (permalink / raw)
  To: Quentin Perret
  Cc: catalin.marinas, james.morse, julien.thierry.kdev,
	suzuki.poulose, android-kvm, seanjc, mate.toth-pal, linux-kernel,
	linux-arm-kernel, kernel-team, kvmarm, tabba, ardb, mark.rutland,
	dbrazdil

As we are about to specialise KVM's version of __flush_dcache_area
via a hack on the read_ctr macro, make sure that we can never
use KVM's copy of __flush_dcache_area() outside of protected mode,
as the patched-in CTR_EL0 value would otherwise be wrong for
late-arriving CPUs.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
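Note: ARM64_KVM_PROTECTED_MODE is an ordinary cpucap, so the
alternatives framework decides at boot whether the ASM_BUG() slot
below stays live or gets NOP'd. A rough C equivalent of the patched
prologue (an analogy only, not actual kernel code):

	/* non-protected mode: the BRK emitted by ASM_BUG() fires */
	if (!cpus_have_const_cap(ARM64_KVM_PROTECTED_MODE))
		BUG();
	/* protected mode: the slot is NOP'd, the cache op proceeds */
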
 arch/arm64/kvm/hyp/nvhe/cache.S | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/arm64/kvm/hyp/nvhe/cache.S b/arch/arm64/kvm/hyp/nvhe/cache.S
index 36cef6915428..1c177d3ec5c6 100644
--- a/arch/arm64/kvm/hyp/nvhe/cache.S
+++ b/arch/arm64/kvm/hyp/nvhe/cache.S
@@ -6,8 +6,12 @@
 #include <linux/linkage.h>
 #include <asm/assembler.h>
 #include <asm/alternative.h>
+#include <asm/asm-bug.h>
 
 SYM_FUNC_START_PI(__flush_dcache_area)
+alternative_if_not ARM64_KVM_PROTECTED_MODE
+	ASM_BUG()
+alternative_else_nop_endif
 	dcache_by_line_op civac, sy, x0, x1, x2, x3
 	ret
 SYM_FUNC_END_PI(__flush_dcache_area)
-- 
2.29.2


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* [PATCH 2/3] KVM: arm64: Generate final CTR_EL0 value when running in Protected mode
  2021-03-22 16:48 [PATCH 0/3] KVM: arm64: Proposed host stage-2 improvements Marc Zyngier
  2021-03-22 16:48 ` [PATCH 1/3] KVM: arm64: Constrain KVM's own __flush_dcache_area to protected mode Marc Zyngier
@ 2021-03-22 16:48 ` Marc Zyngier
  2021-03-22 17:40   ` Quentin Perret
  2021-03-22 16:48 ` [PATCH 3/3] KVM: arm64: Drop the CPU_FTR_REG_HYP_COPY infrastructure Marc Zyngier
  2 siblings, 1 reply; 7+ messages in thread
From: Marc Zyngier @ 2021-03-22 16:48 UTC (permalink / raw)
  To: Quentin Perret
  Cc: catalin.marinas, james.morse, julien.thierry.kdev,
	suzuki.poulose, android-kvm, seanjc, mate.toth-pal, linux-kernel,
	linux-arm-kernel, kernel-team, kvmarm, tabba, ardb, mark.rutland,
	dbrazdil

In protected mode, late CPUs are not allowed to boot (enforced by
the PSCI relay). We can thus specialise the read_ctr macro to
always return a pre-computed, sanitised value.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
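Note on the movz/movk sequence below: the four instructions are only
placeholders. At boot, generate_mov_q() rewrites them with the 16-bit
chunks of the value it is given, baking the final CTR_EL0 into the
text. Conceptually, the callback does something like this sketch,
where encode_movzk() is a hypothetical stand-in for the real
aarch64_insn_gen_*() helpers:

	u64 val = read_sanitised_ftr_reg(SYS_CTR_EL0);
	int i;

	for (i = 0; i < 4; i++) {
		u16 chunk = (val >> (16 * i)) & 0xffff;
		/* movz for the low chunk, movk with lsl #(16 * i) above it */
		updptr[i] = cpu_to_le32(encode_movzk(chunk, 16 * i));
	}
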
 arch/arm64/include/asm/assembler.h | 9 +++++++++
 arch/arm64/kernel/image-vars.h     | 1 +
 arch/arm64/kvm/va_layout.c         | 7 +++++++
 3 files changed, 17 insertions(+)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index fb651c1f26e9..1a4cee7eb3c9 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -270,12 +270,21 @@ alternative_endif
  * provide the system wide safe value from arm64_ftr_reg_ctrel0.sys_val
  */
 	.macro	read_ctr, reg
+#ifndef __KVM_NVHE_HYPERVISOR__
 alternative_if_not ARM64_MISMATCHED_CACHE_TYPE
 	mrs	\reg, ctr_el0			// read CTR
 	nop
 alternative_else
 	ldr_l	\reg, arm64_ftr_reg_ctrel0 + ARM64_FTR_SYSVAL
 alternative_endif
+#else
+alternative_cb kvm_compute_final_ctr_el0
+	movz	\reg, #0
+	movk	\reg, #0, lsl #16
+	movk	\reg, #0, lsl #32
+	movk	\reg, #0, lsl #48
+alternative_cb_end
+#endif
 	.endm
 
 
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index d5dc2b792651..fdd60cd1d7e8 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -65,6 +65,7 @@ __efistub__ctype		= _ctype;
 KVM_NVHE_ALIAS(kvm_patch_vector_branch);
 KVM_NVHE_ALIAS(kvm_update_va_mask);
 KVM_NVHE_ALIAS(kvm_get_kimage_voffset);
+KVM_NVHE_ALIAS(kvm_compute_final_ctr_el0);
 
 /* Global kernel state accessed by nVHE hyp code. */
 KVM_NVHE_ALIAS(kvm_vgic_global_state);
diff --git a/arch/arm64/kvm/va_layout.c b/arch/arm64/kvm/va_layout.c
index 978301392d67..acdb7b3cc97d 100644
--- a/arch/arm64/kvm/va_layout.c
+++ b/arch/arm64/kvm/va_layout.c
@@ -288,3 +288,10 @@ void kvm_get_kimage_voffset(struct alt_instr *alt,
 {
 	generate_mov_q(kimage_voffset, origptr, updptr, nr_inst);
 }
+
+void kvm_compute_final_ctr_el0(struct alt_instr *alt,
+			       __le32 *origptr, __le32 *updptr, int nr_inst)
+{
+	generate_mov_q(read_sanitised_ftr_reg(SYS_CTR_EL0),
+		       origptr, updptr, nr_inst);
+}
-- 
2.29.2


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* [PATCH 3/3] KVM: arm64: Drop the CPU_FTR_REG_HYP_COPY infrastructure
  2021-03-22 16:48 [PATCH 0/3] KVM: arm64: Proposed host stage-2 improvements Marc Zyngier
  2021-03-22 16:48 ` [PATCH 1/3] KVM: arm64: Constrain KVM's own __flush_dcache_area to protected mode Marc Zyngier
  2021-03-22 16:48 ` [PATCH 2/3] KVM: arm64: Generate final CTR_EL0 value when running in Protected mode Marc Zyngier
@ 2021-03-22 16:48 ` Marc Zyngier
  2 siblings, 0 replies; 7+ messages in thread
From: Marc Zyngier @ 2021-03-22 16:48 UTC (permalink / raw)
  To: Quentin Perret
  Cc: catalin.marinas, james.morse, julien.thierry.kdev,
	suzuki.poulose, android-kvm, seanjc, mate.toth-pal, linux-kernel,
	linux-arm-kernel, kernel-team, kvmarm, tabba, ardb, mark.rutland,
	dbrazdil

Now that the read_ctr macro has been specialised for nVHE,
the whole CPU_FTR_REG_HYP_COPY infrastructure looks completely
overengineered.

Simplify it by populating the two u64 quantities (MMFR0 and MMFR1)
that the hypervisor needs.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
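Note: kvm_nvhe_sym() simply applies the nVHE symbol prefix (see
asm/hyp_image.h), so the EL1 boot code can initialise the
hypervisor's own copies of these variables directly:

	#define kvm_nvhe_sym(sym)	__kvm_nvhe_##sym

With that, the two assignments added at the end of
kvm_sys_reg_table_init() write straight into the u64s defined in
hyp-smp.c.
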
 arch/arm64/include/asm/cpufeature.h     |  1 -
 arch/arm64/include/asm/kvm_cpufeature.h | 26 -------------------------
 arch/arm64/include/asm/kvm_host.h       |  4 ----
 arch/arm64/include/asm/kvm_hyp.h        |  3 +++
 arch/arm64/kernel/cpufeature.c          | 13 -------------
 arch/arm64/kvm/hyp/nvhe/hyp-smp.c       |  6 ++----
 arch/arm64/kvm/hyp/nvhe/mem_protect.c   |  5 ++---
 arch/arm64/kvm/sys_regs.c               | 23 ++--------------------
 8 files changed, 9 insertions(+), 72 deletions(-)
 delete mode 100644 arch/arm64/include/asm/kvm_cpufeature.h

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index a85cea2cac57..61177bac49fa 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -607,7 +607,6 @@ void check_local_cpu_capabilities(void);
 
 u64 read_sanitised_ftr_reg(u32 id);
 u64 __read_sysreg_by_encoding(u32 sys_id);
-int copy_ftr_reg(u32 id, struct arm64_ftr_reg *dst);
 
 static inline bool cpu_supports_mixed_endian_el0(void)
 {
diff --git a/arch/arm64/include/asm/kvm_cpufeature.h b/arch/arm64/include/asm/kvm_cpufeature.h
deleted file mode 100644
index ff302d15e840..000000000000
--- a/arch/arm64/include/asm/kvm_cpufeature.h
+++ /dev/null
@@ -1,26 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Copyright (C) 2020 - Google LLC
- * Author: Quentin Perret <qperret@google.com>
- */
-
-#ifndef __ARM64_KVM_CPUFEATURE_H__
-#define __ARM64_KVM_CPUFEATURE_H__
-
-#include <asm/cpufeature.h>
-
-#include <linux/build_bug.h>
-
-#if defined(__KVM_NVHE_HYPERVISOR__)
-#define DECLARE_KVM_HYP_CPU_FTR_REG(name) extern struct arm64_ftr_reg name
-#define DEFINE_KVM_HYP_CPU_FTR_REG(name) struct arm64_ftr_reg name
-#else
-#define DECLARE_KVM_HYP_CPU_FTR_REG(name) extern struct arm64_ftr_reg kvm_nvhe_sym(name)
-#define DEFINE_KVM_HYP_CPU_FTR_REG(name) BUILD_BUG()
-#endif
-
-DECLARE_KVM_HYP_CPU_FTR_REG(arm64_ftr_reg_ctrel0);
-DECLARE_KVM_HYP_CPU_FTR_REG(arm64_ftr_reg_id_aa64mmfr0_el1);
-DECLARE_KVM_HYP_CPU_FTR_REG(arm64_ftr_reg_id_aa64mmfr1_el1);
-
-#endif
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 4859c9de75d7..09979cdec28b 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -740,13 +740,9 @@ void kvm_clr_pmu_events(u32 clr);
 
 void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
-
-void setup_kvm_el2_caps(void);
 #else
 static inline void kvm_set_pmu_events(u32 set, struct perf_event_attr *attr) {}
 static inline void kvm_clr_pmu_events(u32 clr) {}
-
-static inline void setup_kvm_el2_caps(void) {}
 #endif
 
 void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index de40a565d7e5..8ef9d88826d4 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -116,4 +116,7 @@ int __pkvm_init(phys_addr_t phys, unsigned long size, unsigned long nr_cpus,
 void __noreturn __host_enter(struct kvm_cpu_context *host_ctxt);
 #endif
 
+extern u64 kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val);
+extern u64 kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val);
+
 #endif /* __ARM64_KVM_HYP_H__ */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 6252476e4e73..066030717a4c 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1154,18 +1154,6 @@ u64 read_sanitised_ftr_reg(u32 id)
 }
 EXPORT_SYMBOL_GPL(read_sanitised_ftr_reg);
 
-int copy_ftr_reg(u32 id, struct arm64_ftr_reg *dst)
-{
-	struct arm64_ftr_reg *regp = get_arm64_ftr_reg(id);
-
-	if (!regp)
-		return -EINVAL;
-
-	*dst = *regp;
-
-	return 0;
-}
-
 #define read_sysreg_case(r)	\
 	case r:		val = read_sysreg_s(r); break;
 
@@ -2785,7 +2773,6 @@ void __init setup_cpu_features(void)
 
 	setup_system_capabilities();
 	setup_elf_hwcaps(arm64_elf_hwcaps);
-	setup_kvm_el2_caps();
 
 	if (system_supports_32bit_el0())
 		setup_elf_hwcaps(compat_elf_hwcaps);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-smp.c b/arch/arm64/kvm/hyp/nvhe/hyp-smp.c
index 17ad1b3a9530..706056013eb0 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-smp.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-smp.c
@@ -5,16 +5,14 @@
  */
 
 #include <asm/kvm_asm.h>
-#include <asm/kvm_cpufeature.h>
 #include <asm/kvm_hyp.h>
 #include <asm/kvm_mmu.h>
 
 /*
  * Copies of the host's CPU features registers holding sanitized values.
  */
-DEFINE_KVM_HYP_CPU_FTR_REG(arm64_ftr_reg_ctrel0);
-DEFINE_KVM_HYP_CPU_FTR_REG(arm64_ftr_reg_id_aa64mmfr0_el1);
-DEFINE_KVM_HYP_CPU_FTR_REG(arm64_ftr_reg_id_aa64mmfr1_el1);
+u64 id_aa64mmfr0_el1_sys_val;
+u64 id_aa64mmfr1_el1_sys_val;
 
 /*
  * nVHE copy of data structures tracking available CPU cores.
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 808e2471091b..be1148e763d0 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -5,7 +5,6 @@
  */
 
 #include <linux/kvm_host.h>
-#include <asm/kvm_cpufeature.h>
 #include <asm/kvm_emulate.h>
 #include <asm/kvm_hyp.h>
 #include <asm/kvm_mmu.h>
@@ -74,8 +73,8 @@ static void prepare_host_vtcr(void)
 	u32 parange, phys_shift;
 	u64 mmfr0, mmfr1;
 
-	mmfr0 = arm64_ftr_reg_id_aa64mmfr0_el1.sys_val;
-	mmfr1 = arm64_ftr_reg_id_aa64mmfr1_el1.sys_val;
+	mmfr0 = id_aa64mmfr0_el1_sys_val;
+	mmfr1 = id_aa64mmfr1_el1_sys_val;
 
 	/* The host stage 2 is id-mapped, so use parange for T0SZ */
 	parange = kvm_get_parange(mmfr0);
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index dfb3b4f9ca84..3093ac1b1099 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -21,7 +21,6 @@
 #include <asm/debug-monitors.h>
 #include <asm/esr.h>
 #include <asm/kvm_arm.h>
-#include <asm/kvm_cpufeature.h>
 #include <asm/kvm_emulate.h>
 #include <asm/kvm_hyp.h>
 #include <asm/kvm_mmu.h>
@@ -2775,25 +2774,7 @@ void kvm_sys_reg_table_init(void)
 			break;
 	/* Clear all higher bits. */
 	cache_levels &= (1 << (i*3))-1;
-}
-
-#define CPU_FTR_REG_HYP_COPY(id, name) \
-	{ .sys_id = id, .dst = (struct arm64_ftr_reg *)&kvm_nvhe_sym(name) }
-struct __ftr_reg_copy_entry {
-	u32			sys_id;
-	struct arm64_ftr_reg	*dst;
-} hyp_ftr_regs[] __initdata = {
-	CPU_FTR_REG_HYP_COPY(SYS_CTR_EL0, arm64_ftr_reg_ctrel0),
-	CPU_FTR_REG_HYP_COPY(SYS_ID_AA64MMFR0_EL1, arm64_ftr_reg_id_aa64mmfr0_el1),
-	CPU_FTR_REG_HYP_COPY(SYS_ID_AA64MMFR1_EL1, arm64_ftr_reg_id_aa64mmfr1_el1),
-};
 
-void __init setup_kvm_el2_caps(void)
-{
-	int i;
-
-	for (i = 0; i < ARRAY_SIZE(hyp_ftr_regs); i++) {
-		WARN(copy_ftr_reg(hyp_ftr_regs[i].sys_id, hyp_ftr_regs[i].dst),
-		     "%u feature register not found\n", hyp_ftr_regs[i].sys_id);
-	}
+	kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
+	kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
 }
-- 
2.29.2


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* Re: [PATCH 2/3] KVM: arm64: Generate final CTR_EL0 value when running in Protected mode
  2021-03-22 16:48 ` [PATCH 2/3] KVM: arm64: Generate final CTR_EL0 value when running in Protected mode Marc Zyngier
@ 2021-03-22 17:40   ` Quentin Perret
  2021-03-22 18:37     ` Marc Zyngier
  0 siblings, 1 reply; 7+ messages in thread
From: Quentin Perret @ 2021-03-22 17:40 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: catalin.marinas, james.morse, julien.thierry.kdev,
	suzuki.poulose, android-kvm, seanjc, mate.toth-pal, linux-kernel,
	linux-arm-kernel, kernel-team, kvmarm, tabba, ardb, mark.rutland,
	dbrazdil

Hey Marc,

On Monday 22 Mar 2021 at 16:48:27 (+0000), Marc Zyngier wrote:
> In protected mode, late CPUs are not allowed to boot (enforced by
> the PSCI relay). We can thus specialise the read_ctr macro to
> always return a pre-computed, sanitised value.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/include/asm/assembler.h | 9 +++++++++
>  arch/arm64/kernel/image-vars.h     | 1 +
>  arch/arm64/kvm/va_layout.c         | 7 +++++++
>  3 files changed, 17 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
> index fb651c1f26e9..1a4cee7eb3c9 100644
> --- a/arch/arm64/include/asm/assembler.h
> +++ b/arch/arm64/include/asm/assembler.h
> @@ -270,12 +270,21 @@ alternative_endif
>   * provide the system wide safe value from arm64_ftr_reg_ctrel0.sys_val
>   */
>  	.macro	read_ctr, reg
> +#ifndef __KVM_NVHE_HYPERVISOR__
>  alternative_if_not ARM64_MISMATCHED_CACHE_TYPE
>  	mrs	\reg, ctr_el0			// read CTR
>  	nop
>  alternative_else
>  	ldr_l	\reg, arm64_ftr_reg_ctrel0 + ARM64_FTR_SYSVAL
>  alternative_endif
> +#else
> +alternative_cb kvm_compute_final_ctr_el0
> +	movz	\reg, #0
> +	movk	\reg, #0, lsl #16
> +	movk	\reg, #0, lsl #32
> +	movk	\reg, #0, lsl #48
> +alternative_cb_end
> +#endif
>  	.endm

So, FWIW, if we wanted to make _this_ macro BUG in non-protected mode
(and drop patch 01), I think we could do something like:

alternative_cb kvm_compute_final_ctr_el0
	movz	\reg, #0
	ASM_BUG()
	nop
	nop
alternative_cb_end

and then make kvm_compute_final_ctr_el0() check that we're in protected
mode before patching. That would be marginally better as that would
cover _all_ users of read_ctr and not just __flush_dcache_area, but that
first movz is a bit yuck (but necessary to keep generate_mov_q() happy I
think?), so I'll leave the decision to you.
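
In callback form, that's a rough sketch like this (assuming
is_protected_kvm_enabled() is usable at patching time):

	void kvm_compute_final_ctr_el0(struct alt_instr *alt,
				       __le32 *origptr, __le32 *updptr, int nr_inst)
	{
		/* Not in protected mode: leave the default ASM_BUG() alone */
		if (!is_protected_kvm_enabled())
			return;

		generate_mov_q(read_sanitised_ftr_reg(SYS_CTR_EL0),
			       origptr, updptr, nr_inst);
	}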

No objection from me for the current implementation, and if you decide to
go with it:

Reviewed-by: Quentin Perret <qperret@google.com>

Thanks,
Quentin

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH 2/3] KVM: arm64: Generate final CTR_EL0 value when running in Protected mode
  2021-03-22 17:40   ` Quentin Perret
@ 2021-03-22 18:37     ` Marc Zyngier
  2021-03-23  9:47       ` Quentin Perret
  0 siblings, 1 reply; 7+ messages in thread
From: Marc Zyngier @ 2021-03-22 18:37 UTC (permalink / raw)
  To: Quentin Perret
  Cc: catalin.marinas, james.morse, julien.thierry.kdev,
	suzuki.poulose, android-kvm, seanjc, mate.toth-pal, linux-kernel,
	linux-arm-kernel, kernel-team, kvmarm, tabba, ardb, mark.rutland,
	dbrazdil

On Mon, 22 Mar 2021 17:40:40 +0000,
Quentin Perret <qperret@google.com> wrote:
> 
> Hey Marc,
> 
> On Monday 22 Mar 2021 at 16:48:27 (+0000), Marc Zyngier wrote:
> > In protected mode, late CPUs are not allowed to boot (enforced by
> > the PSCI relay). We can thus specialise the read_ctr macro to
> > always return a pre-computed, sanitised value.
> > 
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/arm64/include/asm/assembler.h | 9 +++++++++
> >  arch/arm64/kernel/image-vars.h     | 1 +
> >  arch/arm64/kvm/va_layout.c         | 7 +++++++
> >  3 files changed, 17 insertions(+)
> > 
> > diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
> > index fb651c1f26e9..1a4cee7eb3c9 100644
> > --- a/arch/arm64/include/asm/assembler.h
> > +++ b/arch/arm64/include/asm/assembler.h
> > @@ -270,12 +270,21 @@ alternative_endif
> >   * provide the system wide safe value from arm64_ftr_reg_ctrel0.sys_val
> >   */
> >  	.macro	read_ctr, reg
> > +#ifndef __KVM_NVHE_HYPERVISOR__
> >  alternative_if_not ARM64_MISMATCHED_CACHE_TYPE
> >  	mrs	\reg, ctr_el0			// read CTR
> >  	nop
> >  alternative_else
> >  	ldr_l	\reg, arm64_ftr_reg_ctrel0 + ARM64_FTR_SYSVAL
> >  alternative_endif
> > +#else
> > +alternative_cb kvm_compute_final_ctr_el0
> > +	movz	\reg, #0
> > +	movk	\reg, #0, lsl #16
> > +	movk	\reg, #0, lsl #32
> > +	movk	\reg, #0, lsl #48
> > +alternative_cb_end
> > +#endif
> >  	.endm
> 
> So, FWIW, if we wanted to make _this_ macro BUG in non-protected mode
> (and drop patch 01), I think we could do something like:
> 
> alternative_cb kvm_compute_final_ctr_el0
> 	movz	\reg, #0
> 	ASM_BUG()
> 	nop
> 	nop
> alternative_cb_end
>
> and then make kvm_compute_final_ctr_el0() check that we're in protected
> mode before patching. That would be marginally better as that would
> cover _all_ users of read_ctr and not just __flush_dcache_area, but that
> first movz is a bit yuck (but necessary to keep generate_mov_q() happy I
> think?), so I'll leave the decision to you.

Can't say I'm keen on the yucky bit, but here's an alternative (ha!)
for you:

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 1a4cee7eb3c9..7582c3bd2f05 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -278,6 +278,9 @@ alternative_else
 	ldr_l	\reg, arm64_ftr_reg_ctrel0 + ARM64_FTR_SYSVAL
 alternative_endif
 #else
+alternative_if_not ARM64_KVM_PROTECTED_MODE
+	ASM_BUG()
+alternative_else_nop_endif
 alternative_cb kvm_compute_final_ctr_el0
 	movz	\reg, #0
 	movk	\reg, #0, lsl #16

Yes, it is one more instruction, but it is cleaner and allows us to
drop the first patch of the series.

What do you think?

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply related	[flat|nested] 7+ messages in thread

* Re: [PATCH 2/3] KVM: arm64: Generate final CTR_EL0 value when running in Protected mode
  2021-03-22 18:37     ` Marc Zyngier
@ 2021-03-23  9:47       ` Quentin Perret
  0 siblings, 0 replies; 7+ messages in thread
From: Quentin Perret @ 2021-03-23  9:47 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: catalin.marinas, james.morse, julien.thierry.kdev,
	suzuki.poulose, android-kvm, seanjc, mate.toth-pal, linux-kernel,
	linux-arm-kernel, kernel-team, kvmarm, tabba, ardb, mark.rutland,
	dbrazdil

Hi Marc,

On Monday 22 Mar 2021 at 18:37:14 (+0000), Marc Zyngier wrote:
> Can't say I'm keen on the yucky bit, but here's an alternative (ha!)
> for you:
> 
> diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
> index 1a4cee7eb3c9..7582c3bd2f05 100644
> --- a/arch/arm64/include/asm/assembler.h
> +++ b/arch/arm64/include/asm/assembler.h
> @@ -278,6 +278,9 @@ alternative_else
>  	ldr_l	\reg, arm64_ftr_reg_ctrel0 + ARM64_FTR_SYSVAL
>  alternative_endif
>  #else
> +alternative_if_not ARM64_KVM_PROTECTED_MODE
> +	ASM_BUG()
> +alternative_else_nop_endif
>  alternative_cb kvm_compute_final_ctr_el0
>  	movz	\reg, #0
>  	movk	\reg, #0, lsl #16
> 
> Yes, it is one more instruction, but it is cleaner and allows us to
> from the first patch of the series.
> 
> What do you think?

Yes, I think having the ASM_BUG() in this macro is a bit nicer and I doubt
the additional nop will make any difference, so this is looking good to
me!

Thanks,
Quentin

^ permalink raw reply	[flat|nested] 7+ messages in thread

end of thread, other threads:[~2021-03-23  9:47 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-03-22 16:48 [PATCH 0/3] KVM: arm64: Proposed host stage-2 improvements Marc Zyngier
2021-03-22 16:48 ` [PATCH 1/3] KVM: arm64: Constrain KVM's own __flush_dcache_area to protected mode Marc Zyngier
2021-03-22 16:48 ` [PATCH 2/3] KVM: arm64: Generate final CTR_EL0 value when running in Protected mode Marc Zyngier
2021-03-22 17:40   ` Quentin Perret
2021-03-22 18:37     ` Marc Zyngier
2021-03-23  9:47       ` Quentin Perret
2021-03-22 16:48 ` [PATCH 3/3] KVM: arm64: Drop the CPU_FTR_REG_HYP_COPY infrastructure Marc Zyngier
