kvmarm.lists.cs.columbia.edu archive mirror
* [PATCH 0/2] arm64: add finalized cap helper
From: Mark Rutland @ 2020-02-10 12:27 UTC
  To: linux-arm-kernel; +Cc: catalin.marinas, maz, will, kvmarm

Across arm64 we use cpus_have_const_cap() to check for a capability
without a runtime check. Prior to capabilities being finalized,
cpus_have_const_cap() falls back to a runtime check of the cpu_hwcaps
array. In some cases we know that code is never invoked prior to the
capabilities being finalized, and the fallback code is redundant (and
unsound if ever executed in hyp context).
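
For reference, a minimal sketch of the fallback in question (this is
the shape of the existing helper, per patch 1's diff of
arch/arm64/include/asm/cpufeature.h):

  static __always_inline bool cpus_have_const_cap(int num)
  {
  	/*
  	 * Once caps are finalized this is patched to a static branch
  	 * with no cpu_hwcaps access; before that, it falls back to a
  	 * runtime test of the cpu_hwcaps bitmap.
  	 */
  	if (static_branch_likely(&arm64_const_caps_ready))
  		return __cpus_have_const_cap(num);
  	else
  		return cpus_have_cap(num);
  }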

So that we can avoid the redundant code and detect when the caps are
unexpectedly checked too early, this series adds a new
cpus_have_final_cap() helper, and migrates the KVM hyp code over to it.

I'm hoping to use this as part of the entry.S -> entry-common.c
conversion, and there are other places in arm64 that could potentially
use this today.

Thanks,
Mark.

Mark Rutland (2):
  arm64: cpufeature: add cpus_have_final_cap()
  arm64: kvm: hyp: use cpus_have_final_cap()

 arch/arm64/include/asm/cpufeature.h | 47 +++++++++++++++++++++++++++++++++----
 arch/arm64/kvm/hyp/switch.c         | 14 +++++------
 arch/arm64/kvm/hyp/sysreg-sr.c      |  8 +++----
 arch/arm64/kvm/hyp/tlb.c            |  8 +++----
 4 files changed, 57 insertions(+), 20 deletions(-)

-- 
2.11.0


* [PATCH 1/2] arm64: cpufeature: add cpus_have_final_cap()
From: Mark Rutland @ 2020-02-10 12:27 UTC
  To: linux-arm-kernel; +Cc: catalin.marinas, maz, will, kvmarm

When cpus_have_const_cap() was originally introduced, it was intended to
be safe in hyp context, where it is not safe to access the cpu_hwcaps
array as cpus_have_cap() did. For more details see commit:

  a4023f682739439b ("arm64: Add hypervisor safe helper for checking constant capabilities")

We then made use of cpus_have_const_cap() throughout the kernel.

Subsequently, we had to defer updating the static_key associated with
each capability in order to avoid lockdep complaints. To avoid breaking
kernel-wide usage of cpus_have_const_cap(), this was updated to fall
back to the cpu_hwcaps array if called before the static_keys were
updated. As the kvm hyp code is only called after this point, the
fallback is redundant but not functionally harmful. For more details,
see commit:

  63a1e1c95e60e798 ("arm64/cpufeature: don't use mutex in bringup path")

Today we have more users of cpus_have_const_cap() which are only called
once the relevant static keys are initialized, and it would be
beneficial to avoid the redundant code.

To that end, this patch adds a new cpus_have_final_cap() helper, which
is intended to be used in code that is only run once capabilities have
been finalized, and which will never check the cpu_hwcaps array. This
helps the compiler to generate better code, as it no longer needs to
emit code to address and test the cpu_hwcaps array. To help catch
misuse, cpus_have_final_cap() will BUG() if called before capabilities
are finalized.
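
As a usage sketch (a hypothetical call site; the real conversions are
in the next patch, and ARM64_HAS_RAS_EXTN is just an example cap):

  /*
   * Post-finalization call site: the static branch has been patched,
   * so this compiles down to a direct branch with no load from the
   * cpu_hwcaps array.
   */
  if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN) && (hcr & HCR_VSE))
  	write_sysreg_s(vcpu->arch.vsesr_el2, SYS_VSESR_EL2);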

In hyp context, BUG() will result in a hyp panic, but the specific BUG()
instance will not be identified in the usual way.

Comments are added to the various cpus_have_*_cap() helpers to describe
the constraints on when they can be used. For clarity cpus_have_cap() is
moved above the other helpers.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/cpufeature.h | 47 +++++++++++++++++++++++++++++++++----
 1 file changed, 42 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 92ef9539874a..bf752282bfc0 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -390,21 +390,42 @@ unsigned long cpu_get_elf_hwcap2(void);
 #define cpu_set_named_feature(name) cpu_set_feature(cpu_feature(name))
 #define cpu_have_named_feature(name) cpu_have_feature(cpu_feature(name))
 
-/* System capability check for constant caps */
-static __always_inline bool __cpus_have_const_cap(int num)
+/*
+ * Test for a capability with a runtime check.
+ *
+ * Before the capability is detected, this returns false.
+ */
+static inline bool cpus_have_cap(unsigned int num)
 {
 	if (num >= ARM64_NCAPS)
 		return false;
-	return static_branch_unlikely(&cpu_hwcap_keys[num]);
+	return test_bit(num, cpu_hwcaps);
 }
 
-static inline bool cpus_have_cap(unsigned int num)
+/*
+ * Test for a capability without a runtime check.
+ *
+ * Before capabilities are finalized, this returns false.
+ * After capabilities are finalized, this is patched to avoid a runtime check.
+ *
+ * @num must be a compile-time constant.
+ */
+static __always_inline bool __cpus_have_const_cap(int num)
 {
 	if (num >= ARM64_NCAPS)
 		return false;
-	return test_bit(num, cpu_hwcaps);
+	return static_branch_unlikely(&cpu_hwcap_keys[num]);
 }
 
+
+/*
+ * Test for a capability, possibly with a runtime check.
+ *
+ * Before capabilities are finalized, this behaves as cpus_have_cap().
+ * After capabilities are finalized, this is patched to avoid a runtime check.
+ *
+ * @num must be a compile-time constant.
+ */
 static __always_inline bool cpus_have_const_cap(int num)
 {
 	if (static_branch_likely(&arm64_const_caps_ready))
@@ -413,6 +434,22 @@ static __always_inline bool cpus_have_const_cap(int num)
 		return cpus_have_cap(num);
 }
 
+/*
+ * Test for a capability without a runtime check.
+ *
+ * Before capabilities are finalized, this will BUG().
+ * After capabilities are finalized, this is patched to avoid a runtime check.
+ *
+ * @num must be a compile-time constant.
+ */
+static __always_inline bool cpus_have_final_cap(int num)
+{
+	if (static_branch_likely(&arm64_const_caps_ready))
+		return __cpus_have_const_cap(num);
+	else
+		BUG();
+}
+
 static inline void cpus_set_cap(unsigned int num)
 {
 	if (num >= ARM64_NCAPS) {
-- 
2.11.0


* [PATCH 2/2] arm64: kvm: hyp: use cpus_have_final_cap()
From: Mark Rutland @ 2020-02-10 12:27 UTC
  To: linux-arm-kernel; +Cc: catalin.marinas, maz, will, kvmarm

The KVM hyp code is only run after system capabilities have been
finalized, and thus all const cap checks have been patched. This is
noted in __cpu_init_hyp_mode(), where we BUG() if called too early:

| /*
|  * Call initialization code, and switch to the full blown HYP code.
|  * If the cpucaps haven't been finalized yet, something has gone very
|  * wrong, and hyp will crash and burn when it uses any
|  * cpus_have_const_cap() wrapper.
|  */

Given this, the hyp code can use cpus_have_final_cap() and avoid
generating code to check the cpu_hwcaps array, which would be unsafe to
run in hyp context.

This patch migrates the KVM hyp code to cpus_have_final_cap(), avoiding
this redundant code generation, and making it possible to detect if we
accidentally invoke this code too early. In the latter case, the BUG()
in cpus_have_final_cap() will cause a hyp panic.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kvm/hyp/switch.c    | 14 +++++++-------
 arch/arm64/kvm/hyp/sysreg-sr.c |  8 ++++----
 arch/arm64/kvm/hyp/tlb.c       |  8 ++++----
 3 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index dfe8dd172512..27fcdff08dd6 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -127,7 +127,7 @@ static void __hyp_text __activate_traps_nvhe(struct kvm_vcpu *vcpu)
 
 	write_sysreg(val, cptr_el2);
 
-	if (cpus_have_const_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) {
+	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) {
 		struct kvm_cpu_context *ctxt = &vcpu->arch.ctxt;
 
 		isb();
@@ -146,12 +146,12 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
 {
 	u64 hcr = vcpu->arch.hcr_el2;
 
-	if (cpus_have_const_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM))
+	if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM))
 		hcr |= HCR_TVM;
 
 	write_sysreg(hcr, hcr_el2);
 
-	if (cpus_have_const_cap(ARM64_HAS_RAS_EXTN) && (hcr & HCR_VSE))
+	if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN) && (hcr & HCR_VSE))
 		write_sysreg_s(vcpu->arch.vsesr_el2, SYS_VSESR_EL2);
 
 	if (has_vhe())
@@ -181,7 +181,7 @@ static void __hyp_text __deactivate_traps_nvhe(void)
 {
 	u64 mdcr_el2 = read_sysreg(mdcr_el2);
 
-	if (cpus_have_const_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) {
+	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) {
 		u64 val;
 
 		/*
@@ -328,7 +328,7 @@ static bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu)
 	 * resolve the IPA using the AT instruction.
 	 */
 	if (!(esr & ESR_ELx_S1PTW) &&
-	    (cpus_have_const_cap(ARM64_WORKAROUND_834220) ||
+	    (cpus_have_final_cap(ARM64_WORKAROUND_834220) ||
 	     (esr & ESR_ELx_FSC_TYPE) == FSC_PERM)) {
 		if (!__translate_far_to_hpfar(far, &hpfar))
 			return false;
@@ -498,7 +498,7 @@ static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 	if (*exit_code != ARM_EXCEPTION_TRAP)
 		goto exit;
 
-	if (cpus_have_const_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM) &&
+	if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM) &&
 	    kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_SYS64 &&
 	    handle_tx2_tvm(vcpu))
 		return true;
@@ -555,7 +555,7 @@ static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 
 static inline bool __hyp_text __needs_ssbd_off(struct kvm_vcpu *vcpu)
 {
-	if (!cpus_have_const_cap(ARM64_SSBD))
+	if (!cpus_have_final_cap(ARM64_SSBD))
 		return false;
 
 	return !(vcpu->arch.workaround_flags & VCPU_WORKAROUND_2_FLAG);
diff --git a/arch/arm64/kvm/hyp/sysreg-sr.c b/arch/arm64/kvm/hyp/sysreg-sr.c
index 7672a978926c..75b1925763f1 100644
--- a/arch/arm64/kvm/hyp/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/sysreg-sr.c
@@ -71,7 +71,7 @@ static void __hyp_text __sysreg_save_el2_return_state(struct kvm_cpu_context *ct
 	ctxt->gp_regs.regs.pc		= read_sysreg_el2(SYS_ELR);
 	ctxt->gp_regs.regs.pstate	= read_sysreg_el2(SYS_SPSR);
 
-	if (cpus_have_const_cap(ARM64_HAS_RAS_EXTN))
+	if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN))
 		ctxt->sys_regs[DISR_EL1] = read_sysreg_s(SYS_VDISR_EL2);
 }
 
@@ -118,7 +118,7 @@ static void __hyp_text __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt)
 	write_sysreg(ctxt->sys_regs[MPIDR_EL1],		vmpidr_el2);
 	write_sysreg(ctxt->sys_regs[CSSELR_EL1],	csselr_el1);
 
-	if (!cpus_have_const_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) {
+	if (!cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) {
 		write_sysreg_el1(ctxt->sys_regs[SCTLR_EL1],	SYS_SCTLR);
 		write_sysreg_el1(ctxt->sys_regs[TCR_EL1],	SYS_TCR);
 	} else	if (!ctxt->__hyp_running_vcpu) {
@@ -149,7 +149,7 @@ static void __hyp_text __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt)
 	write_sysreg(ctxt->sys_regs[PAR_EL1],		par_el1);
 	write_sysreg(ctxt->sys_regs[TPIDR_EL1],		tpidr_el1);
 
-	if (cpus_have_const_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE) &&
+	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE) &&
 	    ctxt->__hyp_running_vcpu) {
 		/*
 		 * Must only be done for host registers, hence the context
@@ -194,7 +194,7 @@ __sysreg_restore_el2_return_state(struct kvm_cpu_context *ctxt)
 	write_sysreg_el2(ctxt->gp_regs.regs.pc,		SYS_ELR);
 	write_sysreg_el2(pstate,			SYS_SPSR);
 
-	if (cpus_have_const_cap(ARM64_HAS_RAS_EXTN))
+	if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN))
 		write_sysreg_s(ctxt->sys_regs[DISR_EL1], SYS_VDISR_EL2);
 }
 
diff --git a/arch/arm64/kvm/hyp/tlb.c b/arch/arm64/kvm/hyp/tlb.c
index 92f560e3e1aa..ceaddbe4279f 100644
--- a/arch/arm64/kvm/hyp/tlb.c
+++ b/arch/arm64/kvm/hyp/tlb.c
@@ -23,7 +23,7 @@ static void __hyp_text __tlb_switch_to_guest_vhe(struct kvm *kvm,
 
 	local_irq_save(cxt->flags);
 
-	if (cpus_have_const_cap(ARM64_WORKAROUND_SPECULATIVE_AT_VHE)) {
+	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_VHE)) {
 		/*
 		 * For CPUs that are affected by ARM errata 1165522 or 1530923,
 		 * we cannot trust stage-1 to be in a correct state at that
@@ -63,7 +63,7 @@ static void __hyp_text __tlb_switch_to_guest_vhe(struct kvm *kvm,
 static void __hyp_text __tlb_switch_to_guest_nvhe(struct kvm *kvm,
 						  struct tlb_inv_context *cxt)
 {
-	if (cpus_have_const_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) {
+	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) {
 		u64 val;
 
 		/*
@@ -103,7 +103,7 @@ static void __hyp_text __tlb_switch_to_host_vhe(struct kvm *kvm,
 	write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
 	isb();
 
-	if (cpus_have_const_cap(ARM64_WORKAROUND_SPECULATIVE_AT_VHE)) {
+	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_VHE)) {
 		/* Restore the registers to what they were */
 		write_sysreg_el1(cxt->tcr, SYS_TCR);
 		write_sysreg_el1(cxt->sctlr, SYS_SCTLR);
@@ -117,7 +117,7 @@ static void __hyp_text __tlb_switch_to_host_nvhe(struct kvm *kvm,
 {
 	write_sysreg(0, vttbr_el2);
 
-	if (cpus_have_const_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) {
+	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) {
 		/* Ensure write of the host VMID */
 		isb();
 		/* Restore the host's TCR_EL1 */
-- 
2.11.0


* Re: [PATCH 1/2] arm64: cpufeature: add cpus_have_final_cap()
From: Suzuki Kuruppassery Poulose @ 2020-02-10 16:37 UTC
  To: Mark Rutland, linux-arm-kernel; +Cc: catalin.marinas, maz, will, kvmarm

On 10/02/2020 12:27, Mark Rutland wrote:
> [...]
> 
> +/*
> + * Test for a capability without a runtime check.
> + *
> + * Before capabilities are finalized, this will BUG().
> + * After capabilities are finalized, this is patched to avoid a runtime check.
> + *
> + * @num must be a compile-time constant.
> + */
> +static __always_inline bool cpus_have_final_cap(int num)
> +{
> +	if (static_branch_likely(&arm64_const_caps_ready))

We have introduced the system_capabilities_finalized() helper, and it
may be a good idea to use it here to make this clearer.
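
Something like the following sketch (assuming
system_capabilities_finalized() continues to wrap the same
arm64_const_caps_ready static key):

  static __always_inline bool cpus_have_final_cap(int num)
  {
  	/* Caps finalized: the patched static branch is authoritative. */
  	if (system_capabilities_finalized())
  		return __cpus_have_const_cap(num);
  	else
  		BUG();
  }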

Either way:

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>

* Re: [PATCH 1/2] arm64: cpufeature: add cpus_have_final_cap()
From: Mark Rutland @ 2020-02-10 17:37 UTC
  To: Suzuki Kuruppassery Poulose
  Cc: catalin.marinas, linux-arm-kernel, maz, will, kvmarm

On Mon, Feb 10, 2020 at 04:37:53PM +0000, Suzuki Kuruppassery Poulose wrote:
> On 10/02/2020 12:27, Mark Rutland wrote:
> > [...]
> > 
> > +/*
> > + * Test for a capability without a runtime check.
> > + *
> > + * Before capabilities are finalized, this will BUG().
> > + * After capabilities are finalized, this is patched to avoid a runtime check.
> > + *
> > + * @num must be a compile-time constant.
> > + */
> > +static __always_inline bool cpus_have_final_cap(int num)
> > +{
> > +	if (static_branch_likely(&arm64_const_caps_ready))
> 
> We have introduced the system_capabilities_finalized() helper, and it
> may be a good idea to use it here to make this clearer.

Sure thing. There are a few existing uses that could be moved over, so I
can move that up for v2.

> Either way:
> 
> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>

Thanks!

Mark.

* Re: [PATCH 0/2] arm64: add finalized cap helper
From: Marc Zyngier @ 2020-02-17 14:48 UTC
  To: Mark Rutland; +Cc: catalin.marinas, linux-arm-kernel, will, kvmarm

On 2020-02-10 12:27, Mark Rutland wrote:
> [...]

Seems like a valuable optimization. For the series:

Reviewed-by: Marc Zyngier <maz@kernel.org>

         M.
-- 
Jazz is not dead. It just smells funny...
