* [PATCH 0/2] KVM: arm64: Drop support for VPIPT i-cache policy
@ 2023-01-13 17:25 Marc Zyngier
  2023-01-13 17:25 ` [PATCH 1/2] KVM: arm64: Disable KVM on systems with a VPIPT i-cache Marc Zyngier
  2023-01-13 17:25 ` [PATCH 2/2] KVM: arm64: Remove VPIPT I-cache handling Marc Zyngier
  0 siblings, 2 replies; 5+ messages in thread
From: Marc Zyngier @ 2023-01-13 17:25 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm
  Cc: Will Deacon, Catalin Marinas, Mark Rutland, James Morse,
	Suzuki K Poulose, Oliver Upton, Zenghui Yu

ARMv8.2 introduced support for VPIPT i-caches, the V standing for
VMID-tagged. Although this looks like a reasonable idea, no
implementation has ever made it into the wild.
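
For reference, the L1 i-cache indexing policy is advertised to software
in CTR_EL0.L1Ip (bits [15:14]): 0b00 VPIPT, 0b01 AIVIVT (reserved),
0b10 VIPT, 0b11 PIPT. Purely as an illustration (not part of this
series), a minimal aarch64 user-space sketch that decodes the field
could look like this; Linux traps and emulates the CTR_EL0 access from
EL0 where needed:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t ctr;
	/* Arm ARM encodings for CTR_EL0.L1Ip, bits [15:14] */
	static const char *policy[] = {
		"VPIPT", "AIVIVT (reserved)", "VIPT", "PIPT"
	};

	/* Read the cache type register (emulated by the kernel if trapped) */
	asm volatile("mrs %0, ctr_el0" : "=r"(ctr));
	printf("L1 i-cache policy: %s\n", policy[(ctr >> 14) & 0x3]);
	return 0;
}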

Linux has supported this for almost 6 years (amusingly, just as the
architecture was dropping support for AVIVT i-caches), but we have no
way to even test it, and it is likely that this code is just
bit-rotting. This isn't great.

Since this only impacts KVM, let's drop the support from the
hypervisor. The kernel itself can still work as a guest on such a
system, assuming that there is HW and a hypervisor that supports this
architecture variation.

On the other hand, if you are aware of such an implementation and can
actively test KVM on this setup, please shout!

	M.

Marc Zyngier (2):
  KVM: arm64: Disable KVM on systems with a VPIPT i-cache
  KVM: arm64: Remove VPIPT I-cache handling

 arch/arm64/include/asm/kvm_mmu.h |  3 +--
 arch/arm64/kvm/arm.c             |  5 +++++
 arch/arm64/kvm/hyp/nvhe/tlb.c    | 35 --------------------------------
 arch/arm64/kvm/hyp/vhe/tlb.c     | 13 ------------
 4 files changed, 6 insertions(+), 50 deletions(-)

-- 
2.34.1



* [PATCH 1/2] KVM: arm64: Disable KVM on systems with a VPIPT i-cache
  2023-01-13 17:25 [PATCH 0/2] KVM: arm64: Drop support for VPIPT i-cache policy Marc Zyngier
@ 2023-01-13 17:25 ` Marc Zyngier
  2023-01-20 10:14   ` Will Deacon
  2023-01-13 17:25 ` [PATCH 2/2] KVM: arm64: Remove VPIPT I-cache handling Marc Zyngier
  1 sibling, 1 reply; 5+ messages in thread
From: Marc Zyngier @ 2023-01-13 17:25 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm
  Cc: Will Deacon, Catalin Marinas, Mark Rutland, James Morse,
	Suzuki K Poulose, Oliver Upton, Zenghui Yu

Systems with a VMID-tagged PIPT i-cache have been supported for
a while by Linux and KVM. However, these systems never appeared
on our side of the multiverse.

Refuse to initialise KVM on such a machine, should they ever appear.
Following changes will drop the support from the hypervisor.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/arm.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 9c5573bc4614..508deed213a2 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -2195,6 +2195,11 @@ int kvm_arch_init(void *opaque)
 	int err;
 	bool in_hyp_mode;
 
+	if (icache_is_vpipt()) {
+		kvm_info("Incompatible VPIPT I-Cache policy\n");
+		return -ENODEV;
+	}
+
 	if (!is_hyp_mode_available()) {
 		kvm_info("HYP mode not available\n");
 		return -ENODEV;
-- 
2.34.1



* [PATCH 2/2] KVM: arm64: Remove VPIPT I-cache handling
  2023-01-13 17:25 [PATCH 0/2] KVM: arm64: Drop support for VPIPT i-cache policy Marc Zyngier
  2023-01-13 17:25 ` [PATCH 1/2] KVM: arm64: Disable KVM on systems with a VPIPT i-cache Marc Zyngier
@ 2023-01-13 17:25 ` Marc Zyngier
  1 sibling, 0 replies; 5+ messages in thread
From: Marc Zyngier @ 2023-01-13 17:25 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm
  Cc: Will Deacon, Catalin Marinas, Mark Rutland, James Morse,
	Suzuki K Poulose, Oliver Upton, Zenghui Yu

We have some special handling for VPIPT I-cache in critical parts
of the cache and TLB maintenance. Remove it.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_mmu.h |  3 +--
 arch/arm64/kvm/hyp/nvhe/tlb.c    | 35 --------------------------------
 arch/arm64/kvm/hyp/vhe/tlb.c     | 13 ------------
 3 files changed, 1 insertion(+), 50 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index e4a7e6369499..e79a37e22801 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -214,8 +214,7 @@ static inline void __invalidate_icache_guest_page(void *va, size_t size)
 	if (icache_is_aliasing()) {
 		/* any kind of VIPT cache */
 		icache_inval_all_pou();
-	} else if (is_kernel_in_hyp_mode() || !icache_is_vpipt()) {
-		/* PIPT or VPIPT at EL2 (see comment in __kvm_tlb_flush_vmid_ipa) */
+	} else {
 		icache_inval_pou((unsigned long)va, (unsigned long)va + size);
 	}
 }
diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
index d296d617f589..291789df24e3 100644
--- a/arch/arm64/kvm/hyp/nvhe/tlb.c
+++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
@@ -84,28 +84,6 @@ void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
 	dsb(ish);
 	isb();
 
-	/*
-	 * If the host is running at EL1 and we have a VPIPT I-cache,
-	 * then we must perform I-cache maintenance at EL2 in order for
-	 * it to have an effect on the guest. Since the guest cannot hit
-	 * I-cache lines allocated with a different VMID, we don't need
-	 * to worry about junk out of guest reset (we nuke the I-cache on
-	 * VMID rollover), but we do need to be careful when remapping
-	 * executable pages for the same guest. This can happen when KSM
-	 * takes a CoW fault on an executable page, copies the page into
-	 * a page that was previously mapped in the guest and then needs
-	 * to invalidate the guest view of the I-cache for that page
-	 * from EL1. To solve this, we invalidate the entire I-cache when
-	 * unmapping a page from a guest if we have a VPIPT I-cache but
-	 * the host is running at EL1. As above, we could do better if
-	 * we had the VA.
-	 *
-	 * The moral of this story is: if you have a VPIPT I-cache, then
-	 * you should be running with VHE enabled.
-	 */
-	if (icache_is_vpipt())
-		icache_inval_all_pou();
-
 	__tlb_switch_to_host(&cxt);
 }
 
@@ -144,18 +122,5 @@ void __kvm_flush_vm_context(void)
 {
 	dsb(ishst);
 	__tlbi(alle1is);
-
-	/*
-	 * VIPT and PIPT caches are not affected by VMID, so no maintenance
-	 * is necessary across a VMID rollover.
-	 *
-	 * VPIPT caches constrain lookup and maintenance to the active VMID,
-	 * so we need to invalidate lines with a stale VMID to avoid an ABA
-	 * race after multiple rollovers.
-	 *
-	 */
-	if (icache_is_vpipt())
-		asm volatile("ic ialluis");
-
 	dsb(ish);
 }
diff --git a/arch/arm64/kvm/hyp/vhe/tlb.c b/arch/arm64/kvm/hyp/vhe/tlb.c
index 24cef9b87f9e..fc3fcd29ccc3 100644
--- a/arch/arm64/kvm/hyp/vhe/tlb.c
+++ b/arch/arm64/kvm/hyp/vhe/tlb.c
@@ -146,18 +146,5 @@ void __kvm_flush_vm_context(void)
 {
 	dsb(ishst);
 	__tlbi(alle1is);
-
-	/*
-	 * VIPT and PIPT caches are not affected by VMID, so no maintenance
-	 * is necessary across a VMID rollover.
-	 *
-	 * VPIPT caches constrain lookup and maintenance to the active VMID,
-	 * so we need to invalidate lines with a stale VMID to avoid an ABA
-	 * race after multiple rollovers.
-	 *
-	 */
-	if (icache_is_vpipt())
-		asm volatile("ic ialluis");
-
 	dsb(ish);
 }
-- 
2.34.1



* Re: [PATCH 1/2] KVM: arm64: Disable KVM on systems with a VPIPT i-cache
  2023-01-13 17:25 ` [PATCH 1/2] KVM: arm64: Disable KVM on systems with a VPIPT i-cache Marc Zyngier
@ 2023-01-20 10:14   ` Will Deacon
  2023-01-20 11:49     ` Marc Zyngier
  0 siblings, 1 reply; 5+ messages in thread
From: Will Deacon @ 2023-01-20 10:14 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, kvm, Catalin Marinas, Mark Rutland,
	James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu

On Fri, Jan 13, 2023 at 05:25:22PM +0000, Marc Zyngier wrote:
> Systems with a VMID-tagged PIPT i-cache have been supported for
> a while by Linux and KVM. However, these systems never appeared
> on our side of the multiverse.
> 
> Refuse to initialise KVM on such a machine, should they ever appear.
> Following changes will drop the support from the hypervisor.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/arm.c | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 9c5573bc4614..508deed213a2 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -2195,6 +2195,11 @@ int kvm_arch_init(void *opaque)
>  	int err;
>  	bool in_hyp_mode;
>  
> +	if (icache_is_vpipt()) {
> +		kvm_info("Incompatible VPIPT I-Cache policy\n");
> +		return -ENODEV;
> +	}

Hmm, does this work properly with late CPU onlining? For example, if my set
of boot CPUs are all friendly PIPT and KVM initialises happily, but then I
late online a CPU with a horrible VPIPT policy, I worry that we'll quietly
do the wrong thing wrt maintenance.

If that's the case, then arguably we already have a bug in the cases where
we trap and emulate accesses to CTR_EL0 from userspace because I _think_
we'll change the L1Ip field at runtime after userspace could've already read
it.

Is there something that stops us from ending up in this situation?

Will


* Re: [PATCH 1/2] KVM: arm64: Disable KVM on systems with a VPIPT i-cache
  2023-01-20 10:14   ` Will Deacon
@ 2023-01-20 11:49     ` Marc Zyngier
  0 siblings, 0 replies; 5+ messages in thread
From: Marc Zyngier @ 2023-01-20 11:49 UTC (permalink / raw)
  To: Will Deacon, Suzuki K Poulose
  Cc: kvmarm, linux-arm-kernel, kvm, Catalin Marinas, Mark Rutland,
	James Morse, Oliver Upton, Zenghui Yu

On Fri, 20 Jan 2023 10:14:16 +0000,
Will Deacon <will@kernel.org> wrote:
> 
> On Fri, Jan 13, 2023 at 05:25:22PM +0000, Marc Zyngier wrote:
> > Systems with a VMID-tagged PIPT i-cache have been supported for
> > a while by Linux and KVM. However, these systems never appeared
> > on our side of the multiverse.
> > 
> > Refuse to initialise KVM on such a machine, should they ever appear.
> > Following changes will drop the support from the hypervisor.
> > 
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/arm64/kvm/arm.c | 5 +++++
> >  1 file changed, 5 insertions(+)
> > 
> > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > index 9c5573bc4614..508deed213a2 100644
> > --- a/arch/arm64/kvm/arm.c
> > +++ b/arch/arm64/kvm/arm.c
> > @@ -2195,6 +2195,11 @@ int kvm_arch_init(void *opaque)
> >  	int err;
> >  	bool in_hyp_mode;
> >  
> > +	if (icache_is_vpipt()) {
> > +		kvm_info("Incompatible VPIPT I-Cache policy\n");
> > +		return -ENODEV;
> > +	}
> 
> Hmm, does this work properly with late CPU onlining? For example, if my set
> of boot CPUs are all friendly PIPT and KVM initialises happily, but then I
> late online a CPU with a horrible VPIPT policy, I worry that we'll quietly
> do the wrong thing wrt maintenance.

Yup. The problem is what do we do in that case? Apart from preventing
the late onlining itself?

> 
> If that's the case, then arguably we already have a bug in the cases where
> we trap and emulate accesses to CTR_EL0 from userspace because I _think_
> we'll change the L1Ip field at runtime after userspace could've already read
> it.
> 
> Is there something that stops us from ending up in this situation?

Probably not. Userspace will observe the wrong thing, and this applies
to *any* late onlining with a more restrictive cache topology (such as
PIPT -> VIPT). Unclear how the trapping will be engaged on the *other*
CPUs as well...
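
As an illustration only (this is not the actual cpufeature code), the
sanitised value would have to be the most restrictive one across all
CPUs. Assuming the lower L1Ip encodings (VPIPT=0, AIVIVT=1, VIPT=2,
PIPT=3) are the weaker policies, something like:

/*
 * Toy sketch: pick the system-wide "safe" CTR_EL0.L1Ip as the minimum
 * across CPUs. A CPU onlined late with a smaller value changes the
 * result after userspace may already have read the old one.
 */
static unsigned int safe_l1ip(const unsigned int *l1ip, int nr_cpus)
{
	unsigned int safe = 3;	/* PIPT: the strongest guarantee */
	int i;

	for (i = 0; i < nr_cpus; i++)
		if (l1ip[i] < safe)
			safe = l1ip[i];

	return safe;
}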

I've tried to reverse-engineer the cpufeature arrays again, and failed
to find a good solution for this.

Suzuki, what do you think?

	M.

-- 
Without deviation from the norm, progress is not possible.

