linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH 0/4] Fix racing TLBI with ASID/VMID reallocation
@ 2021-08-06 11:31 Will Deacon
  2021-08-06 11:31 ` [PATCH 1/4] arm64: mm: Fix TLBI vs ASID rollover Will Deacon
                   ` (6 more replies)
  0 siblings, 7 replies; 17+ messages in thread
From: Will Deacon @ 2021-08-06 11:31 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: kernel-team, Will Deacon, Catalin Marinas, Marc Zyngier,
	Jade Alglave, Shameer Kolothum, kvmarm, linux-arch

Hi all,

While reviewing Shameer's reworked VMID allocator [1] and discussing
with Marc, we spotted a race between TLB invalidation (which typically
takes an ASID or VMID argument) and reallocation of ASID/VMID for the
context being targeted.

The first patch spells out an example with try_to_unmap_one() in a
comment, which Catalin has kindly modelled in TLA+ at [2].

Although I'm posting all this together for ease of review, the intention
is that the first patch will go via arm64, with the remaining three going
via kvm.

Cheers,

Will

[1] https://lore.kernel.org/r/20210729104009.382-1-shameerali.kolothum.thodi@huawei.com
[2] https://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/kernel-tla.git/commit/

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Jade Alglave <jade.alglave@arm.com>
Cc: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Cc: <kvmarm@lists.cs.columbia.edu>
Cc: <linux-arch@vger.kernel.org>

--->8

Marc Zyngier (3):
  KVM: arm64: Move kern_hyp_va() usage in __load_guest_stage2() into the
    callers
  KVM: arm64: Convert the host S2 over to __load_guest_stage2()
  KVM: arm64: Upgrade VMID accesses to {READ,WRITE}_ONCE

Will Deacon (1):
  arm64: mm: Fix TLBI vs ASID rollover

 arch/arm64/include/asm/kvm_mmu.h              | 17 ++++++-----
 arch/arm64/include/asm/mmu.h                  | 29 ++++++++++++++++---
 arch/arm64/include/asm/tlbflush.h             | 11 +++----
 arch/arm64/kvm/arm.c                          |  2 +-
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  2 +-
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         |  6 ++--
 arch/arm64/kvm/hyp/nvhe/switch.c              |  4 ++-
 arch/arm64/kvm/hyp/nvhe/tlb.c                 |  2 +-
 arch/arm64/kvm/hyp/vhe/switch.c               |  2 +-
 arch/arm64/kvm/hyp/vhe/tlb.c                  |  2 +-
 arch/arm64/kvm/mmu.c                          |  2 +-
 11 files changed, 52 insertions(+), 27 deletions(-)

-- 
2.32.0.605.g8dce9f2422-goog


* [PATCH 1/4] arm64: mm: Fix TLBI vs ASID rollover
  2021-08-06 11:31 [PATCH 0/4] Fix racing TLBI with ASID/VMID reallocation Will Deacon
@ 2021-08-06 11:31 ` Will Deacon
  2021-08-06 11:59   ` Catalin Marinas
  2021-08-06 11:31 ` [PATCH] of: restricted dma: Don't fail device probe on rmem init failure Will Deacon
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 17+ messages in thread
From: Will Deacon @ 2021-08-06 11:31 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: kernel-team, Will Deacon, Catalin Marinas, Marc Zyngier,
	Jade Alglave, Shameer Kolothum, kvmarm, linux-arch, stable

When switching to an 'mm_struct' for the first time following an ASID
rollover, a new ASID may be allocated and assigned to 'mm->context.id'.
This reassignment can happen concurrently with other operations on the
mm, such as unmapping pages and subsequently issuing TLB invalidation.

Consequently, we need to ensure that (a) accesses to 'mm->context.id'
are atomic and (b) all page-table updates made prior to a TLBI using the
old ASID are guaranteed to be visible to CPUs running with the new ASID.

This was found by inspection after reviewing the VMID changes from
Shameer, but it looks like a real (yet hard-to-hit) bug.
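
For reference, here is a heavily simplified sketch of the scheduling path
that performs the racing reassignment (paraphrased from the ASID allocator
in arch/arm64/mm/context.c; locking, the reserved-ASID handling and the
real generation check are omitted, and asid_gen_is_stale() is a stand-in
name):

	void check_and_switch_context(struct mm_struct *mm)
	{
		u64 asid = atomic64_read(&mm->context.id);

		/*
		 * After a rollover the generation encoded in 'asid' is
		 * stale, so a fresh ASID is allocated and published with
		 * an atomic store. That store is what can race with a
		 * concurrent TLBI which has already sampled ASID(mm).
		 */
		if (asid_gen_is_stale(asid)) {
			asid = new_context(mm);
			atomic64_set(&mm->context.id, asid);
		}

		cpu_switch_mm(mm->pgd, mm); /* hardware now walks with the new ASID */
	}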

Cc: <stable@vger.kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Jade Alglave <jade.alglave@arm.com>
Cc: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/mmu.h      | 29 +++++++++++++++++++++++++----
 arch/arm64/include/asm/tlbflush.h | 11 ++++++-----
 2 files changed, 31 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 75beffe2ee8a..e9c30859f80c 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -27,11 +27,32 @@ typedef struct {
 } mm_context_t;
 
 /*
- * This macro is only used by the TLBI and low-level switch_mm() code,
- * neither of which can race with an ASID change. We therefore don't
- * need to reload the counter using atomic64_read().
+ * We use atomic64_read() here because the ASID for an 'mm_struct' can
+ * be reallocated when scheduling one of its threads following a
+ * rollover event (see new_context() and flush_context()). In this case,
+ * a concurrent TLBI (e.g. via try_to_unmap_one() and ptep_clear_flush())
+ * may use a stale ASID. This is fine in principle as the new ASID is
+ * guaranteed to be clean in the TLB, but the TLBI routines have to take
+ * care to handle the following race:
+ *
+ *    CPU 0                    CPU 1                          CPU 2
+ *
+ *    // ptep_clear_flush(mm)
+ *    xchg_relaxed(pte, 0)
+ *    DSB ISHST
+ *    old = ASID(mm)
+ *         |                                                  <rollover>
+ *         |                   new = new_context(mm)
+ *         \-----------------> atomic_set(mm->context.id, new)
+ *                             cpu_switch_mm(mm)
+ *                             // Hardware walk of pte using new ASID
+ *    TLBI(old)
+ *
+ * In this scenario, the barrier on CPU 0 and the dependency on CPU 1
+ * ensure that the page-table walker on CPU 1 *must* see the invalid PTE
+ * written by CPU 0.
  */
-#define ASID(mm)	((mm)->context.id.counter & 0xffff)
+#define ASID(mm)	(atomic64_read(&(mm)->context.id) & 0xffff)
 
 static inline bool arm64_kernel_unmapped_at_el0(void)
 {
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index cc3f5a33ff9c..36f02892e1df 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -245,9 +245,10 @@ static inline void flush_tlb_all(void)
 
 static inline void flush_tlb_mm(struct mm_struct *mm)
 {
-	unsigned long asid = __TLBI_VADDR(0, ASID(mm));
+	unsigned long asid;
 
 	dsb(ishst);
+	asid = __TLBI_VADDR(0, ASID(mm));
 	__tlbi(aside1is, asid);
 	__tlbi_user(aside1is, asid);
 	dsb(ish);
@@ -256,9 +257,10 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
 static inline void flush_tlb_page_nosync(struct vm_area_struct *vma,
 					 unsigned long uaddr)
 {
-	unsigned long addr = __TLBI_VADDR(uaddr, ASID(vma->vm_mm));
+	unsigned long addr;
 
 	dsb(ishst);
+	addr = __TLBI_VADDR(uaddr, ASID(vma->vm_mm));
 	__tlbi(vale1is, addr);
 	__tlbi_user(vale1is, addr);
 }
@@ -283,9 +285,7 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
 {
 	int num = 0;
 	int scale = 0;
-	unsigned long asid = ASID(vma->vm_mm);
-	unsigned long addr;
-	unsigned long pages;
+	unsigned long asid, addr, pages;
 
 	start = round_down(start, stride);
 	end = round_up(end, stride);
@@ -305,6 +305,7 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
 	}
 
 	dsb(ishst);
+	asid = ASID(vma->vm_mm);
 
 	/*
 	 * When the CPU does not support TLB range operations, flush the TLB
-- 
2.32.0.605.g8dce9f2422-goog


* [PATCH] of: restricted dma: Don't fail device probe on rmem init failure
  2021-08-06 11:31 [PATCH 0/4] Fix racing TLBI with ASID/VMID reallocation Will Deacon
  2021-08-06 11:31 ` [PATCH 1/4] arm64: mm: Fix TLBI vs ASID rollover Will Deacon
@ 2021-08-06 11:31 ` Will Deacon
  2021-08-06 11:34   ` Will Deacon
  2021-08-06 11:31 ` [PATCH 2/4] KVM: arm64: Move kern_hyp_va() usage in __load_guest_stage2() into the callers Will Deacon
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 17+ messages in thread
From: Will Deacon @ 2021-08-06 11:31 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: kernel-team, Will Deacon, Catalin Marinas, Marc Zyngier,
	Jade Alglave, Shameer Kolothum, kvmarm, linux-arch, Claire Chang,
	Konrad Rzeszutek Wilk, Robin Murphy, Christoph Hellwig,
	Rob Herring

If CONFIG_DMA_RESTRICTED_POOL=n then probing a device with a reference
to a "restricted-dma-pool" will fail with a reasonably cryptic error:

  | pci-host-generic: probe of 10000.pci failed with error -22

Print a more helpful message in this case and try to continue probing
the device, as we do when the kernel doesn't have the restricted DMA
patches applied or when either CONFIG_OF_ADDRESS or CONFIG_HAS_DMA is
disabled (=n).

Cc: Claire Chang <tientzu@chromium.org>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Rob Herring <robh+dt@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
---
 drivers/of/address.c    | 8 ++++----
 drivers/of/device.c     | 2 +-
 drivers/of/of_private.h | 8 +++-----
 3 files changed, 8 insertions(+), 10 deletions(-)

diff --git a/drivers/of/address.c b/drivers/of/address.c
index 973257434398..f6bf4b423c2a 100644
--- a/drivers/of/address.c
+++ b/drivers/of/address.c
@@ -997,7 +997,7 @@ int of_dma_get_range(struct device_node *np, const struct bus_dma_region **map)
 	return ret;
 }
 
-int of_dma_set_restricted_buffer(struct device *dev, struct device_node *np)
+void of_dma_set_restricted_buffer(struct device *dev, struct device_node *np)
 {
 	struct device_node *node, *of_node = dev->of_node;
 	int count, i;
@@ -1022,11 +1022,11 @@ int of_dma_set_restricted_buffer(struct device *dev, struct device_node *np)
 		 */
 		if (of_device_is_compatible(node, "restricted-dma-pool") &&
 		    of_device_is_available(node))
-			return of_reserved_mem_device_init_by_idx(dev, of_node,
-								  i);
+			break;
 	}
 
-	return 0;
+	if (i != count && of_reserved_mem_device_init_by_idx(dev, of_node, i))
+		dev_warn(dev, "failed to initialise \"restricted-dma-pool\" memory node\n");
 }
 #endif /* CONFIG_HAS_DMA */
 
diff --git a/drivers/of/device.c b/drivers/of/device.c
index 2defdca418ec..258a2b099410 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -166,7 +166,7 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
 	arch_setup_dma_ops(dev, dma_start, size, iommu, coherent);
 
 	if (!iommu)
-		return of_dma_set_restricted_buffer(dev, np);
+		of_dma_set_restricted_buffer(dev, np);
 
 	return 0;
 }
diff --git a/drivers/of/of_private.h b/drivers/of/of_private.h
index f557bd22b0cf..bc883f69496b 100644
--- a/drivers/of/of_private.h
+++ b/drivers/of/of_private.h
@@ -163,18 +163,16 @@ struct bus_dma_region;
 #if defined(CONFIG_OF_ADDRESS) && defined(CONFIG_HAS_DMA)
 int of_dma_get_range(struct device_node *np,
 		const struct bus_dma_region **map);
-int of_dma_set_restricted_buffer(struct device *dev, struct device_node *np);
+void of_dma_set_restricted_buffer(struct device *dev, struct device_node *np);
 #else
 static inline int of_dma_get_range(struct device_node *np,
 		const struct bus_dma_region **map)
 {
 	return -ENODEV;
 }
-static inline int of_dma_set_restricted_buffer(struct device *dev,
-					       struct device_node *np)
+static inline void of_dma_set_restricted_buffer(struct device *dev,
+						struct device_node *np)
 {
-	/* Do nothing, successfully. */
-	return 0;
 }
 #endif
 
-- 
2.32.0.605.g8dce9f2422-goog


* [PATCH 2/4] KVM: arm64: Move kern_hyp_va() usage in __load_guest_stage2() into the callers
  2021-08-06 11:31 [PATCH 0/4] Fix racing TLBI with ASID/VMID reallocation Will Deacon
  2021-08-06 11:31 ` [PATCH 1/4] arm64: mm: Fix TLBI vs ASID rollover Will Deacon
  2021-08-06 11:31 ` [PATCH] of: restricted dma: Don't fail device probe on rmem init failure Will Deacon
@ 2021-08-06 11:31 ` Will Deacon
  2021-08-06 11:31 ` [PATCH 3/4] KVM: arm64: Convert the host S2 over to __load_guest_stage2() Will Deacon
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 17+ messages in thread
From: Will Deacon @ 2021-08-06 11:31 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: kernel-team, Will Deacon, Catalin Marinas, Marc Zyngier,
	Jade Alglave, Shameer Kolothum, kvmarm, linux-arch

From: Marc Zyngier <maz@kernel.org>

It is a bit awkward to use kern_hyp_va() in __load_guest_stage2(),
especially as the helper is shared between VHE and nVHE.

Instead, move the use of kern_hyp_va() into the nVHE code and pass a
pointer to the kvm->arch structure. Although this may look a bit
awkward, it allows for some further simplification.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Jade Alglave <jade.alglave@arm.com>
Cc: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/kvm_mmu.h | 5 +++--
 arch/arm64/kvm/hyp/nvhe/switch.c | 4 +++-
 arch/arm64/kvm/hyp/nvhe/tlb.c    | 2 +-
 arch/arm64/kvm/hyp/vhe/switch.c  | 2 +-
 arch/arm64/kvm/hyp/vhe/tlb.c     | 2 +-
 5 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index b52c5c4b9a3d..05e089653a1a 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -280,9 +280,10 @@ static __always_inline void __load_stage2(struct kvm_s2_mmu *mmu, unsigned long
 	asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));
 }
 
-static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu)
+static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu,
+						struct kvm_arch *arch)
 {
-	__load_stage2(mmu, kern_hyp_va(mmu->arch)->vtcr);
+	__load_stage2(mmu, arch->vtcr);
 }
 
 static inline struct kvm *kvm_s2_mmu_to_kvm(struct kvm_s2_mmu *mmu)
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index f7af9688c1f7..e50a49082923 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -170,6 +170,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_cpu_context *guest_ctxt;
+	struct kvm_s2_mmu *mmu;
 	bool pmu_switch_needed;
 	u64 exit_code;
 
@@ -213,7 +214,8 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	__sysreg32_restore_state(vcpu);
 	__sysreg_restore_state_nvhe(guest_ctxt);
 
-	__load_guest_stage2(kern_hyp_va(vcpu->arch.hw_mmu));
+	mmu = kern_hyp_va(vcpu->arch.hw_mmu);
+	__load_guest_stage2(mmu, kern_hyp_va(mmu->arch));
 	__activate_traps(vcpu);
 
 	__hyp_vgic_restore_state(vcpu);
diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
index 38ed0f6f2703..76229407d8f0 100644
--- a/arch/arm64/kvm/hyp/nvhe/tlb.c
+++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
@@ -39,7 +39,7 @@ static void __tlb_switch_to_guest(struct kvm_s2_mmu *mmu,
 	 * ensuring that we always have an ISB, but not two ISBs back
 	 * to back.
 	 */
-	__load_guest_stage2(mmu);
+	__load_guest_stage2(mmu, kern_hyp_va(mmu->arch));
 	asm(ALTERNATIVE("isb", "nop", ARM64_WORKAROUND_SPECULATIVE_AT));
 }
 
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index b3229924d243..0cb7523a501a 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -128,7 +128,7 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 	 * __load_guest_stage2 configures stage 2 translation, and
 	 * __activate_traps clear HCR_EL2.TGE (among other things).
 	 */
-	__load_guest_stage2(vcpu->arch.hw_mmu);
+	__load_guest_stage2(vcpu->arch.hw_mmu, vcpu->arch.hw_mmu->arch);
 	__activate_traps(vcpu);
 
 	__kvm_adjust_pc(vcpu);
diff --git a/arch/arm64/kvm/hyp/vhe/tlb.c b/arch/arm64/kvm/hyp/vhe/tlb.c
index 66f17349f0c3..5e9fb3989e0b 100644
--- a/arch/arm64/kvm/hyp/vhe/tlb.c
+++ b/arch/arm64/kvm/hyp/vhe/tlb.c
@@ -53,7 +53,7 @@ static void __tlb_switch_to_guest(struct kvm_s2_mmu *mmu,
 	 * place before clearing TGE. __load_guest_stage2() already
 	 * has an ISB in order to deal with this.
 	 */
-	__load_guest_stage2(mmu);
+	__load_guest_stage2(mmu, mmu->arch);
 	val = read_sysreg(hcr_el2);
 	val &= ~HCR_TGE;
 	write_sysreg(val, hcr_el2);
-- 
2.32.0.605.g8dce9f2422-goog


* [PATCH 3/4] KVM: arm64: Convert the host S2 over to __load_guest_stage2()
  2021-08-06 11:31 [PATCH 0/4] Fix racing TLBI with ASID/VMID reallocation Will Deacon
                   ` (2 preceding siblings ...)
  2021-08-06 11:31 ` [PATCH 2/4] KVM: arm64: Move kern_hyp_va() usage in __load_guest_stage2() into the callers Will Deacon
@ 2021-08-06 11:31 ` Will Deacon
  2021-08-06 13:40   ` Quentin Perret
  2021-08-06 11:31 ` [PATCH 4/4] KVM: arm64: Upgrade VMID accesses to {READ,WRITE}_ONCE Will Deacon
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 17+ messages in thread
From: Will Deacon @ 2021-08-06 11:31 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: kernel-team, Will Deacon, Catalin Marinas, Marc Zyngier,
	Jade Alglave, Shameer Kolothum, kvmarm, linux-arch

From: Marc Zyngier <maz@kernel.org>

The protected mode relies on a separate helper to load the
S2 context. Move over to the __load_guest_stage2() helper
instead.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Jade Alglave <jade.alglave@arm.com>
Cc: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/kvm_mmu.h              | 11 +++--------
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  2 +-
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         |  2 +-
 3 files changed, 5 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 05e089653a1a..934ef0deff9f 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -267,9 +267,10 @@ static __always_inline u64 kvm_get_vttbr(struct kvm_s2_mmu *mmu)
  * Must be called from hyp code running at EL2 with an updated VTTBR
  * and interrupts disabled.
  */
-static __always_inline void __load_stage2(struct kvm_s2_mmu *mmu, unsigned long vtcr)
+static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu,
+						struct kvm_arch *arch)
 {
-	write_sysreg(vtcr, vtcr_el2);
+	write_sysreg(arch->vtcr, vtcr_el2);
 	write_sysreg(kvm_get_vttbr(mmu), vttbr_el2);
 
 	/*
@@ -280,12 +281,6 @@ static __always_inline void __load_stage2(struct kvm_s2_mmu *mmu, unsigned long
 	asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));
 }
 
-static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu,
-						struct kvm_arch *arch)
-{
-	__load_stage2(mmu, arch->vtcr);
-}
-
 static inline struct kvm *kvm_s2_mmu_to_kvm(struct kvm_s2_mmu *mmu)
 {
 	return container_of(mmu->arch, struct kvm, arch);
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 9c227d87c36d..a910648bc71b 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -29,7 +29,7 @@ void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt);
 static __always_inline void __load_host_stage2(void)
 {
 	if (static_branch_likely(&kvm_protected_mode_initialized))
-		__load_stage2(&host_kvm.arch.mmu, host_kvm.arch.vtcr);
+		__load_guest_stage2(&host_kvm.arch.mmu, &host_kvm.arch);
 	else
 		write_sysreg(0, vttbr_el2);
 }
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index d938ce95d3bd..d4e74ca7f876 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -126,7 +126,7 @@ int __pkvm_prot_finalize(void)
 	kvm_flush_dcache_to_poc(params, sizeof(*params));
 
 	write_sysreg(params->hcr_el2, hcr_el2);
-	__load_stage2(&host_kvm.arch.mmu, host_kvm.arch.vtcr);
+	__load_guest_stage2(&host_kvm.arch.mmu, &host_kvm.arch);
 
 	/*
 	 * Make sure to have an ISB before the TLB maintenance below but only
-- 
2.32.0.605.g8dce9f2422-goog


* [PATCH 4/4] KVM: arm64: Upgrade VMID accesses to {READ,WRITE}_ONCE
  2021-08-06 11:31 [PATCH 0/4] Fix racing TLBI with ASID/VMID reallocation Will Deacon
                   ` (3 preceding siblings ...)
  2021-08-06 11:31 ` [PATCH 3/4] KVM: arm64: Convert the host S2 over to __load_guest_stage2() Will Deacon
@ 2021-08-06 11:31 ` Will Deacon
  2021-08-06 14:24   ` Quentin Perret
  2021-08-06 16:04 ` (subset) [PATCH 0/4] Fix racing TLBI with ASID/VMID reallocation Catalin Marinas
  2021-09-10  9:06 ` Shameerali Kolothum Thodi
  6 siblings, 1 reply; 17+ messages in thread
From: Will Deacon @ 2021-08-06 11:31 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: kernel-team, Will Deacon, Catalin Marinas, Marc Zyngier,
	Jade Alglave, Shameer Kolothum, kvmarm, linux-arch

From: Marc Zyngier <maz@kernel.org>

Since TLB invalidation can run in parallel with VMID allocation,
we need to be careful and avoid any sort of load/store tearing.
Use {READ,WRITE}_ONCE consistently to avoid any surprise.
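
As background, a minimal illustration (not kernel code) of the access
pattern being enforced; the plain u64 here merely stands in for the
kvm_vmid fields touched below:

	/* Writer (VMID allocation) and reader (TLB invalidation) may run in parallel. */
	static void publish_vmid(u64 *vmid, u64 next)
	{
		WRITE_ONCE(*vmid, next);	/* one store the compiler may not split or repeat */
	}

	static u64 sample_vmid(u64 *vmid)
	{
		return READ_ONCE(*vmid);	/* one load; a plain read could be torn or re-read */
	}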

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Jade Alglave <jade.alglave@arm.com>
Cc: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/kvm_mmu.h      | 7 ++++++-
 arch/arm64/kvm/arm.c                  | 2 +-
 arch/arm64/kvm/hyp/nvhe/mem_protect.c | 4 ++--
 arch/arm64/kvm/mmu.c                  | 2 +-
 4 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 934ef0deff9f..5828dd8fa738 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -252,6 +252,11 @@ static inline int kvm_write_guest_lock(struct kvm *kvm, gpa_t gpa,
 
 #define kvm_phys_to_vttbr(addr)		phys_to_ttbr(addr)
 
+/*
+ * When this is (directly or indirectly) used on the TLB invalidation
+ * path, we rely on a previously issued DSB so that page table updates
+ * and VMID reads are correctly ordered.
+ */
 static __always_inline u64 kvm_get_vttbr(struct kvm_s2_mmu *mmu)
 {
 	struct kvm_vmid *vmid = &mmu->vmid;
@@ -259,7 +264,7 @@ static __always_inline u64 kvm_get_vttbr(struct kvm_s2_mmu *mmu)
 	u64 cnp = system_supports_cnp() ? VTTBR_CNP_BIT : 0;
 
 	baddr = mmu->pgd_phys;
-	vmid_field = (u64)vmid->vmid << VTTBR_VMID_SHIFT;
+	vmid_field = (u64)READ_ONCE(vmid->vmid) << VTTBR_VMID_SHIFT;
 	return kvm_phys_to_vttbr(baddr) | vmid_field | cnp;
 }
 
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index e9a2b8f27792..658f76067f46 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -571,7 +571,7 @@ static void update_vmid(struct kvm_vmid *vmid)
 		kvm_call_hyp(__kvm_flush_vm_context);
 	}
 
-	vmid->vmid = kvm_next_vmid;
+	WRITE_ONCE(vmid->vmid, kvm_next_vmid);
 	kvm_next_vmid++;
 	kvm_next_vmid &= (1 << kvm_get_vmid_bits()) - 1;
 
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index d4e74ca7f876..55ae97a144b8 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -109,8 +109,8 @@ int kvm_host_prepare_stage2(void *pgt_pool_base)
 	mmu->pgd_phys = __hyp_pa(host_kvm.pgt.pgd);
 	mmu->arch = &host_kvm.arch;
 	mmu->pgt = &host_kvm.pgt;
-	mmu->vmid.vmid_gen = 0;
-	mmu->vmid.vmid = 0;
+	WRITE_ONCE(mmu->vmid.vmid_gen, 0);
+	WRITE_ONCE(mmu->vmid.vmid, 0);
 
 	return 0;
 }
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 3155c9e778f0..b1a6eaec28ff 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -485,7 +485,7 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu)
 	mmu->arch = &kvm->arch;
 	mmu->pgt = pgt;
 	mmu->pgd_phys = __pa(pgt->pgd);
-	mmu->vmid.vmid_gen = 0;
+	WRITE_ONCE(mmu->vmid.vmid_gen, 0);
 	return 0;
 
 out_destroy_pgtable:
-- 
2.32.0.605.g8dce9f2422-goog


* Re: [PATCH] of: restricted dma: Don't fail device probe on rmem init failure
  2021-08-06 11:31 ` [PATCH] of: restricted dma: Don't fail device probe on rmem init failure Will Deacon
@ 2021-08-06 11:34   ` Will Deacon
  0 siblings, 0 replies; 17+ messages in thread
From: Will Deacon @ 2021-08-06 11:34 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: kernel-team, Catalin Marinas, Marc Zyngier, Jade Alglave,
	Shameer Kolothum, kvmarm, linux-arch, Claire Chang,
	Konrad Rzeszutek Wilk, Robin Murphy, Christoph Hellwig,
	Rob Herring

On Fri, Aug 06, 2021 at 12:31:05PM +0100, Will Deacon wrote:
> If CONFIG_DMA_RESTRICTED_POOL=n then probing a device with a reference
> to a "restricted-dma-pool" will fail with a reasonably cryptic error:
> 
>   | pci-host-generic: probe of 10000.pci failed with error -22
> 
> Print a more helpful message in this case and try to continue probing
> the device as we do if the kernel doesn't have the restricted DMA patches
> applied or either CONFIG_OF_ADDRESS or CONFIG_HAS_DMA =n.
> 
> Cc: Claire Chang <tientzu@chromium.org>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: Robin Murphy <robin.murphy@arm.com>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Rob Herring <robh+dt@kernel.org>
> Signed-off-by: Will Deacon <will@kernel.org>
> ---
>  drivers/of/address.c    | 8 ++++----
>  drivers/of/device.c     | 2 +-
>  drivers/of/of_private.h | 8 +++-----
>  3 files changed, 8 insertions(+), 10 deletions(-)

Sorry, I didn't mean to send this patch a second time; it was still kicking
around in my tree from yesterday and I accidentally picked it up when
sending my TLBI series.

Please ignore.

Will

* Re: [PATCH 1/4] arm64: mm: Fix TLBI vs ASID rollover
  2021-08-06 11:31 ` [PATCH 1/4] arm64: mm: Fix TLBI vs ASID rollover Will Deacon
@ 2021-08-06 11:59   ` Catalin Marinas
  2021-08-06 12:42     ` Will Deacon
  0 siblings, 1 reply; 17+ messages in thread
From: Catalin Marinas @ 2021-08-06 11:59 UTC (permalink / raw)
  To: Will Deacon
  Cc: linux-arm-kernel, kernel-team, Marc Zyngier, Jade Alglave,
	Shameer Kolothum, kvmarm, linux-arch, stable

On Fri, Aug 06, 2021 at 12:31:04PM +0100, Will Deacon wrote:
> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
> index 75beffe2ee8a..e9c30859f80c 100644
> --- a/arch/arm64/include/asm/mmu.h
> +++ b/arch/arm64/include/asm/mmu.h
> @@ -27,11 +27,32 @@ typedef struct {
>  } mm_context_t;
>  
>  /*
> - * This macro is only used by the TLBI and low-level switch_mm() code,
> - * neither of which can race with an ASID change. We therefore don't
> - * need to reload the counter using atomic64_read().
> + * We use atomic64_read() here because the ASID for an 'mm_struct' can
> + * be reallocated when scheduling one of its threads following a
> + * rollover event (see new_context() and flush_context()). In this case,
> + * a concurrent TLBI (e.g. via try_to_unmap_one() and ptep_clear_flush())
> + * may use a stale ASID. This is fine in principle as the new ASID is
> + * guaranteed to be clean in the TLB, but the TLBI routines have to take
> + * care to handle the following race:
> + *
> + *    CPU 0                    CPU 1                          CPU 2
> + *
> + *    // ptep_clear_flush(mm)
> + *    xchg_relaxed(pte, 0)
> + *    DSB ISHST
> + *    old = ASID(mm)

We'd need specs clarified (ARM ARM, cat model) that the DSB ISHST is
sufficient to order the pte write with the subsequent ASID read.
Otherwise the patch looks fine to me:

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>

* Re: [PATCH 1/4] arm64: mm: Fix TLBI vs ASID rollover
  2021-08-06 11:59   ` Catalin Marinas
@ 2021-08-06 12:42     ` Will Deacon
  2021-08-06 12:49       ` Catalin Marinas
  0 siblings, 1 reply; 17+ messages in thread
From: Will Deacon @ 2021-08-06 12:42 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: linux-arm-kernel, kernel-team, Marc Zyngier, Jade Alglave,
	Shameer Kolothum, kvmarm, linux-arch, stable

On Fri, Aug 06, 2021 at 12:59:28PM +0100, Catalin Marinas wrote:
> On Fri, Aug 06, 2021 at 12:31:04PM +0100, Will Deacon wrote:
> > diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
> > index 75beffe2ee8a..e9c30859f80c 100644
> > --- a/arch/arm64/include/asm/mmu.h
> > +++ b/arch/arm64/include/asm/mmu.h
> > @@ -27,11 +27,32 @@ typedef struct {
> >  } mm_context_t;
> >  
> >  /*
> > - * This macro is only used by the TLBI and low-level switch_mm() code,
> > - * neither of which can race with an ASID change. We therefore don't
> > - * need to reload the counter using atomic64_read().
> > + * We use atomic64_read() here because the ASID for an 'mm_struct' can
> > + * be reallocated when scheduling one of its threads following a
> > + * rollover event (see new_context() and flush_context()). In this case,
> > + * a concurrent TLBI (e.g. via try_to_unmap_one() and ptep_clear_flush())
> > + * may use a stale ASID. This is fine in principle as the new ASID is
> > + * guaranteed to be clean in the TLB, but the TLBI routines have to take
> > + * care to handle the following race:
> > + *
> > + *    CPU 0                    CPU 1                          CPU 2
> > + *
> > + *    // ptep_clear_flush(mm)
> > + *    xchg_relaxed(pte, 0)
> > + *    DSB ISHST
> > + *    old = ASID(mm)
> 
> We'd need specs clarified (ARM ARM, cat model) that the DSB ISHST is
> sufficient to order the pte write with the subsequent ASID read.

Although I agree that the cat model needs updating and also that the Arm
ARM isn't helpful by trying to define DMB and DSB at the same time, it
does clearly state the following:

  // B2-149
  | A DSB instruction executed by a PE, PEe, completes when all of the
  | following apply:
  |
  | * All explicit memory accesses of the required access types appearing
  |   in program order before the DSB are complete for the set of observers
  |   in the required shareability domain.

  [...]

  // B2-150
  | In addition, no instruction that appears in program order after the
  | DSB instruction can alter any state of the system or perform any part
  | of its functionality until the DSB completes other than:
  |
  | * Being fetched from memory and decoded.
  | * Reading the general-purpose, SIMD and floating-point, Special-purpose,
  |   or System registers that are directly or indirectly read without
  |   causing side-effects.

Which means that the ASID read cannot return its data before the DSB ISHST
has completed and the DSB ISHST cannot complete until the PTE write has
completed.

> Otherwise the patch looks fine to me:
> 
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>

Thanks! Do you want to queue it for 5.15? I don't think there's a need to
rush it into 5.14 given that we don't have any evidence of it happening
in practice.

Will

* Re: [PATCH 1/4] arm64: mm: Fix TLBI vs ASID rollover
  2021-08-06 12:42     ` Will Deacon
@ 2021-08-06 12:49       ` Catalin Marinas
  0 siblings, 0 replies; 17+ messages in thread
From: Catalin Marinas @ 2021-08-06 12:49 UTC (permalink / raw)
  To: Will Deacon
  Cc: linux-arm-kernel, kernel-team, Marc Zyngier, Jade Alglave,
	Shameer Kolothum, kvmarm, linux-arch, stable

On Fri, Aug 06, 2021 at 01:42:42PM +0100, Will Deacon wrote:
> On Fri, Aug 06, 2021 at 12:59:28PM +0100, Catalin Marinas wrote:
> > On Fri, Aug 06, 2021 at 12:31:04PM +0100, Will Deacon wrote:
> > > diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
> > > index 75beffe2ee8a..e9c30859f80c 100644
> > > --- a/arch/arm64/include/asm/mmu.h
> > > +++ b/arch/arm64/include/asm/mmu.h
> > > @@ -27,11 +27,32 @@ typedef struct {
> > >  } mm_context_t;
> > >  
> > >  /*
> > > - * This macro is only used by the TLBI and low-level switch_mm() code,
> > > - * neither of which can race with an ASID change. We therefore don't
> > > - * need to reload the counter using atomic64_read().
> > > + * We use atomic64_read() here because the ASID for an 'mm_struct' can
> > > + * be reallocated when scheduling one of its threads following a
> > > + * rollover event (see new_context() and flush_context()). In this case,
> > > + * a concurrent TLBI (e.g. via try_to_unmap_one() and ptep_clear_flush())
> > > + * may use a stale ASID. This is fine in principle as the new ASID is
> > > + * guaranteed to be clean in the TLB, but the TLBI routines have to take
> > > + * care to handle the following race:
> > > + *
> > > + *    CPU 0                    CPU 1                          CPU 2
> > > + *
> > > + *    // ptep_clear_flush(mm)
> > > + *    xchg_relaxed(pte, 0)
> > > + *    DSB ISHST
> > > + *    old = ASID(mm)
> > 
> > We'd need specs clarified (ARM ARM, cat model) that the DSB ISHST is
> > sufficient to order the pte write with the subsequent ASID read.
> 
> Although I agree that the cat model needs updating and also that the Arm
> ARM isn't helpful by trying to define DMB and DSB at the same time, it
> does clearly state the following:
> 
>   // B2-149
>   | A DSB instruction executed by a PE, PEe, completes when all of the
>   | following apply:
>   |
>   | * All explicit memory accesses of the required access types appearing
>   |   in program order before the DSB are complete for the set of observers
>   |   in the required shareability domain.
> 
>   [...]
> 
>   // B2-150
>   | In addition, no instruction that appears in program order after the
>   | DSB instruction can alter any state of the system or perform any part
>   | of its functionality until the DSB completes other than:
>   |
>   | * Being fetched from memory and decoded.
>   | * Reading the general-purpose, SIMD and floating-point, Special-purpose,
>   |   or System registers that are directly or indirectly read without
>   |   causing side-effects.
> 
> Which means that the ASID read cannot return its data before the DSB ISHST
> has completed and the DSB ISHST cannot complete until the PTE write has
> completed.

Thanks for the explanation.

> > Otherwise the patch looks fine to me:
> > 
> > Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
> 
> Thanks! Do you want to queue it for 5.15? I don't think there's a need to
> rush it into 5.14 given that we don't have any evidence of it happening
> in practice.

Happy to queue it for 5.15.

-- 
Catalin

* Re: [PATCH 3/4] KVM: arm64: Convert the host S2 over to __load_guest_stage2()
  2021-08-06 11:31 ` [PATCH 3/4] KVM: arm64: Convert the host S2 over to __load_guest_stage2() Will Deacon
@ 2021-08-06 13:40   ` Quentin Perret
  2021-08-20  8:01     ` Marc Zyngier
  0 siblings, 1 reply; 17+ messages in thread
From: Quentin Perret @ 2021-08-06 13:40 UTC (permalink / raw)
  To: Will Deacon
  Cc: linux-arm-kernel, kernel-team, Catalin Marinas, Marc Zyngier,
	Jade Alglave, Shameer Kolothum, kvmarm, linux-arch

On Friday 06 Aug 2021 at 12:31:07 (+0100), Will Deacon wrote:
> From: Marc Zyngier <maz@kernel.org>
> 
> The protected mode relies on a separate helper to load the
> S2 context. Move over to the __load_guest_stage2() helper
> instead.
> 
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Jade Alglave <jade.alglave@arm.com>
> Cc: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> Signed-off-by: Will Deacon <will@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_mmu.h              | 11 +++--------
>  arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  2 +-
>  arch/arm64/kvm/hyp/nvhe/mem_protect.c         |  2 +-
>  3 files changed, 5 insertions(+), 10 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index 05e089653a1a..934ef0deff9f 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -267,9 +267,10 @@ static __always_inline u64 kvm_get_vttbr(struct kvm_s2_mmu *mmu)
>   * Must be called from hyp code running at EL2 with an updated VTTBR
>   * and interrupts disabled.
>   */
> -static __always_inline void __load_stage2(struct kvm_s2_mmu *mmu, unsigned long vtcr)
> +static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu,
> +						struct kvm_arch *arch)
>  {
> -	write_sysreg(vtcr, vtcr_el2);
> +	write_sysreg(arch->vtcr, vtcr_el2);
>  	write_sysreg(kvm_get_vttbr(mmu), vttbr_el2);
>  
>  	/*
> @@ -280,12 +281,6 @@ static __always_inline void __load_stage2(struct kvm_s2_mmu *mmu, unsigned long
>  	asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));
>  }
>  
> -static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu,
> -						struct kvm_arch *arch)
> -{
> -	__load_stage2(mmu, arch->vtcr);
> -}
> -
>  static inline struct kvm *kvm_s2_mmu_to_kvm(struct kvm_s2_mmu *mmu)
>  {
>  	return container_of(mmu->arch, struct kvm, arch);
> diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
> index 9c227d87c36d..a910648bc71b 100644
> --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
> +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
> @@ -29,7 +29,7 @@ void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt);
>  static __always_inline void __load_host_stage2(void)
>  {
>  	if (static_branch_likely(&kvm_protected_mode_initialized))
> -		__load_stage2(&host_kvm.arch.mmu, host_kvm.arch.vtcr);
> +		__load_guest_stage2(&host_kvm.arch.mmu, &host_kvm.arch);
>  	else
>  		write_sysreg(0, vttbr_el2);
>  }
> diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> index d938ce95d3bd..d4e74ca7f876 100644
> --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> @@ -126,7 +126,7 @@ int __pkvm_prot_finalize(void)
>  	kvm_flush_dcache_to_poc(params, sizeof(*params));
>  
>  	write_sysreg(params->hcr_el2, hcr_el2);
> -	__load_stage2(&host_kvm.arch.mmu, host_kvm.arch.vtcr);
> +	__load_guest_stage2(&host_kvm.arch.mmu, &host_kvm.arch);

Nit: clearly we're not loading a guest stage-2 here, so maybe the
function should take a more generic name?

Thanks,
Quentin

* Re: [PATCH 4/4] KVM: arm64: Upgrade VMID accesses to {READ,WRITE}_ONCE
  2021-08-06 11:31 ` [PATCH 4/4] KVM: arm64: Upgrade VMID accesses to {READ,WRITE}_ONCE Will Deacon
@ 2021-08-06 14:24   ` Quentin Perret
  0 siblings, 0 replies; 17+ messages in thread
From: Quentin Perret @ 2021-08-06 14:24 UTC (permalink / raw)
  To: Will Deacon
  Cc: linux-arm-kernel, kernel-team, Catalin Marinas, Marc Zyngier,
	Jade Alglave, Shameer Kolothum, kvmarm, linux-arch

On Friday 06 Aug 2021 at 12:31:08 (+0100), Will Deacon wrote:
> From: Marc Zyngier <maz@kernel.org>
> 
> Since TLB invalidation can run in parallel with VMID allocation,
> we need to be careful and avoid any sort of load/store tearing.
> Use {READ,WRITE}_ONCE consistently to avoid any surprise.
> 
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Jade Alglave <jade.alglave@arm.com>
> Cc: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> Signed-off-by: Will Deacon <will@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_mmu.h      | 7 ++++++-
>  arch/arm64/kvm/arm.c                  | 2 +-
>  arch/arm64/kvm/hyp/nvhe/mem_protect.c | 4 ++--
>  arch/arm64/kvm/mmu.c                  | 2 +-
>  4 files changed, 10 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index 934ef0deff9f..5828dd8fa738 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -252,6 +252,11 @@ static inline int kvm_write_guest_lock(struct kvm *kvm, gpa_t gpa,
>  
>  #define kvm_phys_to_vttbr(addr)		phys_to_ttbr(addr)
>  
> +/*
> + * When this is (directly or indirectly) used on the TLB invalidation
> + * path, we rely on a previously issued DSB so that page table updates
> + * and VMID reads are correctly ordered.
> + */
>  static __always_inline u64 kvm_get_vttbr(struct kvm_s2_mmu *mmu)
>  {
>  	struct kvm_vmid *vmid = &mmu->vmid;
> @@ -259,7 +264,7 @@ static __always_inline u64 kvm_get_vttbr(struct kvm_s2_mmu *mmu)
>  	u64 cnp = system_supports_cnp() ? VTTBR_CNP_BIT : 0;
>  
>  	baddr = mmu->pgd_phys;
> -	vmid_field = (u64)vmid->vmid << VTTBR_VMID_SHIFT;
> +	vmid_field = (u64)READ_ONCE(vmid->vmid) << VTTBR_VMID_SHIFT;
>  	return kvm_phys_to_vttbr(baddr) | vmid_field | cnp;
>  }
>  
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index e9a2b8f27792..658f76067f46 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -571,7 +571,7 @@ static void update_vmid(struct kvm_vmid *vmid)
>  		kvm_call_hyp(__kvm_flush_vm_context);
>  	}
>  
> -	vmid->vmid = kvm_next_vmid;
> +	WRITE_ONCE(vmid->vmid, kvm_next_vmid);
>  	kvm_next_vmid++;
>  	kvm_next_vmid &= (1 << kvm_get_vmid_bits()) - 1;
>  
> diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> index d4e74ca7f876..55ae97a144b8 100644
> --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> @@ -109,8 +109,8 @@ int kvm_host_prepare_stage2(void *pgt_pool_base)
>  	mmu->pgd_phys = __hyp_pa(host_kvm.pgt.pgd);
>  	mmu->arch = &host_kvm.arch;
>  	mmu->pgt = &host_kvm.pgt;
> -	mmu->vmid.vmid_gen = 0;
> -	mmu->vmid.vmid = 0;
> +	WRITE_ONCE(mmu->vmid.vmid_gen, 0);
> +	WRITE_ONCE(mmu->vmid.vmid, 0);

I'm guessing it should be safe to omit those? But they certainly don't
hurt and can serve as documentation anyway, so:

Reviewed-by: Quentin Perret <qperret@google.com>

Thanks,
Quentin

>  
>  	return 0;
>  }
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 3155c9e778f0..b1a6eaec28ff 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -485,7 +485,7 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu)
>  	mmu->arch = &kvm->arch;
>  	mmu->pgt = pgt;
>  	mmu->pgd_phys = __pa(pgt->pgd);
> -	mmu->vmid.vmid_gen = 0;
> +	WRITE_ONCE(mmu->vmid.vmid_gen, 0);
>  	return 0;
>  
>  out_destroy_pgtable:
> -- 
> 2.32.0.605.g8dce9f2422-goog
> 

* Re: (subset) [PATCH 0/4] Fix racing TLBI with ASID/VMID reallocation
  2021-08-06 11:31 [PATCH 0/4] Fix racing TLBI with ASID/VMID reallocation Will Deacon
                   ` (4 preceding siblings ...)
  2021-08-06 11:31 ` [PATCH 4/4] KVM: arm64: Upgrade VMID accesses to {READ,WRITE}_ONCE Will Deacon
@ 2021-08-06 16:04 ` Catalin Marinas
  2021-09-10  9:06 ` Shameerali Kolothum Thodi
  6 siblings, 0 replies; 17+ messages in thread
From: Catalin Marinas @ 2021-08-06 16:04 UTC (permalink / raw)
  To: Will Deacon, linux-arm-kernel
  Cc: Marc Zyngier, linux-arch, kernel-team, Jade Alglave, kvmarm,
	Shameer Kolothum

On Fri, 6 Aug 2021 12:31:03 +0100, Will Deacon wrote:
> While reviewing Shameer's reworked VMID allocator [1] and discussing
> with Marc, we spotted a race between TLB invalidation (which typically
> takes an ASID or VMID argument) and reallocation of ASID/VMID for the
> context being targetted.
> 
> The first patch spells out an example with try_to_unmap_one() in a
> comment, which Catalin has kindly modelled in TLA+ at [2].
> 
> [...]

Applied to arm64 (for-next/misc), thanks!

[1/4] arm64: mm: Fix TLBI vs ASID rollover
      https://git.kernel.org/arm64/c/5e10f9887ed8

-- 
Catalin


* Re: [PATCH 3/4] KVM: arm64: Convert the host S2 over to __load_guest_stage2()
  2021-08-06 13:40   ` Quentin Perret
@ 2021-08-20  8:01     ` Marc Zyngier
  2021-08-20  9:08       ` Quentin Perret
  0 siblings, 1 reply; 17+ messages in thread
From: Marc Zyngier @ 2021-08-20  8:01 UTC (permalink / raw)
  To: Quentin Perret
  Cc: Will Deacon, linux-arm-kernel, kernel-team, Catalin Marinas,
	Jade Alglave, Shameer Kolothum, kvmarm, linux-arch

On Fri, 06 Aug 2021 14:40:00 +0100,
Quentin Perret <qperret@google.com> wrote:
> 
> On Friday 06 Aug 2021 at 12:31:07 (+0100), Will Deacon wrote:
> > From: Marc Zyngier <maz@kernel.org>
> > 
> > The protected mode relies on a separate helper to load the
> > S2 context. Move over to the __load_guest_stage2() helper
> > instead.
> > 
> > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > Cc: Jade Alglave <jade.alglave@arm.com>
> > Cc: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > Signed-off-by: Will Deacon <will@kernel.org>
> > ---
> >  arch/arm64/include/asm/kvm_mmu.h              | 11 +++--------
> >  arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  2 +-
> >  arch/arm64/kvm/hyp/nvhe/mem_protect.c         |  2 +-
> >  3 files changed, 5 insertions(+), 10 deletions(-)
> > 
> > diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> > index 05e089653a1a..934ef0deff9f 100644
> > --- a/arch/arm64/include/asm/kvm_mmu.h
> > +++ b/arch/arm64/include/asm/kvm_mmu.h
> > @@ -267,9 +267,10 @@ static __always_inline u64 kvm_get_vttbr(struct kvm_s2_mmu *mmu)
> >   * Must be called from hyp code running at EL2 with an updated VTTBR
> >   * and interrupts disabled.
> >   */
> > -static __always_inline void __load_stage2(struct kvm_s2_mmu *mmu, unsigned long vtcr)
> > +static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu,
> > +						struct kvm_arch *arch)
> >  {
> > -	write_sysreg(vtcr, vtcr_el2);
> > +	write_sysreg(arch->vtcr, vtcr_el2);
> >  	write_sysreg(kvm_get_vttbr(mmu), vttbr_el2);
> >  
> >  	/*
> > @@ -280,12 +281,6 @@ static __always_inline void __load_stage2(struct kvm_s2_mmu *mmu, unsigned long
> >  	asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));
> >  }
> >  
> > -static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu,
> > -						struct kvm_arch *arch)
> > -{
> > -	__load_stage2(mmu, arch->vtcr);
> > -}
> > -
> >  static inline struct kvm *kvm_s2_mmu_to_kvm(struct kvm_s2_mmu *mmu)
> >  {
> >  	return container_of(mmu->arch, struct kvm, arch);
> > diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
> > index 9c227d87c36d..a910648bc71b 100644
> > --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
> > +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
> > @@ -29,7 +29,7 @@ void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt);
> >  static __always_inline void __load_host_stage2(void)
> >  {
> >  	if (static_branch_likely(&kvm_protected_mode_initialized))
> > -		__load_stage2(&host_kvm.arch.mmu, host_kvm.arch.vtcr);
> > +		__load_guest_stage2(&host_kvm.arch.mmu, &host_kvm.arch);
> >  	else
> >  		write_sysreg(0, vttbr_el2);
> >  }
> > diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> > index d938ce95d3bd..d4e74ca7f876 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> > +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> > @@ -126,7 +126,7 @@ int __pkvm_prot_finalize(void)
> >  	kvm_flush_dcache_to_poc(params, sizeof(*params));
> >  
> >  	write_sysreg(params->hcr_el2, hcr_el2);
> > -	__load_stage2(&host_kvm.arch.mmu, host_kvm.arch.vtcr);
> > +	__load_guest_stage2(&host_kvm.arch.mmu, &host_kvm.arch);
> 
> Nit: clearly we're not loading a guest stage-2 here, so maybe the
> function should take a more generic name?

How about we rename __load_guest_stage2() to __load_stage2() instead,
with the same parameters?
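
i.e. something like this (sketch only; same parameters and body as the
__load_guest_stage2() quoted in the hunk above, just renamed):

	static __always_inline void __load_stage2(struct kvm_s2_mmu *mmu,
						  struct kvm_arch *arch)
	{
		write_sysreg(arch->vtcr, vtcr_el2);
		write_sysreg(kvm_get_vttbr(mmu), vttbr_el2);

		/* ISB handling as per the workaround comment elided above */
		asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));
	}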

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

* Re: [PATCH 3/4] KVM: arm64: Convert the host S2 over to __load_guest_stage2()
  2021-08-20  8:01     ` Marc Zyngier
@ 2021-08-20  9:08       ` Quentin Perret
  0 siblings, 0 replies; 17+ messages in thread
From: Quentin Perret @ 2021-08-20  9:08 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Will Deacon, linux-arm-kernel, kernel-team, Catalin Marinas,
	Jade Alglave, Shameer Kolothum, kvmarm, linux-arch

On Friday 20 Aug 2021 at 09:01:41 (+0100), Marc Zyngier wrote:
> On Fri, 06 Aug 2021 14:40:00 +0100,
> Quentin Perret <qperret@google.com> wrote:
> > 
> > On Friday 06 Aug 2021 at 12:31:07 (+0100), Will Deacon wrote:
> > > From: Marc Zyngier <maz@kernel.org>
> > > 
> > > The protected mode relies on a separate helper to load the
> > > S2 context. Move over to the __load_guest_stage2() helper
> > > instead.
> > > 
> > > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > > Cc: Jade Alglave <jade.alglave@arm.com>
> > > Cc: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
> > > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > > Signed-off-by: Will Deacon <will@kernel.org>
> > > ---
> > >  arch/arm64/include/asm/kvm_mmu.h              | 11 +++--------
> > >  arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  2 +-
> > >  arch/arm64/kvm/hyp/nvhe/mem_protect.c         |  2 +-
> > >  3 files changed, 5 insertions(+), 10 deletions(-)
> > > 
> > > diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> > > index 05e089653a1a..934ef0deff9f 100644
> > > --- a/arch/arm64/include/asm/kvm_mmu.h
> > > +++ b/arch/arm64/include/asm/kvm_mmu.h
> > > @@ -267,9 +267,10 @@ static __always_inline u64 kvm_get_vttbr(struct kvm_s2_mmu *mmu)
> > >   * Must be called from hyp code running at EL2 with an updated VTTBR
> > >   * and interrupts disabled.
> > >   */
> > > -static __always_inline void __load_stage2(struct kvm_s2_mmu *mmu, unsigned long vtcr)
> > > +static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu,
> > > +						struct kvm_arch *arch)
> > >  {
> > > -	write_sysreg(vtcr, vtcr_el2);
> > > +	write_sysreg(arch->vtcr, vtcr_el2);
> > >  	write_sysreg(kvm_get_vttbr(mmu), vttbr_el2);
> > >  
> > >  	/*
> > > @@ -280,12 +281,6 @@ static __always_inline void __load_stage2(struct kvm_s2_mmu *mmu, unsigned long
> > >  	asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));
> > >  }
> > >  
> > > -static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu,
> > > -						struct kvm_arch *arch)
> > > -{
> > > -	__load_stage2(mmu, arch->vtcr);
> > > -}
> > > -
> > >  static inline struct kvm *kvm_s2_mmu_to_kvm(struct kvm_s2_mmu *mmu)
> > >  {
> > >  	return container_of(mmu->arch, struct kvm, arch);
> > > diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
> > > index 9c227d87c36d..a910648bc71b 100644
> > > --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
> > > +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
> > > @@ -29,7 +29,7 @@ void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt);
> > >  static __always_inline void __load_host_stage2(void)
> > >  {
> > >  	if (static_branch_likely(&kvm_protected_mode_initialized))
> > > -		__load_stage2(&host_kvm.arch.mmu, host_kvm.arch.vtcr);
> > > +		__load_guest_stage2(&host_kvm.arch.mmu, &host_kvm.arch);
> > >  	else
> > >  		write_sysreg(0, vttbr_el2);
> > >  }
> > > diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> > > index d938ce95d3bd..d4e74ca7f876 100644
> > > --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> > > +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> > > @@ -126,7 +126,7 @@ int __pkvm_prot_finalize(void)
> > >  	kvm_flush_dcache_to_poc(params, sizeof(*params));
> > >  
> > >  	write_sysreg(params->hcr_el2, hcr_el2);
> > > -	__load_stage2(&host_kvm.arch.mmu, host_kvm.arch.vtcr);
> > > +	__load_guest_stage2(&host_kvm.arch.mmu, &host_kvm.arch);
> > 
> > Nit: clearly we're not loading a guest stage-2 here, so maybe the
> > function should take a more generic name?
> 
> How about we rename __load_guest_stage2() to __load_stage2() instead,
> with the same parameters?

Yep, that'd work for me.

Thanks,
Quentin

* RE: [PATCH 0/4] Fix racing TLBI with ASID/VMID reallocation
  2021-08-06 11:31 [PATCH 0/4] Fix racing TLBI with ASID/VMID reallocation Will Deacon
                   ` (5 preceding siblings ...)
  2021-08-06 16:04 ` (subset) [PATCH 0/4] Fix racing TLBI with ASID/VMID reallocation Catalin Marinas
@ 2021-09-10  9:06 ` Shameerali Kolothum Thodi
  2021-09-10  9:45   ` Catalin Marinas
  6 siblings, 1 reply; 17+ messages in thread
From: Shameerali Kolothum Thodi @ 2021-09-10  9:06 UTC (permalink / raw)
  To: Will Deacon, linux-arm-kernel, Catalin Marinas
  Cc: kernel-team, Marc Zyngier, Jade Alglave, kvmarm, linux-arch,
	Jonathan Cameron



> -----Original Message-----
> From: Will Deacon [mailto:will@kernel.org]
> Sent: 06 August 2021 12:31
> To: linux-arm-kernel@lists.infradead.org
> Cc: kernel-team@android.com; Will Deacon <will@kernel.org>; Catalin
> Marinas <catalin.marinas@arm.com>; Marc Zyngier <maz@kernel.org>; Jade
> Alglave <jade.alglave@arm.com>; Shameerali Kolothum Thodi
> <shameerali.kolothum.thodi@huawei.com>; kvmarm@lists.cs.columbia.edu;
> linux-arch@vger.kernel.org
> Subject: [PATCH 0/4] Fix racing TLBI with ASID/VMID reallocation
> 
> Hi all,
> 
> While reviewing Shameer's reworked VMID allocator [1] and discussing
> with Marc, we spotted a race between TLB invalidation (which typically
> takes an ASID or VMID argument) and reallocation of ASID/VMID for the
> context being targetted.
> 
> The first patch spells out an example with try_to_unmap_one() in a
> comment, which Catalin has kindly modelled in TLA+ at [2].
> 
> Although I'm posting all this together for ease of review, the intention
> is that the first patch will go via arm64 with the latter going via kvm.
> 
> Cheers,
> 
> Will
> 
> [1] https://lore.kernel.org/r/20210729104009.382-1-shameerali.kolothum.thodi@huawei.com
> [2] https://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/kernel-tla.git/commit/

Hi Catalin,

I am going through the ASID TLA+ model and, in the above commit, it appears
that the different-ASID check (=> ActiveAsid(c1) # ActiveAsid(c2)) for the
invariant UniqueASIDActiveTask has now been removed.

Just wondering why that is not relevant anymore?

Thanks,
Shameer

> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Marc Zyngier <maz@kernel.org>
> Cc: Jade Alglave <jade.alglave@arm.com>
> Cc: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
> Cc: <kvmarm@lists.cs.columbia.edu>
> Cc: <linux-arch@vger.kernel.org>
> 
> --->8
> 
> Marc Zyngier (3):
>   KVM: arm64: Move kern_hyp_va() usage in __load_guest_stage2() into the
>     callers
>   KVM: arm64: Convert the host S2 over to __load_guest_stage2()
>   KVM: arm64: Upgrade VMID accesses to {READ,WRITE}_ONCE
> 
> Will Deacon (1):
>   arm64: mm: Fix TLBI vs ASID rollover
> 
>  arch/arm64/include/asm/kvm_mmu.h              | 17 ++++++-----
>  arch/arm64/include/asm/mmu.h                  | 29
> ++++++++++++++++---
>  arch/arm64/include/asm/tlbflush.h             | 11 +++----
>  arch/arm64/kvm/arm.c                          |  2 +-
>  arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  2 +-
>  arch/arm64/kvm/hyp/nvhe/mem_protect.c         |  6 ++--
>  arch/arm64/kvm/hyp/nvhe/switch.c              |  4 ++-
>  arch/arm64/kvm/hyp/nvhe/tlb.c                 |  2 +-
>  arch/arm64/kvm/hyp/vhe/switch.c               |  2 +-
>  arch/arm64/kvm/hyp/vhe/tlb.c                  |  2 +-
>  arch/arm64/kvm/mmu.c                          |  2 +-
>  11 files changed, 52 insertions(+), 27 deletions(-)
> 
> --
> 2.32.0.605.g8dce9f2422-goog


* Re: [PATCH 0/4] Fix racing TLBI with ASID/VMID reallocation
  2021-09-10  9:06 ` Shameerali Kolothum Thodi
@ 2021-09-10  9:45   ` Catalin Marinas
  0 siblings, 0 replies; 17+ messages in thread
From: Catalin Marinas @ 2021-09-10  9:45 UTC (permalink / raw)
  To: Shameerali Kolothum Thodi
  Cc: Will Deacon, linux-arm-kernel, kernel-team, Marc Zyngier,
	Jade Alglave, kvmarm, linux-arch, Jonathan Cameron

On Fri, Sep 10, 2021 at 09:06:31AM +0000, Shameerali Kolothum Thodi wrote:
> > [2] https://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/kernel-tla.git/commit/
> 
> I am going through the ASID TLA+ model and, in the above commit, it appears
> that the different-ASID check (=> ActiveAsid(c1) # ActiveAsid(c2)) for the
> invariant UniqueASIDActiveTask has now been removed.
> 
> Just wondering why that is not relevant anymore?

It's still relevant. I probably deleted it by mistake; I'll add it back
now. Thanks for carefully looking at this commit.

-- 
Catalin

Thread overview: 17+ messages
2021-08-06 11:31 [PATCH 0/4] Fix racing TLBI with ASID/VMID reallocation Will Deacon
2021-08-06 11:31 ` [PATCH 1/4] arm64: mm: Fix TLBI vs ASID rollover Will Deacon
2021-08-06 11:59   ` Catalin Marinas
2021-08-06 12:42     ` Will Deacon
2021-08-06 12:49       ` Catalin Marinas
2021-08-06 11:31 ` [PATCH] of: restricted dma: Don't fail device probe on rmem init failure Will Deacon
2021-08-06 11:34   ` Will Deacon
2021-08-06 11:31 ` [PATCH 2/4] KVM: arm64: Move kern_hyp_va() usage in __load_guest_stage2() into the callers Will Deacon
2021-08-06 11:31 ` [PATCH 3/4] KVM: arm64: Convert the host S2 over to __load_guest_stage2() Will Deacon
2021-08-06 13:40   ` Quentin Perret
2021-08-20  8:01     ` Marc Zyngier
2021-08-20  9:08       ` Quentin Perret
2021-08-06 11:31 ` [PATCH 4/4] KVM: arm64: Upgrade VMID accesses to {READ,WRITE}_ONCE Will Deacon
2021-08-06 14:24   ` Quentin Perret
2021-08-06 16:04 ` (subset) [PATCH 0/4] Fix racing TLBI with ASID/VMID reallocation Catalin Marinas
2021-09-10  9:06 ` Shameerali Kolothum Thodi
2021-09-10  9:45   ` Catalin Marinas
