* [PATCH v3 0/7] KVM: arm64: permit MAP_SHARED mappings with MTE enabled
@ 2022-08-10 19:30 Peter Collingbourne
  2022-08-10 19:30 ` [PATCH v3 1/7] arm64: mte: Fix/clarify the PG_mte_tagged semantics Peter Collingbourne
                   ` (7 more replies)
  0 siblings, 8 replies; 32+ messages in thread
From: Peter Collingbourne @ 2022-08-10 19:30 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm
  Cc: Peter Collingbourne, Cornelia Huck, Catalin Marinas, Will Deacon,
	Marc Zyngier, Evgenii Stepanov, kvm, Steven Price,
	Vincenzo Frascino

Hi,

This patch series allows VMMs to use shared mappings in MTE-enabled
guests. The first four patches are based on the series that Catalin sent
out, whose cover letter [1] I quote from below:

> This series aims to fix the races between initialising the tags on a
> page and setting the PG_mte_tagged flag. Currently the flag is set
> either before or after that tag initialisation and this can lead to CoW
> copying stale tags. The first patch moves the flag setting after the
> tags have been initialised, solving the CoW issue. However, concurrent
> mprotect() on a shared mapping may (very rarely) lead to valid tags
> being zeroed.
>
> The second skips the sanitise_mte_tags() call in kvm_set_spte_gfn(),
> deferring it to user_mem_abort(). The outcome is that
> sanitise_mte_tags() can be simplified to skip the pfn_to_online_page()
> check and rely only on the VM_MTE_ALLOWED vma flag that can be checked
> in user_mem_abort().
>
> The third and fourth patches use PG_arch_3 as a lock for page tagging,
> based on Peter Collingbourne's idea of a two-bit lock.
>
> I think the first patch can be queued but the rest needs some in-depth
> review and testing. With this series (if correct) we could allow
> MAP_SHARED on KVM guest memory, but this is to be discussed separately
> as there are some KVM ABI implications.
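
The two-bit lock combines a lock bit that is only ever set (never
cleared) with the tagged bit acting as the unlock/publish step. In
essence it is what patch 4 below implements (sketch only):

	bool try_page_mte_tagging(struct page *page)
	{
		if (!test_and_set_bit(PG_mte_lock, &page->flags))
			return true;	/* won: init the tags, then set
					 * PG_mte_tagged to publish them */
		/* lost: wait for the winner to publish the tags */
		smp_cond_load_acquire(&page->flags,
				      VAL & (1UL << PG_mte_tagged));
		return false;
	}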

I rebased Catalin's series onto -next, addressed the issues that I
identified in the review and added the proposed userspace enablement
patches after the series.

[1] https://lore.kernel.org/all/20220705142619.4135905-1-catalin.marinas@arm.com/

Catalin Marinas (3):
  arm64: mte: Fix/clarify the PG_mte_tagged semantics
  KVM: arm64: Simplify the sanitise_mte_tags() logic
  arm64: mte: Lock a page for MTE tag initialisation

Peter Collingbourne (4):
  mm: Add PG_arch_3 page flag
  KVM: arm64: unify the tests for VMAs in memslots when MTE is enabled
  KVM: arm64: permit all VM_MTE_ALLOWED mappings with MTE enabled
  Documentation: document the ABI changes for KVM_CAP_ARM_MTE

 Documentation/virt/kvm/api.rst    |  5 ++-
 arch/arm64/include/asm/mte.h      | 62 +++++++++++++++++++++++++++++++
 arch/arm64/include/asm/pgtable.h  |  3 +-
 arch/arm64/kernel/cpufeature.c    |  4 +-
 arch/arm64/kernel/elfcore.c       |  2 +-
 arch/arm64/kernel/hibernate.c     |  2 +-
 arch/arm64/kernel/mte.c           | 17 ++++++---
 arch/arm64/kvm/guest.c            | 18 +++++----
 arch/arm64/kvm/mmu.c              | 55 +++++++++++----------------
 arch/arm64/mm/copypage.c          |  6 ++-
 arch/arm64/mm/fault.c             |  4 +-
 arch/arm64/mm/mteswap.c           |  5 ++-
 fs/proc/page.c                    |  1 +
 include/linux/kernel-page-flags.h |  1 +
 include/linux/page-flags.h        |  1 +
 include/trace/events/mmflags.h    |  7 ++--
 mm/huge_memory.c                  |  1 +
 tools/vm/page-types.c             |  2 +
 18 files changed, 137 insertions(+), 59 deletions(-)

-- 
2.37.1.559.g78731f0fdb-goog



* [PATCH v3 1/7] arm64: mte: Fix/clarify the PG_mte_tagged semantics
  2022-08-10 19:30 [PATCH v3 0/7] KVM: arm64: permit MAP_SHARED mappings with MTE enabled Peter Collingbourne
@ 2022-08-10 19:30 ` Peter Collingbourne
  2022-09-01 15:49   ` Catalin Marinas
                     ` (2 more replies)
  2022-08-10 19:30 ` [PATCH v3 2/7] KVM: arm64: Simplify the sanitise_mte_tags() logic Peter Collingbourne
                   ` (6 subsequent siblings)
  7 siblings, 3 replies; 32+ messages in thread
From: Peter Collingbourne @ 2022-08-10 19:30 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm
  Cc: Catalin Marinas, Cornelia Huck, Will Deacon, Marc Zyngier,
	Evgenii Stepanov, kvm, Steven Price, Vincenzo Frascino,
	Peter Collingbourne

From: Catalin Marinas <catalin.marinas@arm.com>

Currently the PG_mte_tagged page flag mostly means the page contains
valid tags and it should be set after the tags have been cleared or
restored. However, in mte_sync_tags() it is set before setting the tags
to avoid, in theory, a race with concurrent mprotect(PROT_MTE) for
shared pages. However, a concurrent mprotect(PROT_MTE) with a copy on
write in another thread can cause the new page to have stale tags.
Similarly, tag reading via ptrace() can read stale tags if the
PG_mte_tagged flag is set before actually clearing/restoring the tags.

Fix the PG_mte_tagged semantics so that it is only set after the tags
have been cleared or restored. This is safe for swap restoring into a
MAP_SHARED or CoW page since the core code takes the page lock. Add two
functions to test and set the PG_mte_tagged flag with acquire and
release semantics. The downside is that concurrent mprotect(PROT_MTE) on
a MAP_SHARED page may cause tag loss. This is already the case for KVM
guests if a VMM changes the page protection while the guest triggers a
user_mem_abort().
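
To illustrate the ordering (a sketch, not literal code from this patch),
the old scheme allowed:

	set_bit(PG_mte_tagged, &page->flags);	 /* flag visible first... */
	mte_clear_page_tags(page_address(page)); /* ...tags still stale */

so a concurrent reader testing the flag could see it set while the tags
are stale. The new helpers enforce the opposite order:

	mte_clear_page_tags(page_address(page));
	set_page_mte_tagged(page);	/* smp_wmb() + set_bit(): release */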

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Steven Price <steven.price@arm.com>
Cc: Peter Collingbourne <pcc@google.com>
---
v3:
- fix build with CONFIG_ARM64_MTE disabled

 arch/arm64/include/asm/mte.h     | 30 ++++++++++++++++++++++++++++++
 arch/arm64/include/asm/pgtable.h |  2 +-
 arch/arm64/kernel/cpufeature.c   |  4 +++-
 arch/arm64/kernel/elfcore.c      |  2 +-
 arch/arm64/kernel/hibernate.c    |  2 +-
 arch/arm64/kernel/mte.c          | 12 +++++++-----
 arch/arm64/kvm/guest.c           |  4 ++--
 arch/arm64/kvm/mmu.c             |  4 ++--
 arch/arm64/mm/copypage.c         |  4 ++--
 arch/arm64/mm/fault.c            |  2 +-
 arch/arm64/mm/mteswap.c          |  2 +-
 11 files changed, 51 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
index aa523591a44e..46618c575eac 100644
--- a/arch/arm64/include/asm/mte.h
+++ b/arch/arm64/include/asm/mte.h
@@ -37,6 +37,29 @@ void mte_free_tag_storage(char *storage);
 /* track which pages have valid allocation tags */
 #define PG_mte_tagged	PG_arch_2
 
+static inline void set_page_mte_tagged(struct page *page)
+{
+	/*
+	 * Ensure that the tags written prior to this function are visible
+	 * before the page flags update.
+	 */
+	smp_wmb();
+	set_bit(PG_mte_tagged, &page->flags);
+}
+
+static inline bool page_mte_tagged(struct page *page)
+{
+	bool ret = test_bit(PG_mte_tagged, &page->flags);
+
+	/*
+	 * If the page is tagged, ensure ordering with a likely subsequent
+	 * read of the tags.
+	 */
+	if (ret)
+		smp_rmb();
+	return ret;
+}
+
 void mte_zero_clear_page_tags(void *addr);
 void mte_sync_tags(pte_t old_pte, pte_t pte);
 void mte_copy_page_tags(void *kto, const void *kfrom);
@@ -54,6 +77,13 @@ size_t mte_probe_user_range(const char __user *uaddr, size_t size);
 /* unused if !CONFIG_ARM64_MTE, silence the compiler */
 #define PG_mte_tagged	0
 
+static inline void set_page_mte_tagged(struct page *page)
+{
+}
+static inline bool page_mte_tagged(struct page *page)
+{
+	return false;
+}
 static inline void mte_zero_clear_page_tags(void *addr)
 {
 }
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index b5df82aa99e6..82719fa42c0e 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -1050,7 +1050,7 @@ static inline void arch_swap_invalidate_area(int type)
 static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio)
 {
 	if (system_supports_mte() && mte_restore_tags(entry, &folio->page))
-		set_bit(PG_mte_tagged, &folio->flags);
+		set_page_mte_tagged(&folio->page);
 }
 
 #endif /* CONFIG_ARM64_MTE */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 907401e4fffb..562c301bbf15 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2037,8 +2037,10 @@ static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
 	 * Clear the tags in the zero page. This needs to be done via the
 	 * linear map which has the Tagged attribute.
 	 */
-	if (!test_and_set_bit(PG_mte_tagged, &ZERO_PAGE(0)->flags))
+	if (!page_mte_tagged(ZERO_PAGE(0))) {
 		mte_clear_page_tags(lm_alias(empty_zero_page));
+		set_page_mte_tagged(ZERO_PAGE(0));
+	}
 
 	kasan_init_hw_tags_cpu();
 }
diff --git a/arch/arm64/kernel/elfcore.c b/arch/arm64/kernel/elfcore.c
index 98d67444a5b6..f91bb1572d22 100644
--- a/arch/arm64/kernel/elfcore.c
+++ b/arch/arm64/kernel/elfcore.c
@@ -47,7 +47,7 @@ static int mte_dump_tag_range(struct coredump_params *cprm,
 		 * Pages mapped in user space as !pte_access_permitted() (e.g.
 		 * PROT_EXEC only) may not have the PG_mte_tagged flag set.
 		 */
-		if (!test_bit(PG_mte_tagged, &page->flags)) {
+		if (!page_mte_tagged(page)) {
 			put_page(page);
 			dump_skip(cprm, MTE_PAGE_TAG_STORAGE);
 			continue;
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index af5df48ba915..788597a6b6a2 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -271,7 +271,7 @@ static int swsusp_mte_save_tags(void)
 			if (!page)
 				continue;
 
-			if (!test_bit(PG_mte_tagged, &page->flags))
+			if (!page_mte_tagged(page))
 				continue;
 
 			ret = save_tags(page, pfn);
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index b2b730233274..2287316639f3 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -41,14 +41,17 @@ static void mte_sync_page_tags(struct page *page, pte_t old_pte,
 	if (check_swap && is_swap_pte(old_pte)) {
 		swp_entry_t entry = pte_to_swp_entry(old_pte);
 
-		if (!non_swap_entry(entry) && mte_restore_tags(entry, page))
+		if (!non_swap_entry(entry) && mte_restore_tags(entry, page)) {
+			set_page_mte_tagged(page);
 			return;
+		}
 	}
 
 	if (!pte_is_tagged)
 		return;
 
 	mte_clear_page_tags(page_address(page));
+	set_page_mte_tagged(page);
 }
 
 void mte_sync_tags(pte_t old_pte, pte_t pte)
@@ -64,7 +67,7 @@ void mte_sync_tags(pte_t old_pte, pte_t pte)
 
 	/* if PG_mte_tagged is set, tags have already been initialised */
 	for (i = 0; i < nr_pages; i++, page++) {
-		if (!test_and_set_bit(PG_mte_tagged, &page->flags))
+		if (!page_mte_tagged(page))
 			mte_sync_page_tags(page, old_pte, check_swap,
 					   pte_is_tagged);
 	}
@@ -91,8 +94,7 @@ int memcmp_pages(struct page *page1, struct page *page2)
 	 * pages is tagged, set_pte_at() may zero or change the tags of the
 	 * other page via mte_sync_tags().
 	 */
-	if (test_bit(PG_mte_tagged, &page1->flags) ||
-	    test_bit(PG_mte_tagged, &page2->flags))
+	if (page_mte_tagged(page1) || page_mte_tagged(page2))
 		return addr1 != addr2;
 
 	return ret;
@@ -398,7 +400,7 @@ static int __access_remote_tags(struct mm_struct *mm, unsigned long addr,
 			put_page(page);
 			break;
 		}
-		WARN_ON_ONCE(!test_bit(PG_mte_tagged, &page->flags));
+		WARN_ON_ONCE(!page_mte_tagged(page));
 
 		/* limit access to the end of the page */
 		offset = offset_in_page(addr);
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 8c607199cad1..3b04e69006b4 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -1058,7 +1058,7 @@ long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
 		maddr = page_address(page);
 
 		if (!write) {
-			if (test_bit(PG_mte_tagged, &page->flags))
+			if (page_mte_tagged(page))
 				num_tags = mte_copy_tags_to_user(tags, maddr,
 							MTE_GRANULES_PER_PAGE);
 			else
@@ -1075,7 +1075,7 @@ long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
 			 * completed fully
 			 */
 			if (num_tags == MTE_GRANULES_PER_PAGE)
-				set_bit(PG_mte_tagged, &page->flags);
+				set_page_mte_tagged(page);
 
 			kvm_release_pfn_dirty(pfn);
 		}
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 87f1cd0df36e..c9012707f69c 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1075,9 +1075,9 @@ static int sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
 		return -EFAULT;
 
 	for (i = 0; i < nr_pages; i++, page++) {
-		if (!test_bit(PG_mte_tagged, &page->flags)) {
+		if (!page_mte_tagged(page)) {
 			mte_clear_page_tags(page_address(page));
-			set_bit(PG_mte_tagged, &page->flags);
+			set_page_mte_tagged(page);
 		}
 	}
 
diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index 24913271e898..4223389b6180 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -21,9 +21,9 @@ void copy_highpage(struct page *to, struct page *from)
 
 	copy_page(kto, kfrom);
 
-	if (system_supports_mte() && test_bit(PG_mte_tagged, &from->flags)) {
-		set_bit(PG_mte_tagged, &to->flags);
+	if (system_supports_mte() && page_mte_tagged(from)) {
 		mte_copy_page_tags(kto, kfrom);
+		set_page_mte_tagged(to);
 	}
 }
 EXPORT_SYMBOL(copy_highpage);
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index c33f1fad2745..d095bfa16771 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -931,5 +931,5 @@ struct page *alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma,
 void tag_clear_highpage(struct page *page)
 {
 	mte_zero_clear_page_tags(page_address(page));
-	set_bit(PG_mte_tagged, &page->flags);
+	set_page_mte_tagged(page);
 }
diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c
index 4334dec93bd4..a78c1db23c68 100644
--- a/arch/arm64/mm/mteswap.c
+++ b/arch/arm64/mm/mteswap.c
@@ -24,7 +24,7 @@ int mte_save_tags(struct page *page)
 {
 	void *tag_storage, *ret;
 
-	if (!test_bit(PG_mte_tagged, &page->flags))
+	if (!page_mte_tagged(page))
 		return 0;
 
 	tag_storage = mte_allocate_tag_storage();
-- 
2.37.1.559.g78731f0fdb-goog



* [PATCH v3 2/7] KVM: arm64: Simplify the sanitise_mte_tags() logic
  2022-08-10 19:30 [PATCH v3 0/7] KVM: arm64: permit MAP_SHARED mappings with MTE enabled Peter Collingbourne
  2022-08-10 19:30 ` [PATCH v3 1/7] arm64: mte: Fix/clarify the PG_mte_tagged semantics Peter Collingbourne
@ 2022-08-10 19:30 ` Peter Collingbourne
  2022-09-02 14:47   ` Steven Price
  2022-08-10 19:30 ` [PATCH v3 3/7] mm: Add PG_arch_3 page flag Peter Collingbourne
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 32+ messages in thread
From: Peter Collingbourne @ 2022-08-10 19:30 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm
  Cc: Catalin Marinas, Cornelia Huck, Will Deacon, Marc Zyngier,
	Evgenii Stepanov, kvm, Steven Price, Vincenzo Frascino,
	Peter Collingbourne

From: Catalin Marinas <catalin.marinas@arm.com>

Currently sanitise_mte_tags() checks if it's an online page before
attempting to sanitise the tags. Such detection should be done in the
caller via the VM_MTE_ALLOWED vma flag. Since kvm_set_spte_gfn() does
not have the vma, leave the page unmapped if not already tagged. Tag
initialisation will be done on a subsequent access fault in
user_mem_abort().
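
The resulting flow, sketched (see the diff below for the literal code):

	kvm_set_spte_gfn():
		if (kvm_has_mte(kvm) && !page_mte_tagged(page))
			return false;	/* leave unmapped, tag on fault */

	user_mem_abort():
		if ((vma->vm_flags & VM_MTE_ALLOWED) &&
		    !(vma->vm_flags & VM_SHARED))
			sanitise_mte_tags(kvm, pfn, vma_pagesize);
		else
			ret = -EFAULT;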

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Steven Price <steven.price@arm.com>
Cc: Peter Collingbourne <pcc@google.com>
---
 arch/arm64/kvm/mmu.c | 40 +++++++++++++++-------------------------
 1 file changed, 15 insertions(+), 25 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index c9012707f69c..1a3707aeb41f 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1056,23 +1056,14 @@ static int get_vma_page_shift(struct vm_area_struct *vma, unsigned long hva)
  * - mmap_lock protects between a VM faulting a page in and the VMM performing
  *   an mprotect() to add VM_MTE
  */
-static int sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
-			     unsigned long size)
+static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
+			      unsigned long size)
 {
 	unsigned long i, nr_pages = size >> PAGE_SHIFT;
-	struct page *page;
+	struct page *page = pfn_to_page(pfn);
 
 	if (!kvm_has_mte(kvm))
-		return 0;
-
-	/*
-	 * pfn_to_online_page() is used to reject ZONE_DEVICE pages
-	 * that may not support tags.
-	 */
-	page = pfn_to_online_page(pfn);
-
-	if (!page)
-		return -EFAULT;
+		return;
 
 	for (i = 0; i < nr_pages; i++, page++) {
 		if (!page_mte_tagged(page)) {
@@ -1080,8 +1071,6 @@ static int sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
 			set_page_mte_tagged(page);
 		}
 	}
-
-	return 0;
 }
 
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
@@ -1092,7 +1081,6 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	bool write_fault, writable, force_pte = false;
 	bool exec_fault;
 	bool device = false;
-	bool shared;
 	unsigned long mmu_seq;
 	struct kvm *kvm = vcpu->kvm;
 	struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
@@ -1142,8 +1130,6 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		vma_shift = get_vma_page_shift(vma, hva);
 	}
 
-	shared = (vma->vm_flags & VM_SHARED);
-
 	switch (vma_shift) {
 #ifndef __PAGETABLE_PMD_FOLDED
 	case PUD_SHIFT:
@@ -1264,12 +1250,13 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 
 	if (fault_status != FSC_PERM && !device && kvm_has_mte(kvm)) {
 		/* Check the VMM hasn't introduced a new VM_SHARED VMA */
-		if (!shared)
-			ret = sanitise_mte_tags(kvm, pfn, vma_pagesize);
-		else
+		if ((vma->vm_flags & VM_MTE_ALLOWED) &&
+		    !(vma->vm_flags & VM_SHARED)) {
+			sanitise_mte_tags(kvm, pfn, vma_pagesize);
+		} else {
 			ret = -EFAULT;
-		if (ret)
 			goto out_unlock;
+		}
 	}
 
 	if (writable)
@@ -1491,15 +1478,18 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	kvm_pfn_t pfn = pte_pfn(range->pte);
-	int ret;
 
 	if (!kvm->arch.mmu.pgt)
 		return false;
 
 	WARN_ON(range->end - range->start != 1);
 
-	ret = sanitise_mte_tags(kvm, pfn, PAGE_SIZE);
-	if (ret)
+	/*
+	 * If the page isn't tagged, defer to user_mem_abort() for sanitising
+	 * the MTE tags. The S2 pte should have been unmapped by
+	 * mmu_notifier_invalidate_range_end().
+	 */
+	if (kvm_has_mte(kvm) && !page_mte_tagged(pfn_to_page(pfn)))
 		return false;
 
 	/*
-- 
2.37.1.559.g78731f0fdb-goog



* [PATCH v3 3/7] mm: Add PG_arch_3 page flag
  2022-08-10 19:30 [PATCH v3 0/7] KVM: arm64: permit MAP_SHARED mappings with MTE enabled Peter Collingbourne
  2022-08-10 19:30 ` [PATCH v3 1/7] arm64: mte: Fix/clarify the PG_mte_tagged semantics Peter Collingbourne
  2022-08-10 19:30 ` [PATCH v3 2/7] KVM: arm64: Simplify the sanitise_mte_tags() logic Peter Collingbourne
@ 2022-08-10 19:30 ` Peter Collingbourne
  2022-08-11  7:16   ` kernel test robot
  2022-08-10 19:30 ` [PATCH v3 4/7] arm64: mte: Lock a page for MTE tag initialisation Peter Collingbourne
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 32+ messages in thread
From: Peter Collingbourne @ 2022-08-10 19:30 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm
  Cc: Peter Collingbourne, Cornelia Huck, Catalin Marinas, Will Deacon,
	Marc Zyngier, Evgenii Stepanov, kvm, Steven Price,
	Vincenzo Frascino

As with PG_arch_2, this flag is only allowed on 64-bit architectures due
to the shortage of bits available. It will be used by the arm64 MTE code
in subsequent patches.

Signed-off-by: Peter Collingbourne <pcc@google.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Steven Price <steven.price@arm.com>
[catalin.marinas@arm.com: added flag preserving in __split_huge_page_tail()]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
v3:
- fix page flag dumping

 fs/proc/page.c                    | 1 +
 include/linux/kernel-page-flags.h | 1 +
 include/linux/page-flags.h        | 1 +
 include/trace/events/mmflags.h    | 7 ++++---
 mm/huge_memory.c                  | 1 +
 tools/vm/page-types.c             | 2 ++
 6 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/fs/proc/page.c b/fs/proc/page.c
index a2873a617ae8..0129aa3cfb7a 100644
--- a/fs/proc/page.c
+++ b/fs/proc/page.c
@@ -220,6 +220,7 @@ u64 stable_page_flags(struct page *page)
 	u |= kpf_copy_bit(k, KPF_ARCH,		PG_arch_1);
 #ifdef CONFIG_64BIT
 	u |= kpf_copy_bit(k, KPF_ARCH_2,	PG_arch_2);
+	u |= kpf_copy_bit(k, KPF_ARCH_3,	PG_arch_3);
 #endif
 
 	return u;
diff --git a/include/linux/kernel-page-flags.h b/include/linux/kernel-page-flags.h
index eee1877a354e..859f4b0c1b2b 100644
--- a/include/linux/kernel-page-flags.h
+++ b/include/linux/kernel-page-flags.h
@@ -18,5 +18,6 @@
 #define KPF_UNCACHED		39
 #define KPF_SOFTDIRTY		40
 #define KPF_ARCH_2		41
+#define KPF_ARCH_3		42
 
 #endif /* LINUX_KERNEL_PAGE_FLAGS_H */
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 465ff35a8c00..ad01a3abf6c8 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -134,6 +134,7 @@ enum pageflags {
 #endif
 #ifdef CONFIG_64BIT
 	PG_arch_2,
+	PG_arch_3,
 #endif
 #ifdef CONFIG_KASAN_HW_TAGS
 	PG_skip_kasan_poison,
diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h
index 11524cda4a95..704380179986 100644
--- a/include/trace/events/mmflags.h
+++ b/include/trace/events/mmflags.h
@@ -91,9 +91,9 @@
 #endif
 
 #ifdef CONFIG_64BIT
-#define IF_HAVE_PG_ARCH_2(flag,string) ,{1UL << flag, string}
+#define IF_HAVE_PG_ARCH_2_3(flag,string) ,{1UL << flag, string}
 #else
-#define IF_HAVE_PG_ARCH_2(flag,string)
+#define IF_HAVE_PG_ARCH_2_3(flag,string)
 #endif
 
 #ifdef CONFIG_KASAN_HW_TAGS
@@ -129,7 +129,8 @@ IF_HAVE_PG_UNCACHED(PG_uncached,	"uncached"	)		\
 IF_HAVE_PG_HWPOISON(PG_hwpoison,	"hwpoison"	)		\
 IF_HAVE_PG_IDLE(PG_young,		"young"		)		\
 IF_HAVE_PG_IDLE(PG_idle,		"idle"		)		\
-IF_HAVE_PG_ARCH_2(PG_arch_2,		"arch_2"	)		\
+IF_HAVE_PG_ARCH_2_3(PG_arch_2,		"arch_2"	)		\
+IF_HAVE_PG_ARCH_2_3(PG_arch_3,		"arch_3"	)		\
 IF_HAVE_PG_SKIP_KASAN_POISON(PG_skip_kasan_poison, "skip_kasan_poison")
 
 #define show_page_flags(flags)						\
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 0611b2fd145a..262e9ca627fb 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2399,6 +2399,7 @@ static void __split_huge_page_tail(struct page *head, int tail,
 			 (1L << PG_unevictable) |
 #ifdef CONFIG_64BIT
 			 (1L << PG_arch_2) |
+			 (1L << PG_arch_3) |
 #endif
 			 (1L << PG_dirty)));
 
diff --git a/tools/vm/page-types.c b/tools/vm/page-types.c
index 381dcc00cb62..364373f5bba0 100644
--- a/tools/vm/page-types.c
+++ b/tools/vm/page-types.c
@@ -79,6 +79,7 @@
 #define KPF_UNCACHED		39
 #define KPF_SOFTDIRTY		40
 #define KPF_ARCH_2		41
+#define KPF_ARCH_3		42
 
 /* [47-] take some arbitrary free slots for expanding overloaded flags
  * not part of kernel API
@@ -138,6 +139,7 @@ static const char * const page_flag_names[] = {
 	[KPF_UNCACHED]		= "c:uncached",
 	[KPF_SOFTDIRTY]		= "f:softdirty",
 	[KPF_ARCH_2]		= "H:arch_2",
+	[KPF_ARCH_3]		= "H:arch_3",
 
 	[KPF_ANON_EXCLUSIVE]	= "d:anon_exclusive",
 	[KPF_READAHEAD]		= "I:readahead",
-- 
2.37.1.559.g78731f0fdb-goog



* [PATCH v3 4/7] arm64: mte: Lock a page for MTE tag initialisation
  2022-08-10 19:30 [PATCH v3 0/7] KVM: arm64: permit MAP_SHARED mappings with MTE enabled Peter Collingbourne
                   ` (2 preceding siblings ...)
  2022-08-10 19:30 ` [PATCH v3 3/7] mm: Add PG_arch_3 page flag Peter Collingbourne
@ 2022-08-10 19:30 ` Peter Collingbourne
  2022-09-02 14:47   ` Steven Price
  2022-08-10 19:30 ` [PATCH v3 5/7] KVM: arm64: unify the tests for VMAs in memslots when MTE is enabled Peter Collingbourne
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 32+ messages in thread
From: Peter Collingbourne @ 2022-08-10 19:30 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm
  Cc: Catalin Marinas, Cornelia Huck, Will Deacon, Marc Zyngier,
	Evgenii Stepanov, kvm, Steven Price, Vincenzo Frascino,
	Peter Collingbourne

From: Catalin Marinas <catalin.marinas@arm.com>

Initialising the tags and setting the PG_mte_tagged flag for a page can
race between multiple set_pte_at() calls on shared pages or with the
stage 2 pte being set via user_mem_abort(). Introduce a new PG_mte_lock
flag as PG_arch_3 and set it before attempting page initialisation.
Given that PG_mte_tagged is never cleared for a page, treat the flag
being set as meaning the page is unlocked, and wait on this bit with
acquire semantics if the page is locked:

- try_page_mte_tagging() - lock the page for tagging, return true if it
  can be tagged, false if already tagged. No acquire semantics if it
  returns true (PG_mte_tagged not set) as there is no serialisation with
  a previous set_page_mte_tagged().

- set_page_mte_tagged() - set PG_mte_tagged with release semantics.

The two-bit locking is based on Peter Collingbourne's idea.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Steven Price <steven.price@arm.com>
Cc: Peter Collingbourne <pcc@google.com>
---
 arch/arm64/include/asm/mte.h     | 32 ++++++++++++++++++++++++++++++++
 arch/arm64/include/asm/pgtable.h |  1 +
 arch/arm64/kernel/cpufeature.c   |  2 +-
 arch/arm64/kernel/mte.c          |  7 +++++--
 arch/arm64/kvm/guest.c           | 16 ++++++++++------
 arch/arm64/kvm/mmu.c             |  2 +-
 arch/arm64/mm/copypage.c         |  2 ++
 arch/arm64/mm/fault.c            |  2 ++
 arch/arm64/mm/mteswap.c          |  3 +++
 9 files changed, 57 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
index 46618c575eac..ea5158f6f6cb 100644
--- a/arch/arm64/include/asm/mte.h
+++ b/arch/arm64/include/asm/mte.h
@@ -36,6 +36,8 @@ void mte_free_tag_storage(char *storage);
 
 /* track which pages have valid allocation tags */
 #define PG_mte_tagged	PG_arch_2
+/* simple lock to avoid multiple threads tagging the same page */
+#define PG_mte_lock	PG_arch_3
 
 static inline void set_page_mte_tagged(struct page *page)
 {
@@ -60,6 +62,32 @@ static inline bool page_mte_tagged(struct page *page)
 	return ret;
 }
 
+/*
+ * Lock the page for tagging and return 'true' if the page can be tagged,
+ * 'false' if already tagged. PG_mte_tagged is never cleared and therefore the
+ * locking only happens once for page initialisation.
+ *
+ * The page MTE lock state:
+ *
+ *   Locked:	PG_mte_lock && !PG_mte_tagged
+ *   Unlocked:	!PG_mte_lock || PG_mte_tagged
+ *
+ * Acquire semantics only if the page is tagged (returning 'false').
+ */
+static inline bool try_page_mte_tagging(struct page *page)
+{
+	if (!test_and_set_bit(PG_mte_lock, &page->flags))
+		return true;
+
+	/*
+	 * The tags are either being initialised or have already been initialised,
+	 * wait for the PG_mte_tagged flag to be set.
+	 */
+	smp_cond_load_acquire(&page->flags, VAL & (1UL << PG_mte_tagged));
+
+	return false;
+}
+
 void mte_zero_clear_page_tags(void *addr);
 void mte_sync_tags(pte_t old_pte, pte_t pte);
 void mte_copy_page_tags(void *kto, const void *kfrom);
@@ -84,6 +112,10 @@ static inline bool page_mte_tagged(struct page *page)
 {
 	return false;
 }
+static inline bool try_page_mte_tagging(struct page *page)
+{
+	return false;
+}
 static inline void mte_zero_clear_page_tags(void *addr)
 {
 }
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 82719fa42c0e..e6b82ad1e9e6 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -1049,6 +1049,7 @@ static inline void arch_swap_invalidate_area(int type)
 #define __HAVE_ARCH_SWAP_RESTORE
 static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio)
 {
+	/* mte_restore_tags() takes the PG_mte_lock */
 	if (system_supports_mte() && mte_restore_tags(entry, &folio->page))
 		set_page_mte_tagged(&folio->page);
 }
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 562c301bbf15..33d342ddef87 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2037,7 +2037,7 @@ static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
 	 * Clear the tags in the zero page. This needs to be done via the
 	 * linear map which has the Tagged attribute.
 	 */
-	if (!page_mte_tagged(ZERO_PAGE(0))) {
+	if (try_page_mte_tagging(ZERO_PAGE(0))) {
 		mte_clear_page_tags(lm_alias(empty_zero_page));
 		set_page_mte_tagged(ZERO_PAGE(0));
 	}
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index 2287316639f3..634e089b5933 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -41,6 +41,7 @@ static void mte_sync_page_tags(struct page *page, pte_t old_pte,
 	if (check_swap && is_swap_pte(old_pte)) {
 		swp_entry_t entry = pte_to_swp_entry(old_pte);
 
+		/* mte_restore_tags() takes the PG_mte_lock */
 		if (!non_swap_entry(entry) && mte_restore_tags(entry, page)) {
 			set_page_mte_tagged(page);
 			return;
@@ -50,8 +51,10 @@ static void mte_sync_page_tags(struct page *page, pte_t old_pte,
 	if (!pte_is_tagged)
 		return;
 
-	mte_clear_page_tags(page_address(page));
-	set_page_mte_tagged(page);
+	if (try_page_mte_tagging(page)) {
+		mte_clear_page_tags(page_address(page));
+		set_page_mte_tagged(page);
+	}
 }
 
 void mte_sync_tags(pte_t old_pte, pte_t pte)
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 3b04e69006b4..059b38e7a9e8 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -1067,15 +1067,19 @@ long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
 					clear_user(tags, MTE_GRANULES_PER_PAGE);
 			kvm_release_pfn_clean(pfn);
 		} else {
+			/*
+			 * Only locking to serialise with a concurrent
+			 * set_pte_at() in the VMM but still overriding the
+			 * tags, hence ignoring the return value.
+			 */
+			try_page_mte_tagging(page);
 			num_tags = mte_copy_tags_from_user(maddr, tags,
 							MTE_GRANULES_PER_PAGE);
 
-			/*
-			 * Set the flag after checking the write
-			 * completed fully
-			 */
-			if (num_tags == MTE_GRANULES_PER_PAGE)
-				set_page_mte_tagged(page);
+			/* uaccess failed, don't leave stale tags */
+			if (num_tags != MTE_GRANULES_PER_PAGE)
+				mte_clear_page_tags(page);
+			set_page_mte_tagged(page);
 
 			kvm_release_pfn_dirty(pfn);
 		}
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 1a3707aeb41f..750a69a97994 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1066,7 +1066,7 @@ static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
 		return;
 
 	for (i = 0; i < nr_pages; i++, page++) {
-		if (!page_mte_tagged(page)) {
+		if (try_page_mte_tagging(page)) {
 			mte_clear_page_tags(page_address(page));
 			set_page_mte_tagged(page);
 		}
diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index 4223389b6180..a3fa650ceca4 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -22,6 +22,8 @@ void copy_highpage(struct page *to, struct page *from)
 	copy_page(kto, kfrom);
 
 	if (system_supports_mte() && page_mte_tagged(from)) {
+		/* It's a new page, shouldn't have been tagged yet */
+		WARN_ON_ONCE(!try_page_mte_tagging(to));
 		mte_copy_page_tags(kto, kfrom);
 		set_page_mte_tagged(to);
 	}
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index d095bfa16771..6407a29cab0d 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -930,6 +930,8 @@ struct page *alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma,
 
 void tag_clear_highpage(struct page *page)
 {
+	/* Newly allocated page, shouldn't have been tagged yet */
+	WARN_ON_ONCE(!try_page_mte_tagging(page));
 	mte_zero_clear_page_tags(page_address(page));
 	set_page_mte_tagged(page);
 }
diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c
index a78c1db23c68..cd5ad0936e16 100644
--- a/arch/arm64/mm/mteswap.c
+++ b/arch/arm64/mm/mteswap.c
@@ -53,6 +53,9 @@ bool mte_restore_tags(swp_entry_t entry, struct page *page)
 	if (!tags)
 		return false;
 
+	/* racing tag restoring? */
+	if (!try_page_mte_tagging(page))
+		return false;
 	mte_restore_page_tags(page_address(page), tags);
 
 	return true;
-- 
2.37.1.559.g78731f0fdb-goog



* [PATCH v3 5/7] KVM: arm64: unify the tests for VMAs in memslots when MTE is enabled
  2022-08-10 19:30 [PATCH v3 0/7] KVM: arm64: permit MAP_SHARED mappings with MTE enabled Peter Collingbourne
                   ` (3 preceding siblings ...)
  2022-08-10 19:30 ` [PATCH v3 4/7] arm64: mte: Lock a page for MTE tag initialisation Peter Collingbourne
@ 2022-08-10 19:30 ` Peter Collingbourne
  2022-09-02 13:41   ` Catalin Marinas
  2022-09-02 14:47   ` Steven Price
  2022-08-10 19:30 ` [PATCH v3 6/7] KVM: arm64: permit all VM_MTE_ALLOWED mappings with MTE enabled Peter Collingbourne
                   ` (2 subsequent siblings)
  7 siblings, 2 replies; 32+ messages in thread
From: Peter Collingbourne @ 2022-08-10 19:30 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm
  Cc: Peter Collingbourne, Cornelia Huck, Catalin Marinas, Will Deacon,
	Marc Zyngier, Evgenii Stepanov, kvm, Steven Price,
	Vincenzo Frascino

Previously we allowed creating a memslot containing a private mapping that
was not VM_MTE_ALLOWED, but would later reject KVM_RUN with -EFAULT. Now
we reject the memory region at memslot creation time.

Since this is a minor tweak to the ABI (a VMM that created one of
these memslots would fail later anyway), no VMM to my knowledge has
MTE support yet, and the hardware with the necessary features is not
generally available, we can probably make this ABI change at this point.
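
From a VMM's perspective the failure simply moves earlier (a
hypothetical ioctl sequence, error handling elided):

	struct kvm_userspace_memory_region reg = {
		/* userspace_addr in a VMA that is VM_SHARED or lacks
		 * VM_MTE_ALLOWED */
	};

	/* before: succeeded here, KVM_RUN later failed with -EFAULT */
	/* after:  fails here with -EINVAL at memslot creation time */
	int ret = ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &reg);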

Signed-off-by: Peter Collingbourne <pcc@google.com>
---
 arch/arm64/kvm/mmu.c | 25 ++++++++++++++++---------
 1 file changed, 16 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 750a69a97994..d54be80e31dd 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1073,6 +1073,19 @@ static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
 	}
 }
 
+static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
+{
+	/*
+	 * VM_SHARED mappings are not allowed with MTE to avoid races
+	 * when updating the PG_mte_tagged page flag, see
+	 * sanitise_mte_tags for more details.
+	 */
+	if (vma->vm_flags & VM_SHARED)
+		return false;
+
+	return vma->vm_flags & VM_MTE_ALLOWED;
+}
+
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			  struct kvm_memory_slot *memslot, unsigned long hva,
 			  unsigned long fault_status)
@@ -1249,9 +1262,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	}
 
 	if (fault_status != FSC_PERM && !device && kvm_has_mte(kvm)) {
-		/* Check the VMM hasn't introduced a new VM_SHARED VMA */
-		if ((vma->vm_flags & VM_MTE_ALLOWED) &&
-		    !(vma->vm_flags & VM_SHARED)) {
+		/* Check the VMM hasn't introduced a new disallowed VMA */
+		if (kvm_vma_mte_allowed(vma)) {
 			sanitise_mte_tags(kvm, pfn, vma_pagesize);
 		} else {
 			ret = -EFAULT;
@@ -1695,12 +1707,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 		if (!vma)
 			break;
 
-		/*
-		 * VM_SHARED mappings are not allowed with MTE to avoid races
-		 * when updating the PG_mte_tagged page flag, see
-		 * sanitise_mte_tags for more details.
-		 */
-		if (kvm_has_mte(kvm) && vma->vm_flags & VM_SHARED) {
+		if (kvm_has_mte(kvm) && !kvm_vma_mte_allowed(vma)) {
 			ret = -EINVAL;
 			break;
 		}
-- 
2.37.1.559.g78731f0fdb-goog



* [PATCH v3 6/7] KVM: arm64: permit all VM_MTE_ALLOWED mappings with MTE enabled
  2022-08-10 19:30 [PATCH v3 0/7] KVM: arm64: permit MAP_SHARED mappings with MTE enabled Peter Collingbourne
                   ` (4 preceding siblings ...)
  2022-08-10 19:30 ` [PATCH v3 5/7] KVM: arm64: unify the tests for VMAs in memslots when MTE is enabled Peter Collingbourne
@ 2022-08-10 19:30 ` Peter Collingbourne
  2022-09-02 13:45   ` Catalin Marinas
  2022-09-12 16:23   ` Marc Zyngier
  2022-08-10 19:30 ` [PATCH v3 7/7] Documentation: document the ABI changes for KVM_CAP_ARM_MTE Peter Collingbourne
  2022-09-02 14:05 ` [PATCH v3 0/7] KVM: arm64: permit MAP_SHARED mappings with MTE enabled Catalin Marinas
  7 siblings, 2 replies; 32+ messages in thread
From: Peter Collingbourne @ 2022-08-10 19:30 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm
  Cc: Peter Collingbourne, Cornelia Huck, Catalin Marinas, Will Deacon,
	Marc Zyngier, Evgenii Stepanov, kvm, Steven Price,
	Vincenzo Frascino

Certain VMMs such as crosvm have features (e.g. sandboxing) that depend
on being able to map guest memory as MAP_SHARED. The current restriction
on sharing MAP_SHARED pages with the guest is preventing the use of
those features with MTE. Now that the races between tasks concurrently
clearing tags on the same page have been fixed, remove this restriction.

Signed-off-by: Peter Collingbourne <pcc@google.com>
---
 arch/arm64/kvm/mmu.c | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index d54be80e31dd..fc65dc20655d 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1075,14 +1075,6 @@ static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
 
 static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
 {
-	/*
-	 * VM_SHARED mappings are not allowed with MTE to avoid races
-	 * when updating the PG_mte_tagged page flag, see
-	 * sanitise_mte_tags for more details.
-	 */
-	if (vma->vm_flags & VM_SHARED)
-		return false;
-
 	return vma->vm_flags & VM_MTE_ALLOWED;
 }
 
-- 
2.37.1.559.g78731f0fdb-goog



* [PATCH v3 7/7] Documentation: document the ABI changes for KVM_CAP_ARM_MTE
  2022-08-10 19:30 [PATCH v3 0/7] KVM: arm64: permit MAP_SHARED mappings with MTE enabled Peter Collingbourne
                   ` (5 preceding siblings ...)
  2022-08-10 19:30 ` [PATCH v3 6/7] KVM: arm64: permit all VM_MTE_ALLOWED mappings with MTE enabled Peter Collingbourne
@ 2022-08-10 19:30 ` Peter Collingbourne
  2022-09-02 13:49   ` Catalin Marinas
  2022-09-02 14:05 ` [PATCH v3 0/7] KVM: arm64: permit MAP_SHARED mappings with MTE enabled Catalin Marinas
  7 siblings, 1 reply; 32+ messages in thread
From: Peter Collingbourne @ 2022-08-10 19:30 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm
  Cc: Peter Collingbourne, Cornelia Huck, Catalin Marinas, Will Deacon,
	Marc Zyngier, Evgenii Stepanov, kvm, Steven Price,
	Vincenzo Frascino

Document both the restriction on VM_MTE_ALLOWED mappings and
the relaxation for shared mappings.
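
For example, under the relaxed ABI a VMM could back guest memory with a
shared memfd mapping (an illustrative sketch; names and sizes made up,
error handling elided):

	int fd = memfd_create("guest-ram", 0);
	ftruncate(fd, guest_size);
	void *mem = mmap(NULL, guest_size, PROT_READ | PROT_WRITE,
			 MAP_SHARED, fd, 0);
	/* mem can now back a memslot of a KVM_CAP_ARM_MTE enabled VM */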

Signed-off-by: Peter Collingbourne <pcc@google.com>
---
 Documentation/virt/kvm/api.rst | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 9788b19f9ff7..30e0c35828ef 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -7486,8 +7486,9 @@ hibernation of the host; however the VMM needs to manually save/restore the
 tags as appropriate if the VM is migrated.
 
 When this capability is enabled all memory in memslots must be mapped as
-not-shareable (no MAP_SHARED), attempts to create a memslot with a
-MAP_SHARED mmap will result in an -EINVAL return.
+``MAP_ANONYMOUS`` or with a RAM-based file mapping (``tmpfs``, ``memfd``),
+attempts to create a memslot with an invalid mmap will result in an
+-EINVAL return.
 
 When enabled the VMM may make use of the ``KVM_ARM_MTE_COPY_TAGS`` ioctl to
 perform a bulk copy of tags to/from the guest.
-- 
2.37.1.559.g78731f0fdb-goog



* Re: [PATCH v3 3/7] mm: Add PG_arch_3 page flag
  2022-08-10 19:30 ` [PATCH v3 3/7] mm: Add PG_arch_3 page flag Peter Collingbourne
@ 2022-08-11  7:16   ` kernel test robot
  2022-09-01 17:59     ` Catalin Marinas
  0 siblings, 1 reply; 32+ messages in thread
From: kernel test robot @ 2022-08-11  7:16 UTC (permalink / raw)
  To: Peter Collingbourne, linux-arm-kernel, kvmarm
  Cc: kbuild-all, Peter Collingbourne, Cornelia Huck, Catalin Marinas,
	Will Deacon, Marc Zyngier, Evgenii Stepanov, kvm, Steven Price,
	Vincenzo Frascino

Hi Peter,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on arm64/for-next/core]
[also build test WARNING on linus/master next-20220811]
[cannot apply to kvmarm/next arm/for-next soc/for-next xilinx-xlnx/master v5.19]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Peter-Collingbourne/KVM-arm64-permit-MAP_SHARED-mappings-with-MTE-enabled/20220811-033310
base:   https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-next/core
config: loongarch-defconfig (https://download.01.org/0day-ci/archive/20220811/202208111500.62e0Bl2l-lkp@intel.com/config)
compiler: loongarch64-linux-gcc (GCC) 12.1.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/1a400517d8428df0ec9f86f8d303b2227ee9702f
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Peter-Collingbourne/KVM-arm64-permit-MAP_SHARED-mappings-with-MTE-enabled/20220811-033310
        git checkout 1a400517d8428df0ec9f86f8d303b2227ee9702f
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=loongarch SHELL=/bin/bash

If you fix the issue, kindly add the following tag where applicable
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

>> mm/memory.c:92:2: warning: #warning Unfortunate NUMA and NUMA Balancing config, growing page-frame for last_cpupid. [-Wcpp]
      92 | #warning Unfortunate NUMA and NUMA Balancing config, growing page-frame for last_cpupid.
         |  ^~~~~~~


vim +92 mm/memory.c

42b7772812d15b Jan Beulich    2008-07-23  90  
af27d9403f5b80 Arnd Bergmann  2018-02-16  91  #if defined(LAST_CPUPID_NOT_IN_PAGE_FLAGS) && !defined(CONFIG_COMPILE_TEST)
90572890d20252 Peter Zijlstra 2013-10-07 @92  #warning Unfortunate NUMA and NUMA Balancing config, growing page-frame for last_cpupid.
75980e97daccfc Peter Zijlstra 2013-02-22  93  #endif
75980e97daccfc Peter Zijlstra 2013-02-22  94  

-- 
0-DAY CI Kernel Test Service
https://01.org/lkp


* Re: [PATCH v3 1/7] arm64: mte: Fix/clarify the PG_mte_tagged semantics
  2022-08-10 19:30 ` [PATCH v3 1/7] arm64: mte: Fix/clarify the PG_mte_tagged semantics Peter Collingbourne
@ 2022-09-01 15:49   ` Catalin Marinas
  2022-09-02 10:26   ` Cornelia Huck
  2022-09-02 14:47   ` Steven Price
  2 siblings, 0 replies; 32+ messages in thread
From: Catalin Marinas @ 2022-09-01 15:49 UTC (permalink / raw)
  To: Peter Collingbourne
  Cc: linux-arm-kernel, kvmarm, Cornelia Huck, Will Deacon,
	Marc Zyngier, Evgenii Stepanov, kvm, Steven Price,
	Vincenzo Frascino

On Wed, Aug 10, 2022 at 12:30:27PM -0700, Peter Collingbourne wrote:
> From: Catalin Marinas <catalin.marinas@arm.com>
> 
> Currently the PG_mte_tagged page flag mostly means the page contains
> valid tags and it should be set after the tags have been cleared or
> restored. However, in mte_sync_tags() it is set before setting the tags
> to avoid, in theory, a race with concurrent mprotect(PROT_MTE) for
> shared pages. However, a concurrent mprotect(PROT_MTE) with a copy on
> write in another thread can cause the new page to have stale tags.
> Similarly, tag reading via ptrace() can read stale tags if the
> PG_mte_tagged flag is set before actually clearing/restoring the tags.
> 
> Fix the PG_mte_tagged semantics so that it is only set after the tags
> have been cleared or restored. This is safe for swap restoring into a
> MAP_SHARED or CoW page since the core code takes the page lock. Add two
> functions to test and set the PG_mte_tagged flag with acquire and
> release semantics. The downside is that concurrent mprotect(PROT_MTE) on
> a MAP_SHARED page may cause tag loss. This is already the case for KVM
> guests if a VMM changes the page protection while the guest triggers a
> user_mem_abort().
> 
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Marc Zyngier <maz@kernel.org>
> Cc: Steven Price <steven.price@arm.com>
> Cc: Peter Collingbourne <pcc@google.com>
> ---
> v3:
> - fix build with CONFIG_ARM64_MTE disabled

When you post someone else's patches (thanks for updating them BTW),
please add your Signed-off-by line. You should also add a note in the
SoB block about the changes you made, something like:

[pcc@google.com: fix build with CONFIG_ARM64_MTE disabled]
Signed-off-by: your name/address

-- 
Catalin


* Re: [PATCH v3 3/7] mm: Add PG_arch_3 page flag
  2022-08-11  7:16   ` kernel test robot
@ 2022-09-01 17:59     ` Catalin Marinas
  2022-09-05 17:01       ` Catalin Marinas
  0 siblings, 1 reply; 32+ messages in thread
From: Catalin Marinas @ 2022-09-01 17:59 UTC (permalink / raw)
  To: kernel test robot
  Cc: Peter Collingbourne, linux-arm-kernel, kvmarm, kbuild-all,
	Cornelia Huck, Will Deacon, Marc Zyngier, Evgenii Stepanov, kvm,
	Steven Price, Vincenzo Frascino

On Thu, Aug 11, 2022 at 03:16:08PM +0800, kernel test robot wrote:
> Thank you for the patch! Perhaps something to improve:
> 
> [auto build test WARNING on arm64/for-next/core]
> [also build test WARNING on linus/master next-20220811]
> [cannot apply to kvmarm/next arm/for-next soc/for-next xilinx-xlnx/master v5.19]
> [If your patch is applied to the wrong git tree, kindly drop us a note.
> And when submitting patch, we suggest to use '--base' as documented in
> https://git-scm.com/docs/git-format-patch#_base_tree_information]
> 
> url:    https://github.com/intel-lab-lkp/linux/commits/Peter-Collingbourne/KVM-arm64-permit-MAP_SHARED-mappings-with-MTE-enabled/20220811-033310
> base:   https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-next/core
> config: loongarch-defconfig (https://download.01.org/0day-ci/archive/20220811/202208111500.62e0Bl2l-lkp@intel.com/config)
> compiler: loongarch64-linux-gcc (GCC) 12.1.0
> reproduce (this is a W=1 build):
>         wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
>         chmod +x ~/bin/make.cross
>         # https://github.com/intel-lab-lkp/linux/commit/1a400517d8428df0ec9f86f8d303b2227ee9702f
>         git remote add linux-review https://github.com/intel-lab-lkp/linux
>         git fetch --no-tags linux-review Peter-Collingbourne/KVM-arm64-permit-MAP_SHARED-mappings-with-MTE-enabled/20220811-033310
>         git checkout 1a400517d8428df0ec9f86f8d303b2227ee9702f
>         # save the config file
>         mkdir build_dir && cp config build_dir/.config
>         COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=loongarch SHELL=/bin/bash
> 
> If you fix the issue, kindly add the following tag where applicable
> Reported-by: kernel test robot <lkp@intel.com>
> 
> All warnings (new ones prefixed by >>):
> 
> >> mm/memory.c:92:2: warning: #warning Unfortunate NUMA and NUMA Balancing config, growing page-frame for last_cpupid. [-Wcpp]
>       92 | #warning Unfortunate NUMA and NUMA Balancing config, growing page-frame for last_cpupid.
>          |  ^~~~~~~
> 
> 
> vim +92 mm/memory.c
> 
> 42b7772812d15b Jan Beulich    2008-07-23  90  
> af27d9403f5b80 Arnd Bergmann  2018-02-16  91  #if defined(LAST_CPUPID_NOT_IN_PAGE_FLAGS) && !defined(CONFIG_COMPILE_TEST)
> 90572890d20252 Peter Zijlstra 2013-10-07 @92  #warning Unfortunate NUMA and NUMA Balancing config, growing page-frame for last_cpupid.
> 75980e97daccfc Peter Zijlstra 2013-02-22  93  #endif
> 75980e97daccfc Peter Zijlstra 2013-02-22  94  

It looks like with CONFIG_NUMA_BALANCING=y on loongarch we run out of
spare bits in page->flags to fit last_cpupid. The reason we don't see it
on arm64 is that we select SPARSEMEM_VMEMMAP and SECTIONS_WIDTH becomes
0. On loongarch SECTIONS_WIDTH takes 19 bits (48 - 29) in page->flags.
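
As a rough worked example of the bit budget (using the numbers above):

	loongarch defconfig:       SECTIONS_WIDTH = 48 - 29 = 19 bits
	arm64 (SPARSEMEM_VMEMMAP): SECTIONS_WIDTH = 0 bits

so on loongarch the extra PG_arch_3 bit is enough to push SECTIONS +
NODES + ZONES + last_cpupid + the core page flags past the 64 bits of
page->flags, at which point last_cpupid has to move into a separate
struct page field, hence the "growing page-frame" warning.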

I think instead of always defining PG_arch_{2,3} if CONFIG_64BIT, we
could add a CONFIG_ARCH_WANTS_PG_ARCH_23 option and only select it on
arm64 for the time being.
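
A minimal sketch of that option (hypothetical, the name is only a
proposal at this point):

	config ARCH_WANTS_PG_ARCH_23
		bool

with the flags in include/linux/page-flags.h made conditional on it
rather than on CONFIG_64BIT:

	#ifdef CONFIG_ARCH_WANTS_PG_ARCH_23
		PG_arch_2,
		PG_arch_3,
	#endif

and arm64 doing "select ARCH_WANTS_PG_ARCH_23".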

-- 
Catalin


* Re: [PATCH v3 1/7] arm64: mte: Fix/clarify the PG_mte_tagged semantics
  2022-08-10 19:30 ` [PATCH v3 1/7] arm64: mte: Fix/clarify the PG_mte_tagged semantics Peter Collingbourne
  2022-09-01 15:49   ` Catalin Marinas
@ 2022-09-02 10:26   ` Cornelia Huck
  2022-09-02 14:47   ` Steven Price
  2 siblings, 0 replies; 32+ messages in thread
From: Cornelia Huck @ 2022-09-02 10:26 UTC (permalink / raw)
  To: Peter Collingbourne, linux-arm-kernel, kvmarm
  Cc: Catalin Marinas, Will Deacon, Marc Zyngier, Evgenii Stepanov,
	kvm, Steven Price, Vincenzo Frascino, Peter Collingbourne

On Wed, Aug 10 2022, Peter Collingbourne <pcc@google.com> wrote:

> From: Catalin Marinas <catalin.marinas@arm.com>
>
> Currently the PG_mte_tagged page flag mostly means the page contains
> valid tags and it should be set after the tags have been cleared or
> restored. However, in mte_sync_tags() it is set before setting the tags
> to avoid, in theory, a race with concurrent mprotect(PROT_MTE) for
> shared pages. However, a concurrent mprotect(PROT_MTE) with a copy on
> write in another thread can cause the new page to have stale tags.
> Similarly, tag reading via ptrace() can read stale tags if the
> PG_mte_tagged flag is set before actually clearing/restoring the tags.
>
> Fix the PG_mte_tagged semantics so that it is only set after the tags
> have been cleared or restored. This is safe for swap restoring into a
> MAP_SHARED or CoW page since the core code takes the page lock. Add two
> functions to test and set the PG_mte_tagged flag with acquire and
> release semantics. The downside is that concurrent mprotect(PROT_MTE) on
> a MAP_SHARED page may cause tag loss. This is already the case for KVM
> guests if a VMM changes the page protection while the guest triggers a
> user_mem_abort().
>
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Marc Zyngier <maz@kernel.org>
> Cc: Steven Price <steven.price@arm.com>
> Cc: Peter Collingbourne <pcc@google.com>
> ---
> v3:
> - fix build with CONFIG_ARM64_MTE disabled
>
>  arch/arm64/include/asm/mte.h     | 30 ++++++++++++++++++++++++++++++
>  arch/arm64/include/asm/pgtable.h |  2 +-
>  arch/arm64/kernel/cpufeature.c   |  4 +++-
>  arch/arm64/kernel/elfcore.c      |  2 +-
>  arch/arm64/kernel/hibernate.c    |  2 +-
>  arch/arm64/kernel/mte.c          | 12 +++++++-----
>  arch/arm64/kvm/guest.c           |  4 ++--
>  arch/arm64/kvm/mmu.c             |  4 ++--
>  arch/arm64/mm/copypage.c         |  4 ++--
>  arch/arm64/mm/fault.c            |  2 +-
>  arch/arm64/mm/mteswap.c          |  2 +-
>  11 files changed, 51 insertions(+), 17 deletions(-)

Reviewed-by: Cornelia Huck <cohuck@redhat.com>



* Re: [PATCH v3 5/7] KVM: arm64: unify the tests for VMAs in memslots when MTE is enabled
  2022-08-10 19:30 ` [PATCH v3 5/7] KVM: arm64: unify the tests for VMAs in memslots when MTE is enabled Peter Collingbourne
@ 2022-09-02 13:41   ` Catalin Marinas
  2022-09-02 14:47   ` Steven Price
  1 sibling, 0 replies; 32+ messages in thread
From: Catalin Marinas @ 2022-09-02 13:41 UTC (permalink / raw)
  To: Peter Collingbourne
  Cc: linux-arm-kernel, kvmarm, Cornelia Huck, Will Deacon,
	Marc Zyngier, Evgenii Stepanov, kvm, Steven Price,
	Vincenzo Frascino

On Wed, Aug 10, 2022 at 12:30:31PM -0700, Peter Collingbourne wrote:
> Previously we allowed creating a memslot containing a private mapping that
> was not VM_MTE_ALLOWED, but would later reject KVM_RUN with -EFAULT. Now
> we reject the memory region at memslot creation time.
> 
> Since this is a minor tweak to the ABI (a VMM that created one of
> these memslots would fail later anyway), no VMM to my knowledge has
> MTE support yet, and the hardware with the necessary features is not
> generally available, we can probably make this ABI change at this point.

I don't think that's a noticeable ABI change. As you said, such VMs
would fail later anyway when trying to access such page.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>


* Re: [PATCH v3 6/7] KVM: arm64: permit all VM_MTE_ALLOWED mappings with MTE enabled
  2022-08-10 19:30 ` [PATCH v3 6/7] KVM: arm64: permit all VM_MTE_ALLOWED mappings with MTE enabled Peter Collingbourne
@ 2022-09-02 13:45   ` Catalin Marinas
  2022-09-02 14:47     ` Steven Price
  2022-09-12 16:23   ` Marc Zyngier
  1 sibling, 1 reply; 32+ messages in thread
From: Catalin Marinas @ 2022-09-02 13:45 UTC (permalink / raw)
  To: Peter Collingbourne
  Cc: linux-arm-kernel, kvmarm, Cornelia Huck, Will Deacon,
	Marc Zyngier, Evgenii Stepanov, kvm, Steven Price,
	Vincenzo Frascino

On Wed, Aug 10, 2022 at 12:30:32PM -0700, Peter Collingbourne wrote:
> Certain VMMs such as crosvm have features (e.g. sandboxing) that depend
> on being able to map guest memory as MAP_SHARED. The current restriction
> on sharing MAP_SHARED pages with the guest is preventing the use of
> those features with MTE. Now that the races between tasks concurrently
> clearing tags on the same page have been fixed, remove this restriction.
> 
> Signed-off-by: Peter Collingbourne <pcc@google.com>
> ---
>  arch/arm64/kvm/mmu.c | 8 --------
>  1 file changed, 8 deletions(-)
> 
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index d54be80e31dd..fc65dc20655d 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1075,14 +1075,6 @@ static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
>  
>  static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
>  {
> -	/*
> -	 * VM_SHARED mappings are not allowed with MTE to avoid races
> -	 * when updating the PG_mte_tagged page flag, see
> -	 * sanitise_mte_tags for more details.
> -	 */
> -	if (vma->vm_flags & VM_SHARED)
> -		return false;

I think this is fine with the locking in place (BTW, it may be worth
mentioning in the commit message that it's a relaxation of the ABI). I'd
like Steven to have a look as well when he gets the time, in case we
missed anything on the KVM+MTE side.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>


* Re: [PATCH v3 7/7] Documentation: document the ABI changes for KVM_CAP_ARM_MTE
  2022-08-10 19:30 ` [PATCH v3 7/7] Documentation: document the ABI changes for KVM_CAP_ARM_MTE Peter Collingbourne
@ 2022-09-02 13:49   ` Catalin Marinas
  0 siblings, 0 replies; 32+ messages in thread
From: Catalin Marinas @ 2022-09-02 13:49 UTC (permalink / raw)
  To: Peter Collingbourne
  Cc: linux-arm-kernel, kvmarm, Cornelia Huck, Will Deacon,
	Marc Zyngier, Evgenii Stepanov, kvm, Steven Price,
	Vincenzo Frascino

On Wed, Aug 10, 2022 at 12:30:33PM -0700, Peter Collingbourne wrote:
> Document both the restriction on VM_MTE_ALLOWED mappings and
> the relaxation for shared mappings.
> 
> Signed-off-by: Peter Collingbourne <pcc@google.com>

Acked-by: Catalin Marinas <catalin.marinas@arm.com>


* Re: [PATCH v3 0/7] KVM: arm64: permit MAP_SHARED mappings with MTE enabled
  2022-08-10 19:30 [PATCH v3 0/7] KVM: arm64: permit MAP_SHARED mappings with MTE enabled Peter Collingbourne
                   ` (6 preceding siblings ...)
  2022-08-10 19:30 ` [PATCH v3 7/7] Documentation: document the ABI changes for KVM_CAP_ARM_MTE Peter Collingbourne
@ 2022-09-02 14:05 ` Catalin Marinas
  7 siblings, 0 replies; 32+ messages in thread
From: Catalin Marinas @ 2022-09-02 14:05 UTC (permalink / raw)
  To: Peter Collingbourne
  Cc: linux-arm-kernel, kvmarm, Cornelia Huck, Will Deacon,
	Marc Zyngier, Evgenii Stepanov, kvm, Steven Price,
	Vincenzo Frascino

On Wed, Aug 10, 2022 at 12:30:26PM -0700, Peter Collingbourne wrote:
> I rebased Catalin's series onto -next, addressed the issues that I
> identified in the review and added the proposed userspace enablement
> patches after the series.
> 
> [1] https://lore.kernel.org/all/20220705142619.4135905-1-catalin.marinas@arm.com/
> 
> Catalin Marinas (3):
>   arm64: mte: Fix/clarify the PG_mte_tagged semantics
>   KVM: arm64: Simplify the sanitise_mte_tags() logic
>   arm64: mte: Lock a page for MTE tag initialisation
> 
> Peter Collingbourne (4):
>   mm: Add PG_arch_3 page flag

BTW, I rebased these four patches on top of 6.0-rc3 and hopefully
addressed the review comments. I pushed them here:

git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux devel/mte-pg-flags

You may have rebased them already but just in case you haven't, feel
free to pick them up from the above branch.

-- 
Catalin


* Re: [PATCH v3 1/7] arm64: mte: Fix/clarify the PG_mte_tagged semantics
  2022-08-10 19:30 ` [PATCH v3 1/7] arm64: mte: Fix/clarify the PG_mte_tagged semantics Peter Collingbourne
  2022-09-01 15:49   ` Catalin Marinas
  2022-09-02 10:26   ` Cornelia Huck
@ 2022-09-02 14:47   ` Steven Price
  2 siblings, 0 replies; 32+ messages in thread
From: Steven Price @ 2022-09-02 14:47 UTC (permalink / raw)
  To: Peter Collingbourne, linux-arm-kernel, kvmarm
  Cc: Catalin Marinas, Cornelia Huck, Will Deacon, Marc Zyngier,
	Evgenii Stepanov, kvm, Vincenzo Frascino

On 10/08/2022 20:30, Peter Collingbourne wrote:
> From: Catalin Marinas <catalin.marinas@arm.com>
> 
> Currently the PG_mte_tagged page flag mostly means the page contains
> valid tags and it should be set after the tags have been cleared or
> restored. However, in mte_sync_tags() it is set before setting the tags
> to avoid, in theory, a race with concurrent mprotect(PROT_MTE) for
> shared pages. However, a concurrent mprotect(PROT_MTE) with a copy on
> write in another thread can cause the new page to have stale tags.
> Similarly, tag reading via ptrace() can read stale tags if the
> PG_mte_tagged flag is set before actually clearing/restoring the tags.
> 
> Fix the PG_mte_tagged semantics so that it is only set after the tags
> have been cleared or restored. This is safe for swap restoring into a
> MAP_SHARED or CoW page since the core code takes the page lock. Add two
> functions to test and set the PG_mte_tagged flag with acquire and
> release semantics. The downside is that concurrent mprotect(PROT_MTE) on
> a MAP_SHARED page may cause tag loss. This is already the case for KVM
> guests if a VMM changes the page protection while the guest triggers a
> user_mem_abort().
> 
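
(For illustration, the acquire/release pairing described here boils down
to the following sketch; it is not part of the patch. It uses the helpers
added below, and tag_storage stands for a buffer obtained from
mte_allocate_tag_storage(), as in the swap-out path.)

	/* Writer: initialise the tags first, then publish the flag. */
	mte_clear_page_tags(page_address(page));
	set_page_mte_tagged(page);	/* smp_wmb() before set_bit() */

	/* Reader: only touch the tags once the flag is observed set. */
	if (page_mte_tagged(page))	/* test_bit(); smp_rmb() if set */
		mte_save_page_tags(page_address(page), tag_storage);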
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Marc Zyngier <maz@kernel.org>
> Cc: Steven Price <steven.price@arm.com>
> Cc: Peter Collingbourne <pcc@google.com>

Reviewed-by: Steven Price <steven.price@arm.com>

> ---
> v3:
> - fix build with CONFIG_ARM64_MTE disabled
> 
>  arch/arm64/include/asm/mte.h     | 30 ++++++++++++++++++++++++++++++
>  arch/arm64/include/asm/pgtable.h |  2 +-
>  arch/arm64/kernel/cpufeature.c   |  4 +++-
>  arch/arm64/kernel/elfcore.c      |  2 +-
>  arch/arm64/kernel/hibernate.c    |  2 +-
>  arch/arm64/kernel/mte.c          | 12 +++++++-----
>  arch/arm64/kvm/guest.c           |  4 ++--
>  arch/arm64/kvm/mmu.c             |  4 ++--
>  arch/arm64/mm/copypage.c         |  4 ++--
>  arch/arm64/mm/fault.c            |  2 +-
>  arch/arm64/mm/mteswap.c          |  2 +-
>  11 files changed, 51 insertions(+), 17 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
> index aa523591a44e..46618c575eac 100644
> --- a/arch/arm64/include/asm/mte.h
> +++ b/arch/arm64/include/asm/mte.h
> @@ -37,6 +37,29 @@ void mte_free_tag_storage(char *storage);
>  /* track which pages have valid allocation tags */
>  #define PG_mte_tagged	PG_arch_2
>  
> +static inline void set_page_mte_tagged(struct page *page)
> +{
> +	/*
> +	 * Ensure that the tags written prior to this function are visible
> +	 * before the page flags update.
> +	 */
> +	smp_wmb();
> +	set_bit(PG_mte_tagged, &page->flags);
> +}
> +
> +static inline bool page_mte_tagged(struct page *page)
> +{
> +	bool ret = test_bit(PG_mte_tagged, &page->flags);
> +
> +	/*
> +	 * If the page is tagged, ensure ordering with a likely subsequent
> +	 * read of the tags.
> +	 */
> +	if (ret)
> +		smp_rmb();
> +	return ret;
> +}
> +
>  void mte_zero_clear_page_tags(void *addr);
>  void mte_sync_tags(pte_t old_pte, pte_t pte);
>  void mte_copy_page_tags(void *kto, const void *kfrom);
> @@ -54,6 +77,13 @@ size_t mte_probe_user_range(const char __user *uaddr, size_t size);
>  /* unused if !CONFIG_ARM64_MTE, silence the compiler */
>  #define PG_mte_tagged	0
>  
> +static inline void set_page_mte_tagged(struct page *page)
> +{
> +}
> +static inline bool page_mte_tagged(struct page *page)
> +{
> +	return false;
> +}
>  static inline void mte_zero_clear_page_tags(void *addr)
>  {
>  }
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index b5df82aa99e6..82719fa42c0e 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -1050,7 +1050,7 @@ static inline void arch_swap_invalidate_area(int type)
>  static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio)
>  {
>  	if (system_supports_mte() && mte_restore_tags(entry, &folio->page))
> -		set_bit(PG_mte_tagged, &folio->flags);
> +		set_page_mte_tagged(&folio->page);
>  }
>  
>  #endif /* CONFIG_ARM64_MTE */
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 907401e4fffb..562c301bbf15 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -2037,8 +2037,10 @@ static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
>  	 * Clear the tags in the zero page. This needs to be done via the
>  	 * linear map which has the Tagged attribute.
>  	 */
> -	if (!test_and_set_bit(PG_mte_tagged, &ZERO_PAGE(0)->flags))
> +	if (!page_mte_tagged(ZERO_PAGE(0))) {
>  		mte_clear_page_tags(lm_alias(empty_zero_page));
> +		set_page_mte_tagged(ZERO_PAGE(0));
> +	}
>  
>  	kasan_init_hw_tags_cpu();
>  }
> diff --git a/arch/arm64/kernel/elfcore.c b/arch/arm64/kernel/elfcore.c
> index 98d67444a5b6..f91bb1572d22 100644
> --- a/arch/arm64/kernel/elfcore.c
> +++ b/arch/arm64/kernel/elfcore.c
> @@ -47,7 +47,7 @@ static int mte_dump_tag_range(struct coredump_params *cprm,
>  		 * Pages mapped in user space as !pte_access_permitted() (e.g.
>  		 * PROT_EXEC only) may not have the PG_mte_tagged flag set.
>  		 */
> -		if (!test_bit(PG_mte_tagged, &page->flags)) {
> +		if (!page_mte_tagged(page)) {
>  			put_page(page);
>  			dump_skip(cprm, MTE_PAGE_TAG_STORAGE);
>  			continue;
> diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
> index af5df48ba915..788597a6b6a2 100644
> --- a/arch/arm64/kernel/hibernate.c
> +++ b/arch/arm64/kernel/hibernate.c
> @@ -271,7 +271,7 @@ static int swsusp_mte_save_tags(void)
>  			if (!page)
>  				continue;
>  
> -			if (!test_bit(PG_mte_tagged, &page->flags))
> +			if (!page_mte_tagged(page))
>  				continue;
>  
>  			ret = save_tags(page, pfn);
> diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> index b2b730233274..2287316639f3 100644
> --- a/arch/arm64/kernel/mte.c
> +++ b/arch/arm64/kernel/mte.c
> @@ -41,14 +41,17 @@ static void mte_sync_page_tags(struct page *page, pte_t old_pte,
>  	if (check_swap && is_swap_pte(old_pte)) {
>  		swp_entry_t entry = pte_to_swp_entry(old_pte);
>  
> -		if (!non_swap_entry(entry) && mte_restore_tags(entry, page))
> +		if (!non_swap_entry(entry) && mte_restore_tags(entry, page)) {
> +			set_page_mte_tagged(page);
>  			return;
> +		}
>  	}
>  
>  	if (!pte_is_tagged)
>  		return;
>  
>  	mte_clear_page_tags(page_address(page));
> +	set_page_mte_tagged(page);
>  }
>  
>  void mte_sync_tags(pte_t old_pte, pte_t pte)
> @@ -64,7 +67,7 @@ void mte_sync_tags(pte_t old_pte, pte_t pte)
>  
>  	/* if PG_mte_tagged is set, tags have already been initialised */
>  	for (i = 0; i < nr_pages; i++, page++) {
> -		if (!test_and_set_bit(PG_mte_tagged, &page->flags))
> +		if (!page_mte_tagged(page))
>  			mte_sync_page_tags(page, old_pte, check_swap,
>  					   pte_is_tagged);
>  	}
> @@ -91,8 +94,7 @@ int memcmp_pages(struct page *page1, struct page *page2)
>  	 * pages is tagged, set_pte_at() may zero or change the tags of the
>  	 * other page via mte_sync_tags().
>  	 */
> -	if (test_bit(PG_mte_tagged, &page1->flags) ||
> -	    test_bit(PG_mte_tagged, &page2->flags))
> +	if (page_mte_tagged(page1) || page_mte_tagged(page2))
>  		return addr1 != addr2;
>  
>  	return ret;
> @@ -398,7 +400,7 @@ static int __access_remote_tags(struct mm_struct *mm, unsigned long addr,
>  			put_page(page);
>  			break;
>  		}
> -		WARN_ON_ONCE(!test_bit(PG_mte_tagged, &page->flags));
> +		WARN_ON_ONCE(!page_mte_tagged(page));
>  
>  		/* limit access to the end of the page */
>  		offset = offset_in_page(addr);
> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
> index 8c607199cad1..3b04e69006b4 100644
> --- a/arch/arm64/kvm/guest.c
> +++ b/arch/arm64/kvm/guest.c
> @@ -1058,7 +1058,7 @@ long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
>  		maddr = page_address(page);
>  
>  		if (!write) {
> -			if (test_bit(PG_mte_tagged, &page->flags))
> +			if (page_mte_tagged(page))
>  				num_tags = mte_copy_tags_to_user(tags, maddr,
>  							MTE_GRANULES_PER_PAGE);
>  			else
> @@ -1075,7 +1075,7 @@ long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
>  			 * completed fully
>  			 */
>  			if (num_tags == MTE_GRANULES_PER_PAGE)
> -				set_bit(PG_mte_tagged, &page->flags);
> +				set_page_mte_tagged(page);
>  
>  			kvm_release_pfn_dirty(pfn);
>  		}
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 87f1cd0df36e..c9012707f69c 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1075,9 +1075,9 @@ static int sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
>  		return -EFAULT;
>  
>  	for (i = 0; i < nr_pages; i++, page++) {
> -		if (!test_bit(PG_mte_tagged, &page->flags)) {
> +		if (!page_mte_tagged(page)) {
>  			mte_clear_page_tags(page_address(page));
> -			set_bit(PG_mte_tagged, &page->flags);
> +			set_page_mte_tagged(page);
>  		}
>  	}
>  
> diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
> index 24913271e898..4223389b6180 100644
> --- a/arch/arm64/mm/copypage.c
> +++ b/arch/arm64/mm/copypage.c
> @@ -21,9 +21,9 @@ void copy_highpage(struct page *to, struct page *from)
>  
>  	copy_page(kto, kfrom);
>  
> -	if (system_supports_mte() && test_bit(PG_mte_tagged, &from->flags)) {
> -		set_bit(PG_mte_tagged, &to->flags);
> +	if (system_supports_mte() && page_mte_tagged(from)) {
>  		mte_copy_page_tags(kto, kfrom);
> +		set_page_mte_tagged(to);
>  	}
>  }
>  EXPORT_SYMBOL(copy_highpage);
> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index c33f1fad2745..d095bfa16771 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -931,5 +931,5 @@ struct page *alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma,
>  void tag_clear_highpage(struct page *page)
>  {
>  	mte_zero_clear_page_tags(page_address(page));
> -	set_bit(PG_mte_tagged, &page->flags);
> +	set_page_mte_tagged(page);
>  }
> diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c
> index 4334dec93bd4..a78c1db23c68 100644
> --- a/arch/arm64/mm/mteswap.c
> +++ b/arch/arm64/mm/mteswap.c
> @@ -24,7 +24,7 @@ int mte_save_tags(struct page *page)
>  {
>  	void *tag_storage, *ret;
>  
> -	if (!test_bit(PG_mte_tagged, &page->flags))
> +	if (!page_mte_tagged(page))
>  		return 0;
>  
>  	tag_storage = mte_allocate_tag_storage();



* Re: [PATCH v3 2/7] KVM: arm64: Simplify the sanitise_mte_tags() logic
  2022-08-10 19:30 ` [PATCH v3 2/7] KVM: arm64: Simplify the sanitise_mte_tags() logic Peter Collingbourne
@ 2022-09-02 14:47   ` Steven Price
  0 siblings, 0 replies; 32+ messages in thread
From: Steven Price @ 2022-09-02 14:47 UTC (permalink / raw)
  To: Peter Collingbourne, linux-arm-kernel, kvmarm
  Cc: Catalin Marinas, Cornelia Huck, Will Deacon, Marc Zyngier,
	Evgenii Stepanov, kvm, Vincenzo Frascino

On 10/08/2022 20:30, Peter Collingbourne wrote:
> From: Catalin Marinas <catalin.marinas@arm.com>
> 
> Currently sanitise_mte_tags() checks if it's an online page before
> attempting to sanitise the tags. Such detection should be done in the
> caller via the VM_MTE_ALLOWED vma flag. Since kvm_set_spte_gfn() does
> not have the vma, leave the page unmapped if not already tagged. Tag
> initialisation will be done on a subsequent access fault in
> user_mem_abort().

Looks correct to me.

Reviewed-by: Steven Price <steven.price@arm.com>

> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Marc Zyngier <maz@kernel.org>
> Cc: Steven Price <steven.price@arm.com>
> Cc: Peter Collingbourne <pcc@google.com>
> ---
>  arch/arm64/kvm/mmu.c | 40 +++++++++++++++-------------------------
>  1 file changed, 15 insertions(+), 25 deletions(-)
> 
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index c9012707f69c..1a3707aeb41f 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1056,23 +1056,14 @@ static int get_vma_page_shift(struct vm_area_struct *vma, unsigned long hva)
>   * - mmap_lock protects between a VM faulting a page in and the VMM performing
>   *   an mprotect() to add VM_MTE
>   */
> -static int sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
> -			     unsigned long size)
> +static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
> +			      unsigned long size)
>  {
>  	unsigned long i, nr_pages = size >> PAGE_SHIFT;
> -	struct page *page;
> +	struct page *page = pfn_to_page(pfn);
>  
>  	if (!kvm_has_mte(kvm))
> -		return 0;
> -
> -	/*
> -	 * pfn_to_online_page() is used to reject ZONE_DEVICE pages
> -	 * that may not support tags.
> -	 */
> -	page = pfn_to_online_page(pfn);
> -
> -	if (!page)
> -		return -EFAULT;
> +		return;
>  
>  	for (i = 0; i < nr_pages; i++, page++) {
>  		if (!page_mte_tagged(page)) {
> @@ -1080,8 +1071,6 @@ static int sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
>  			set_page_mte_tagged(page);
>  		}
>  	}
> -
> -	return 0;
>  }
>  
>  static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> @@ -1092,7 +1081,6 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	bool write_fault, writable, force_pte = false;
>  	bool exec_fault;
>  	bool device = false;
> -	bool shared;
>  	unsigned long mmu_seq;
>  	struct kvm *kvm = vcpu->kvm;
>  	struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
> @@ -1142,8 +1130,6 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  		vma_shift = get_vma_page_shift(vma, hva);
>  	}
>  
> -	shared = (vma->vm_flags & VM_SHARED);
> -
>  	switch (vma_shift) {
>  #ifndef __PAGETABLE_PMD_FOLDED
>  	case PUD_SHIFT:
> @@ -1264,12 +1250,13 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  
>  	if (fault_status != FSC_PERM && !device && kvm_has_mte(kvm)) {
>  		/* Check the VMM hasn't introduced a new VM_SHARED VMA */
> -		if (!shared)
> -			ret = sanitise_mte_tags(kvm, pfn, vma_pagesize);
> -		else
> +		if ((vma->vm_flags & VM_MTE_ALLOWED) &&
> +		    !(vma->vm_flags & VM_SHARED)) {
> +			sanitise_mte_tags(kvm, pfn, vma_pagesize);
> +		} else {
>  			ret = -EFAULT;
> -		if (ret)
>  			goto out_unlock;
> +		}
>  	}
>  
>  	if (writable)
> @@ -1491,15 +1478,18 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
>  bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
>  {
>  	kvm_pfn_t pfn = pte_pfn(range->pte);
> -	int ret;
>  
>  	if (!kvm->arch.mmu.pgt)
>  		return false;
>  
>  	WARN_ON(range->end - range->start != 1);
>  
> -	ret = sanitise_mte_tags(kvm, pfn, PAGE_SIZE);
> -	if (ret)
> +	/*
> +	 * If the page isn't tagged, defer to user_mem_abort() for sanitising
> +	 * the MTE tags. The S2 pte should have been unmapped by
> +	 * mmu_notifier_invalidate_range_end().
> +	 */
> +	if (kvm_has_mte(kvm) && !page_mte_tagged(pfn_to_page(pfn)))
>  		return false;
>  
>  	/*



* Re: [PATCH v3 4/7] arm64: mte: Lock a page for MTE tag initialisation
  2022-08-10 19:30 ` [PATCH v3 4/7] arm64: mte: Lock a page for MTE tag initialisation Peter Collingbourne
@ 2022-09-02 14:47   ` Steven Price
  2022-09-02 16:28     ` Catalin Marinas
  0 siblings, 1 reply; 32+ messages in thread
From: Steven Price @ 2022-09-02 14:47 UTC (permalink / raw)
  To: Peter Collingbourne, linux-arm-kernel, kvmarm
  Cc: Catalin Marinas, Cornelia Huck, Will Deacon, Marc Zyngier,
	Evgenii Stepanov, kvm, Vincenzo Frascino

On 10/08/2022 20:30, Peter Collingbourne wrote:
> From: Catalin Marinas <catalin.marinas@arm.com>
> 
> Initialising the tags and setting the PG_mte_tagged flag for a page can race
> between multiple set_pte_at() on shared pages or setting the stage 2 pte
> via user_mem_abort(). Introduce a new PG_mte_lock flag as PG_arch_3 and
> set it before attempting page initialisation. Given that PG_mte_tagged
> is never cleared for a page, consider setting this flag to mean page
> unlocked and wait on this bit with acquire semantics if the page is
> locked:
> 
> - try_page_mte_tagging() - lock the page for tagging, return true if it
>   can be tagged, false if already tagged. No acquire semantics if it
>   returns true (PG_mte_tagged not set) as there is no serialisation with
>   a previous set_page_mte_tagged().
> 
> - set_page_mte_tagged() - set PG_mte_tagged with release semantics.
> 
> The two-bit locking is based on Peter Collingbourne's idea.
> 
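
(With this in place, every tag initialisation site collapses to the same
pattern; a minimal sketch, not part of the patch but matching the hunks
below:)

	if (try_page_mte_tagging(page)) {
		/* Lock won: this thread initialises the tags. */
		mte_clear_page_tags(page_address(page));
		/* Release: publish the tags and mark the page unlocked. */
		set_page_mte_tagged(page);
	}
	/*
	 * Otherwise another thread initialised (or is initialising) the
	 * tags; smp_cond_load_acquire() in try_page_mte_tagging() has
	 * already waited for PG_mte_tagged to appear.
	 */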
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Marc Zyngier <maz@kernel.org>
> Cc: Steven Price <steven.price@arm.com>
> Cc: Peter Collingbourne <pcc@google.com>
> ---
>  arch/arm64/include/asm/mte.h     | 32 ++++++++++++++++++++++++++++++++
>  arch/arm64/include/asm/pgtable.h |  1 +
>  arch/arm64/kernel/cpufeature.c   |  2 +-
>  arch/arm64/kernel/mte.c          |  7 +++++--
>  arch/arm64/kvm/guest.c           | 16 ++++++++++------
>  arch/arm64/kvm/mmu.c             |  2 +-
>  arch/arm64/mm/copypage.c         |  2 ++
>  arch/arm64/mm/fault.c            |  2 ++
>  arch/arm64/mm/mteswap.c          |  3 +++
>  9 files changed, 57 insertions(+), 10 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
> index 46618c575eac..ea5158f6f6cb 100644
> --- a/arch/arm64/include/asm/mte.h
> +++ b/arch/arm64/include/asm/mte.h
> @@ -36,6 +36,8 @@ void mte_free_tag_storage(char *storage);
>  
>  /* track which pages have valid allocation tags */
>  #define PG_mte_tagged	PG_arch_2
> +/* simple lock to avoid multiple threads tagging the same page */
> +#define PG_mte_lock	PG_arch_3
>  
>  static inline void set_page_mte_tagged(struct page *page)
>  {
> @@ -60,6 +62,32 @@ static inline bool page_mte_tagged(struct page *page)
>  	return ret;
>  }
>  
> +/*
> + * Lock the page for tagging and return 'true' if the page can be tagged,
> + * 'false' if already tagged. PG_mte_tagged is never cleared and therefore the
> + * locking only happens once for page initialisation.
> + *
> + * The page MTE lock state:
> + *
> + *   Locked:	PG_mte_lock && !PG_mte_tagged
> + *   Unlocked:	!PG_mte_lock || PG_mte_tagged
> + *
> + * Acquire semantics only if the page is tagged (returning 'false').
> + */
> +static inline bool try_page_mte_tagging(struct page *page)
> +{
> +	if (!test_and_set_bit(PG_mte_lock, &page->flags))
> +		return true;
> +
> +	/*
> +	 * The tags are either being initialised or have already been initialised,
> +	 * wait for the PG_mte_tagged flag to be set.
> +	 */
> +	smp_cond_load_acquire(&page->flags, VAL & (1UL << PG_mte_tagged));
> +
> +	return false;
> +}
> +
>  void mte_zero_clear_page_tags(void *addr);
>  void mte_sync_tags(pte_t old_pte, pte_t pte);
>  void mte_copy_page_tags(void *kto, const void *kfrom);
> @@ -84,6 +112,10 @@ static inline bool page_mte_tagged(struct page *page)
>  {
>  	return false;
>  }
> +static inline bool try_page_mte_tagging(struct page *page)
> +{
> +	return false;
> +}
>  static inline void mte_zero_clear_page_tags(void *addr)
>  {
>  }
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index 82719fa42c0e..e6b82ad1e9e6 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -1049,6 +1049,7 @@ static inline void arch_swap_invalidate_area(int type)
>  #define __HAVE_ARCH_SWAP_RESTORE
>  static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio)
>  {
> +	/* mte_restore_tags() takes the PG_mte_lock */
>  	if (system_supports_mte() && mte_restore_tags(entry, &folio->page))
>  		set_page_mte_tagged(&folio->page);
>  }
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 562c301bbf15..33d342ddef87 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -2037,7 +2037,7 @@ static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
>  	 * Clear the tags in the zero page. This needs to be done via the
>  	 * linear map which has the Tagged attribute.
>  	 */
> -	if (!page_mte_tagged(ZERO_PAGE(0))) {
> +	if (try_page_mte_tagging(ZERO_PAGE(0))) {
>  		mte_clear_page_tags(lm_alias(empty_zero_page));
>  		set_page_mte_tagged(ZERO_PAGE(0));
>  	}
> diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> index 2287316639f3..634e089b5933 100644
> --- a/arch/arm64/kernel/mte.c
> +++ b/arch/arm64/kernel/mte.c
> @@ -41,6 +41,7 @@ static void mte_sync_page_tags(struct page *page, pte_t old_pte,
>  	if (check_swap && is_swap_pte(old_pte)) {
>  		swp_entry_t entry = pte_to_swp_entry(old_pte);
>  
> +		/* mte_restore_tags() takes the PG_mte_lock */
>  		if (!non_swap_entry(entry) && mte_restore_tags(entry, page)) {
>  			set_page_mte_tagged(page);
>  			return;
> @@ -50,8 +51,10 @@ static void mte_sync_page_tags(struct page *page, pte_t old_pte,
>  	if (!pte_is_tagged)
>  		return;
>  
> -	mte_clear_page_tags(page_address(page));
> -	set_page_mte_tagged(page);
> +	if (try_page_mte_tagging(page)) {
> +		mte_clear_page_tags(page_address(page));
> +		set_page_mte_tagged(page);
> +	}
>  }
>  
>  void mte_sync_tags(pte_t old_pte, pte_t pte)
> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
> index 3b04e69006b4..059b38e7a9e8 100644
> --- a/arch/arm64/kvm/guest.c
> +++ b/arch/arm64/kvm/guest.c
> @@ -1067,15 +1067,19 @@ long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
>  					clear_user(tags, MTE_GRANULES_PER_PAGE);
>  			kvm_release_pfn_clean(pfn);
>  		} else {
> +			/*
> +			 * Only locking to serialise with a concurrent
> +			 * set_pte_at() in the VMM but still overriding the
> +			 * tags, hence ignoring the return value.
> +			 */
> +			try_page_mte_tagging(page);
>  			num_tags = mte_copy_tags_from_user(maddr, tags,
>  							MTE_GRANULES_PER_PAGE);
>  
> -			/*
> -			 * Set the flag after checking the write
> -			 * completed fully
> -			 */
> -			if (num_tags == MTE_GRANULES_PER_PAGE)
> -				set_page_mte_tagged(page);
> +			/* uaccess failed, don't leave stale tags */
> +			if (num_tags != MTE_GRANULES_PER_PAGE)
> +				mte_clear_page_tags(page);
> +			set_page_mte_tagged(page);
>  
>  			kvm_release_pfn_dirty(pfn);
>  		}
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 1a3707aeb41f..750a69a97994 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1066,7 +1066,7 @@ static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
>  		return;
>  
>  	for (i = 0; i < nr_pages; i++, page++) {
> -		if (!page_mte_tagged(page)) {
> +		if (try_page_mte_tagging(page)) {
>  			mte_clear_page_tags(page_address(page));
>  			set_page_mte_tagged(page);
>  		}
> diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
> index 4223389b6180..a3fa650ceca4 100644
> --- a/arch/arm64/mm/copypage.c
> +++ b/arch/arm64/mm/copypage.c
> @@ -22,6 +22,8 @@ void copy_highpage(struct page *to, struct page *from)
>  	copy_page(kto, kfrom);
>  
>  	if (system_supports_mte() && page_mte_tagged(from)) {
> +		/* It's a new page, shouldn't have been tagged yet */
> +		WARN_ON_ONCE(!try_page_mte_tagging(to));
>  		mte_copy_page_tags(kto, kfrom);
>  		set_page_mte_tagged(to);
>  	}
> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index d095bfa16771..6407a29cab0d 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -930,6 +930,8 @@ struct page *alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma,
>  
>  void tag_clear_highpage(struct page *page)
>  {
> +	/* Newly allocated page, shouldn't have been tagged yet */
> +	WARN_ON_ONCE(!try_page_mte_tagging(page));
>  	mte_zero_clear_page_tags(page_address(page));
>  	set_page_mte_tagged(page);
>  }
> diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c
> index a78c1db23c68..cd5ad0936e16 100644
> --- a/arch/arm64/mm/mteswap.c
> +++ b/arch/arm64/mm/mteswap.c
> @@ -53,6 +53,9 @@ bool mte_restore_tags(swp_entry_t entry, struct page *page)
>  	if (!tags)
>  		return false;
>  
> +	/* racing tag restoring? */
> +	if (!try_page_mte_tagging(page))
> +		return false;
>  	mte_restore_page_tags(page_address(page), tags);

I feel like adding a "set_page_mte_tagged(page);" in here would avoid
the need for the comments about mte_restore_tags() taking the lock.

>  	return true;

Other than that nit, looks good.

Reviewed-by: Steven Price <steven.price@arm.com>

Steve


* Re: [PATCH v3 5/7] KVM: arm64: unify the tests for VMAs in memslots when MTE is enabled
  2022-08-10 19:30 ` [PATCH v3 5/7] KVM: arm64: unify the tests for VMAs in memslots when MTE is enabled Peter Collingbourne
  2022-09-02 13:41   ` Catalin Marinas
@ 2022-09-02 14:47   ` Steven Price
  1 sibling, 0 replies; 32+ messages in thread
From: Steven Price @ 2022-09-02 14:47 UTC (permalink / raw)
  To: Peter Collingbourne, linux-arm-kernel, kvmarm
  Cc: Cornelia Huck, Catalin Marinas, Will Deacon, Marc Zyngier,
	Evgenii Stepanov, kvm, Vincenzo Frascino

On 10/08/2022 20:30, Peter Collingbourne wrote:
> Previously we allowed creating a memslot containing a private mapping that
> was not VM_MTE_ALLOWED, but would later reject KVM_RUN with -EFAULT. Now
> we reject the memory region at memslot creation time.
> 
> Since this is a minor tweak to the ABI (a VMM that created one of
> these memslots would fail later anyway), no VMM to my knowledge has
> MTE support yet, and the hardware with the necessary features is not
> generally available, we can probably make this ABI change at this point.
> 
> Signed-off-by: Peter Collingbourne <pcc@google.com>

Reviewed-by: Steven Price <steven.price@arm.com>

> ---
>  arch/arm64/kvm/mmu.c | 25 ++++++++++++++++---------
>  1 file changed, 16 insertions(+), 9 deletions(-)
> 
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 750a69a97994..d54be80e31dd 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1073,6 +1073,19 @@ static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
>  	}
>  }
>  
> +static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
> +{
> +	/*
> +	 * VM_SHARED mappings are not allowed with MTE to avoid races
> +	 * when updating the PG_mte_tagged page flag, see
> +	 * sanitise_mte_tags for more details.
> +	 */
> +	if (vma->vm_flags & VM_SHARED)
> +		return false;
> +
> +	return vma->vm_flags & VM_MTE_ALLOWED;
> +}
> +
>  static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  			  struct kvm_memory_slot *memslot, unsigned long hva,
>  			  unsigned long fault_status)
> @@ -1249,9 +1262,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	}
>  
>  	if (fault_status != FSC_PERM && !device && kvm_has_mte(kvm)) {
> -		/* Check the VMM hasn't introduced a new VM_SHARED VMA */
> -		if ((vma->vm_flags & VM_MTE_ALLOWED) &&
> -		    !(vma->vm_flags & VM_SHARED)) {
> +		/* Check the VMM hasn't introduced a new disallowed VMA */
> +		if (kvm_vma_mte_allowed(vma)) {
>  			sanitise_mte_tags(kvm, pfn, vma_pagesize);
>  		} else {
>  			ret = -EFAULT;
> @@ -1695,12 +1707,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
>  		if (!vma)
>  			break;
>  
> -		/*
> -		 * VM_SHARED mappings are not allowed with MTE to avoid races
> -		 * when updating the PG_mte_tagged page flag, see
> -		 * sanitise_mte_tags for more details.
> -		 */
> -		if (kvm_has_mte(kvm) && vma->vm_flags & VM_SHARED) {
> +		if (kvm_has_mte(kvm) && !kvm_vma_mte_allowed(vma)) {
>  			ret = -EINVAL;
>  			break;
>  		}



* Re: [PATCH v3 6/7] KVM: arm64: permit all VM_MTE_ALLOWED mappings with MTE enabled
  2022-09-02 13:45   ` Catalin Marinas
@ 2022-09-02 14:47     ` Steven Price
  0 siblings, 0 replies; 32+ messages in thread
From: Steven Price @ 2022-09-02 14:47 UTC (permalink / raw)
  To: Catalin Marinas, Peter Collingbourne
  Cc: linux-arm-kernel, kvmarm, Cornelia Huck, Will Deacon,
	Marc Zyngier, Evgenii Stepanov, kvm, Vincenzo Frascino

On 02/09/2022 14:45, Catalin Marinas wrote:
> On Wed, Aug 10, 2022 at 12:30:32PM -0700, Peter Collingbourne wrote:
>> Certain VMMs such as crosvm have features (e.g. sandboxing) that depend
>> on being able to map guest memory as MAP_SHARED. The current restriction
>> on sharing MAP_SHARED pages with the guest is preventing the use of
>> those features with MTE. Now that the races between tasks concurrently
>> clearing tags on the same page have been fixed, remove this restriction.
>>
>> Signed-off-by: Peter Collingbourne <pcc@google.com>
>> ---
>>  arch/arm64/kvm/mmu.c | 8 --------
>>  1 file changed, 8 deletions(-)
>>
>> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>> index d54be80e31dd..fc65dc20655d 100644
>> --- a/arch/arm64/kvm/mmu.c
>> +++ b/arch/arm64/kvm/mmu.c
>> @@ -1075,14 +1075,6 @@ static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
>>  
>>  static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
>>  {
>> -	/*
>> -	 * VM_SHARED mappings are not allowed with MTE to avoid races
>> -	 * when updating the PG_mte_tagged page flag, see
>> -	 * sanitise_mte_tags for more details.
>> -	 */
>> -	if (vma->vm_flags & VM_SHARED)
>> -		return false;
> 
> I think this is fine with the locking in place (BTW, it may be worth
> mentioning in the commit message that it's a relaxation of the ABI). I'd
> like Steven to have a look as well when he gets the time, in case we
> missed anything on the KVM+MTE side.
> 
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>

Looks fine to me, and thanks for doing the work: I was never very
pleased with the !VM_SHARED restriction, but I couldn't figure out a
good way of getting the locking to work.

Reviewed-by: Steven Price <steven.price@arm.com>


* Re: [PATCH v3 4/7] arm64: mte: Lock a page for MTE tag initialisation
  2022-09-02 14:47   ` Steven Price
@ 2022-09-02 16:28     ` Catalin Marinas
  2022-09-02 16:58       ` Catalin Marinas
  0 siblings, 1 reply; 32+ messages in thread
From: Catalin Marinas @ 2022-09-02 16:28 UTC (permalink / raw)
  To: Steven Price
  Cc: Peter Collingbourne, linux-arm-kernel, kvmarm, Cornelia Huck,
	Will Deacon, Marc Zyngier, Evgenii Stepanov, kvm,
	Vincenzo Frascino

On Fri, Sep 02, 2022 at 03:47:33PM +0100, Steven Price wrote:
> On 10/08/2022 20:30, Peter Collingbourne wrote:
> > diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c
> > index a78c1db23c68..cd5ad0936e16 100644
> > --- a/arch/arm64/mm/mteswap.c
> > +++ b/arch/arm64/mm/mteswap.c
> > @@ -53,6 +53,9 @@ bool mte_restore_tags(swp_entry_t entry, struct page *page)
> >  	if (!tags)
> >  		return false;
> >  
> > +	/* racing tag restoring? */
> > +	if (!try_page_mte_tagging(page))
> > +		return false;
> >  	mte_restore_page_tags(page_address(page), tags);
> 
> I feel like adding a "set_page_mte_tagged(page);" in here would avoid
> the need for the comments about mte_restore_tags() taking the lock.

Good point. I think I blindly followed the set_bit() places but it makes
sense to move the bit setting to mte_restore_tags().

Thanks for the review.

-- 
Catalin


* Re: [PATCH v3 4/7] arm64: mte: Lock a page for MTE tag initialisation
  2022-09-02 16:28     ` Catalin Marinas
@ 2022-09-02 16:58       ` Catalin Marinas
  2022-09-05  7:37         ` Steven Price
  0 siblings, 1 reply; 32+ messages in thread
From: Catalin Marinas @ 2022-09-02 16:58 UTC (permalink / raw)
  To: Steven Price
  Cc: Peter Collingbourne, linux-arm-kernel, kvmarm, Cornelia Huck,
	Will Deacon, Marc Zyngier, Evgenii Stepanov, kvm,
	Vincenzo Frascino

On Fri, Sep 02, 2022 at 05:28:47PM +0100, Catalin Marinas wrote:
> On Fri, Sep 02, 2022 at 03:47:33PM +0100, Steven Price wrote:
> > On 10/08/2022 20:30, Peter Collingbourne wrote:
> > > diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c
> > > index a78c1db23c68..cd5ad0936e16 100644
> > > --- a/arch/arm64/mm/mteswap.c
> > > +++ b/arch/arm64/mm/mteswap.c
> > > @@ -53,6 +53,9 @@ bool mte_restore_tags(swp_entry_t entry, struct page *page)
> > >  	if (!tags)
> > >  		return false;
> > >  
> > > +	/* racing tag restoring? */
> > > +	if (!try_page_mte_tagging(page))
> > > +		return false;
> > >  	mte_restore_page_tags(page_address(page), tags);
> > 
> > I feel like adding a "set_page_mte_tagged(page);" in here would avoid
> > the need for the comments about mte_restore_tags() taking the lock.
> 
> Good point. I think I blindly followed the set_bit() places but it makes
> sense to move the bit setting to mte_restore_tags().

Something like this (and I need to do some more testing):

diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
index b956899467f0..be6560e1ff2b 100644
--- a/arch/arm64/include/asm/mte.h
+++ b/arch/arm64/include/asm/mte.h
@@ -25,7 +25,7 @@ unsigned long mte_copy_tags_to_user(void __user *to, void *from,
 				    unsigned long n);
 int mte_save_tags(struct page *page);
 void mte_save_page_tags(const void *page_addr, void *tag_storage);
-bool mte_restore_tags(swp_entry_t entry, struct page *page);
+void mte_restore_tags(swp_entry_t entry, struct page *page);
 void mte_restore_page_tags(void *page_addr, const void *tag_storage);
 void mte_invalidate_tags(int type, pgoff_t offset);
 void mte_invalidate_tags_area(int type);
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index e6b82ad1e9e6..7d91379382fd 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -1049,9 +1049,8 @@ static inline void arch_swap_invalidate_area(int type)
 #define __HAVE_ARCH_SWAP_RESTORE
 static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio)
 {
-	/* mte_restore_tags() takes the PG_mte_lock */
-	if (system_supports_mte() && mte_restore_tags(entry, &folio->page))
-		set_page_mte_tagged(&folio->page);
+	if (system_supports_mte())
+		mte_restore_tags(entry, &folio->page);
 }
 
 #endif /* CONFIG_ARM64_MTE */
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index 634e089b5933..54ab6c4741db 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -41,11 +41,8 @@ static void mte_sync_page_tags(struct page *page, pte_t old_pte,
 	if (check_swap && is_swap_pte(old_pte)) {
 		swp_entry_t entry = pte_to_swp_entry(old_pte);
 
-		/* mte_restore_tags() takes the PG_mte_lock */
-		if (!non_swap_entry(entry) && mte_restore_tags(entry, page)) {
-			set_page_mte_tagged(page);
-			return;
-		}
+		if (!non_swap_entry(entry))
+			mte_restore_tags(entry, page);
 	}
 
 	if (!pte_is_tagged)
diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c
index cd5ad0936e16..cd508ba80ab1 100644
--- a/arch/arm64/mm/mteswap.c
+++ b/arch/arm64/mm/mteswap.c
@@ -46,19 +46,17 @@ int mte_save_tags(struct page *page)
 	return 0;
 }
 
-bool mte_restore_tags(swp_entry_t entry, struct page *page)
+void mte_restore_tags(swp_entry_t entry, struct page *page)
 {
 	void *tags = xa_load(&mte_pages, entry.val);
 
 	if (!tags)
-		return false;
+		return;
 
-	/* racing tag restoring? */
-	if (!try_page_mte_tagging(page))
-		return false;
-	mte_restore_page_tags(page_address(page), tags);
-
-	return true;
+	if (try_page_mte_tagging(page)) {
+		mte_restore_page_tags(page_address(page), tags);
+		set_page_mte_tagged(page);
+	}
 }
 
 void mte_invalidate_tags(int type, pgoff_t offset)


* Re: [PATCH v3 4/7] arm64: mte: Lock a page for MTE tag initialisation
  2022-09-02 16:58       ` Catalin Marinas
@ 2022-09-05  7:37         ` Steven Price
  0 siblings, 0 replies; 32+ messages in thread
From: Steven Price @ 2022-09-05  7:37 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Peter Collingbourne, linux-arm-kernel, kvmarm, Cornelia Huck,
	Will Deacon, Marc Zyngier, Evgenii Stepanov, kvm,
	Vincenzo Frascino

On 02/09/2022 17:58, Catalin Marinas wrote:
> On Fri, Sep 02, 2022 at 05:28:47PM +0100, Catalin Marinas wrote:
>> On Fri, Sep 02, 2022 at 03:47:33PM +0100, Steven Price wrote:
>>> On 10/08/2022 20:30, Peter Collingbourne wrote:
>>>> diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c
>>>> index a78c1db23c68..cd5ad0936e16 100644
>>>> --- a/arch/arm64/mm/mteswap.c
>>>> +++ b/arch/arm64/mm/mteswap.c
>>>> @@ -53,6 +53,9 @@ bool mte_restore_tags(swp_entry_t entry, struct page *page)
>>>>  	if (!tags)
>>>>  		return false;
>>>>  
>>>> +	/* racing tag restoring? */
>>>> +	if (!try_page_mte_tagging(page))
>>>> +		return false;
>>>>  	mte_restore_page_tags(page_address(page), tags);
>>>
>>> I feel like adding a "set_page_mte_tagged(page);" in here would avoid
>>> the need for the comments about mte_restore_tags() taking the lock.
>>
>> Good point. I think I blindly followed the set_bit() places but it makes
>> sense to move the bit setting to mte_restore_tags().
> 
> Something like this (and I need to do some more testing):

Exactly, that looks equivalent and is hopefully easier to follow.

Steve

> diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
> index b956899467f0..be6560e1ff2b 100644
> --- a/arch/arm64/include/asm/mte.h
> +++ b/arch/arm64/include/asm/mte.h
> @@ -25,7 +25,7 @@ unsigned long mte_copy_tags_to_user(void __user *to, void *from,
>  				    unsigned long n);
>  int mte_save_tags(struct page *page);
>  void mte_save_page_tags(const void *page_addr, void *tag_storage);
> -bool mte_restore_tags(swp_entry_t entry, struct page *page);
> +void mte_restore_tags(swp_entry_t entry, struct page *page);
>  void mte_restore_page_tags(void *page_addr, const void *tag_storage);
>  void mte_invalidate_tags(int type, pgoff_t offset);
>  void mte_invalidate_tags_area(int type);
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index e6b82ad1e9e6..7d91379382fd 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -1049,9 +1049,8 @@ static inline void arch_swap_invalidate_area(int type)
>  #define __HAVE_ARCH_SWAP_RESTORE
>  static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio)
>  {
> -	/* mte_restore_tags() takes the PG_mte_lock */
> -	if (system_supports_mte() && mte_restore_tags(entry, &folio->page))
> -		set_page_mte_tagged(&folio->page);
> +	if (system_supports_mte())
> +		mte_restore_tags(entry, &folio->page);
>  }
>  
>  #endif /* CONFIG_ARM64_MTE */
> diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> index 634e089b5933..54ab6c4741db 100644
> --- a/arch/arm64/kernel/mte.c
> +++ b/arch/arm64/kernel/mte.c
> @@ -41,11 +41,8 @@ static void mte_sync_page_tags(struct page *page, pte_t old_pte,
>  	if (check_swap && is_swap_pte(old_pte)) {
>  		swp_entry_t entry = pte_to_swp_entry(old_pte);
>  
> -		/* mte_restore_tags() takes the PG_mte_lock */
> -		if (!non_swap_entry(entry) && mte_restore_tags(entry, page)) {
> -			set_page_mte_tagged(page);
> -			return;
> -		}
> +		if (!non_swap_entry(entry))
> +			mte_restore_tags(entry, page);
>  	}
>  
>  	if (!pte_is_tagged)
> diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c
> index cd5ad0936e16..cd508ba80ab1 100644
> --- a/arch/arm64/mm/mteswap.c
> +++ b/arch/arm64/mm/mteswap.c
> @@ -46,19 +46,17 @@ int mte_save_tags(struct page *page)
>  	return 0;
>  }
>  
> -bool mte_restore_tags(swp_entry_t entry, struct page *page)
> +void mte_restore_tags(swp_entry_t entry, struct page *page)
>  {
>  	void *tags = xa_load(&mte_pages, entry.val);
>  
>  	if (!tags)
> -		return false;
> +		return;
>  
> -	/* racing tag restoring? */
> -	if (!try_page_mte_tagging(page))
> -		return false;
> -	mte_restore_page_tags(page_address(page), tags);
> -
> -	return true;
> +	if (try_page_mte_tagging(page)) {
> +		mte_restore_page_tags(page_address(page), tags);
> +		set_page_mte_tagged(page);
> +	}
>  }
>  
>  void mte_invalidate_tags(int type, pgoff_t offset)



* Re: [PATCH v3 3/7] mm: Add PG_arch_3 page flag
  2022-09-01 17:59     ` Catalin Marinas
@ 2022-09-05 17:01       ` Catalin Marinas
  2022-09-19 18:12         ` Marc Zyngier
  0 siblings, 1 reply; 32+ messages in thread
From: Catalin Marinas @ 2022-09-05 17:01 UTC (permalink / raw)
  To: kernel test robot
  Cc: Peter Collingbourne, linux-arm-kernel, kvmarm, kbuild-all,
	Cornelia Huck, Will Deacon, Marc Zyngier, Evgenii Stepanov, kvm,
	Steven Price, Vincenzo Frascino

On Thu, Sep 01, 2022 at 06:59:23PM +0100, Catalin Marinas wrote:
> On Thu, Aug 11, 2022 at 03:16:08PM +0800, kernel test robot wrote:
> > Thank you for the patch! Perhaps something to improve:
> > 
> > [auto build test WARNING on arm64/for-next/core]
> > [also build test WARNING on linus/master next-20220811]
> > [cannot apply to kvmarm/next arm/for-next soc/for-next xilinx-xlnx/master v5.19]
> > [If your patch is applied to the wrong git tree, kindly drop us a note.
> > And when submitting patch, we suggest to use '--base' as documented in
> > https://git-scm.com/docs/git-format-patch#_base_tree_information]
> > 
> > url:    https://github.com/intel-lab-lkp/linux/commits/Peter-Collingbourne/KVM-arm64-permit-MAP_SHARED-mappings-with-MTE-enabled/20220811-033310
> > base:   https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-next/core
> > config: loongarch-defconfig (https://download.01.org/0day-ci/archive/20220811/202208111500.62e0Bl2l-lkp@intel.com/config)
> > compiler: loongarch64-linux-gcc (GCC) 12.1.0
> > reproduce (this is a W=1 build):
> >         wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
> >         chmod +x ~/bin/make.cross
> >         # https://github.com/intel-lab-lkp/linux/commit/1a400517d8428df0ec9f86f8d303b2227ee9702f
> >         git remote add linux-review https://github.com/intel-lab-lkp/linux
> >         git fetch --no-tags linux-review Peter-Collingbourne/KVM-arm64-permit-MAP_SHARED-mappings-with-MTE-enabled/20220811-033310
> >         git checkout 1a400517d8428df0ec9f86f8d303b2227ee9702f
> >         # save the config file
> >         mkdir build_dir && cp config build_dir/.config
> >         COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=loongarch SHELL=/bin/bash
> > 
> > If you fix the issue, kindly add following tag where applicable
> > Reported-by: kernel test robot <lkp@intel.com>
> > 
> > All warnings (new ones prefixed by >>):
> > 
> > >> mm/memory.c:92:2: warning: #warning Unfortunate NUMA and NUMA Balancing config, growing page-frame for last_cpupid. [-Wcpp]
> >       92 | #warning Unfortunate NUMA and NUMA Balancing config, growing page-frame for last_cpupid.
> >          |  ^~~~~~~
> > 
> > 
> > vim +92 mm/memory.c
> > 
> > 42b7772812d15b Jan Beulich    2008-07-23  90  
> > af27d9403f5b80 Arnd Bergmann  2018-02-16  91  #if defined(LAST_CPUPID_NOT_IN_PAGE_FLAGS) && !defined(CONFIG_COMPILE_TEST)
> > 90572890d20252 Peter Zijlstra 2013-10-07 @92  #warning Unfortunate NUMA and NUMA Balancing config, growing page-frame for last_cpupid.
> > 75980e97daccfc Peter Zijlstra 2013-02-22  93  #endif
> > 75980e97daccfc Peter Zijlstra 2013-02-22  94  
> 
> It looks like with CONFIG_NUMA_BALANCING=y on loongarch we run out of
> spare bits in page->flags to fit last_cpupid. The reason we don't see it
> on arm64 is that we select SPARSEMEM_VMEMMAP and SECTIONS_WIDTH becomes
> 0. On loongarch SECTIONS_WIDTH takes 19 bits (48 - 29) in page->flags.
> 
> I think instead of always defining PG_arch_{2,3} if CONFIG_64BIT, we
> could add a CONFIG_ARCH_WANTS_PG_ARCH_23 option and only select it on
> arm64 for the time being.
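
(For illustration, the suggestion amounts to something like the sketch
below in include/linux/page-flags.h, with arm64 selecting the new option
from its Kconfig; the option name is the one proposed above, and the
actual patch is the one on the devel/mte-pg-flags branch mentioned next.)

	#ifdef CONFIG_ARCH_WANTS_PG_ARCH_23	/* previously: CONFIG_64BIT */
		PG_arch_2,
		PG_arch_3,
	#endif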

I pushed a patch as the first one on the arm64 devel/mte-pg-flags
branch. Also updated the last patch on this branch following Steven's
comments.

Peter, please let me know if you want to pick this series up together
with your other KVM patches. Otherwise I can post it separately; it's
worth merging it on its own as it clarifies the page flag vs tag setting
ordering.

-- 
Catalin


* Re: [PATCH v3 6/7] KVM: arm64: permit all VM_MTE_ALLOWED mappings with MTE enabled
  2022-08-10 19:30 ` [PATCH v3 6/7] KVM: arm64: permit all VM_MTE_ALLOWED mappings with MTE enabled Peter Collingbourne
  2022-09-02 13:45   ` Catalin Marinas
@ 2022-09-12 16:23   ` Marc Zyngier
  2022-09-13  4:10     ` Peter Collingbourne
  1 sibling, 1 reply; 32+ messages in thread
From: Marc Zyngier @ 2022-09-12 16:23 UTC (permalink / raw)
  To: Peter Collingbourne
  Cc: linux-arm-kernel, kvmarm, Cornelia Huck, Catalin Marinas,
	Will Deacon, Evgenii Stepanov, kvm, Steven Price,
	Vincenzo Frascino

On Wed, 10 Aug 2022 20:30:32 +0100,
Peter Collingbourne <pcc@google.com> wrote:
> 
> Certain VMMs such as crosvm have features (e.g. sandboxing) that depend
> on being able to map guest memory as MAP_SHARED. The current restriction
> on sharing MAP_SHARED pages with the guest is preventing the use of
> those features with MTE. Now that the races between tasks concurrently
> clearing tags on the same page have been fixed, remove this restriction.
> 
> Signed-off-by: Peter Collingbourne <pcc@google.com>
> ---
>  arch/arm64/kvm/mmu.c | 8 --------
>  1 file changed, 8 deletions(-)
> 
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index d54be80e31dd..fc65dc20655d 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1075,14 +1075,6 @@ static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
>  
>  static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
>  {
> -	/*
> -	 * VM_SHARED mappings are not allowed with MTE to avoid races
> -	 * when updating the PG_mte_tagged page flag, see
> -	 * sanitise_mte_tags for more details.
> -	 */
> -	if (vma->vm_flags & VM_SHARED)
> -		return false;
> -
>  	return vma->vm_flags & VM_MTE_ALLOWED;
>  }
>  

Can you provide a pointer to some VMM making use of this functionality
and enabling MTE? A set of crosvm patches (for example) would be
useful to evaluate this series.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.


* Re: [PATCH v3 6/7] KVM: arm64: permit all VM_MTE_ALLOWED mappings with MTE enabled
  2022-09-12 16:23   ` Marc Zyngier
@ 2022-09-13  4:10     ` Peter Collingbourne
  0 siblings, 0 replies; 32+ messages in thread
From: Peter Collingbourne @ 2022-09-13  4:10 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Linux ARM, kvmarm, Cornelia Huck, Catalin Marinas, Will Deacon,
	Evgenii Stepanov, kvm, Steven Price, Vincenzo Frascino

On Mon, Sep 12, 2022 at 9:23 AM Marc Zyngier <maz@kernel.org> wrote:
>
> On Wed, 10 Aug 2022 20:30:32 +0100,
> Peter Collingbourne <pcc@google.com> wrote:
> >
> > Certain VMMs such as crosvm have features (e.g. sandboxing) that depend
> > on being able to map guest memory as MAP_SHARED. The current restriction
> > on sharing MAP_SHARED pages with the guest is preventing the use of
> > those features with MTE. Now that the races between tasks concurrently
> > clearing tags on the same page have been fixed, remove this restriction.
> >
> > Signed-off-by: Peter Collingbourne <pcc@google.com>
> > ---
> >  arch/arm64/kvm/mmu.c | 8 --------
> >  1 file changed, 8 deletions(-)
> >
> > diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> > index d54be80e31dd..fc65dc20655d 100644
> > --- a/arch/arm64/kvm/mmu.c
> > +++ b/arch/arm64/kvm/mmu.c
> > @@ -1075,14 +1075,6 @@ static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
> >
> >  static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
> >  {
> > -     /*
> > -      * VM_SHARED mappings are not allowed with MTE to avoid races
> > -      * when updating the PG_mte_tagged page flag, see
> > -      * sanitise_mte_tags for more details.
> > -      */
> > -     if (vma->vm_flags & VM_SHARED)
> > -             return false;
> > -
> >       return vma->vm_flags & VM_MTE_ALLOWED;
> >  }
> >
>
> Can you provide a pointer to some VMM making use of this functionality
> and enabling MTE? A set of crosvm patches (for example) would be
> useful to evaluate this series.

Hi Marc,

I've been using a modified crosvm to test this series. Please find
below a link to the proposed crosvm patches which make use of the
series:
https://chromium-review.googlesource.com/c/crosvm/crosvm/+/3892141

Peter


* Re: [PATCH v3 3/7] mm: Add PG_arch_3 page flag
  2022-09-05 17:01       ` Catalin Marinas
@ 2022-09-19 18:12         ` Marc Zyngier
  2022-09-20 15:39           ` Catalin Marinas
  0 siblings, 1 reply; 32+ messages in thread
From: Marc Zyngier @ 2022-09-19 18:12 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: kernel test robot, Peter Collingbourne, linux-arm-kernel, kvmarm,
	kbuild-all, Cornelia Huck, Will Deacon, Evgenii Stepanov, kvm,
	Steven Price, Vincenzo Frascino

On Mon, 05 Sep 2022 18:01:55 +0100,
Catalin Marinas <catalin.marinas@arm.com> wrote:
> 
> On Thu, Sep 01, 2022 at 06:59:23PM +0100, Catalin Marinas wrote:
> > On Thu, Aug 11, 2022 at 03:16:08PM +0800, kernel test robot wrote:
> > > Thank you for the patch! Perhaps something to improve:
> > > 
> > > [auto build test WARNING on arm64/for-next/core]
> > > [also build test WARNING on linus/master next-20220811]
> > > [cannot apply to kvmarm/next arm/for-next soc/for-next xilinx-xlnx/master v5.19]
> > > [If your patch is applied to the wrong git tree, kindly drop us a note.
> > > And when submitting patch, we suggest to use '--base' as documented in
> > > https://git-scm.com/docs/git-format-patch#_base_tree_information]
> > > 
> > > url:    https://github.com/intel-lab-lkp/linux/commits/Peter-Collingbourne/KVM-arm64-permit-MAP_SHARED-mappings-with-MTE-enabled/20220811-033310
> > > base:   https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-next/core
> > > config: loongarch-defconfig (https://download.01.org/0day-ci/archive/20220811/202208111500.62e0Bl2l-lkp@intel.com/config)
> > > compiler: loongarch64-linux-gcc (GCC) 12.1.0
> > > reproduce (this is a W=1 build):
> > >         wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
> > >         chmod +x ~/bin/make.cross
> > >         # https://github.com/intel-lab-lkp/linux/commit/1a400517d8428df0ec9f86f8d303b2227ee9702f
> > >         git remote add linux-review https://github.com/intel-lab-lkp/linux
> > >         git fetch --no-tags linux-review Peter-Collingbourne/KVM-arm64-permit-MAP_SHARED-mappings-with-MTE-enabled/20220811-033310
> > >         git checkout 1a400517d8428df0ec9f86f8d303b2227ee9702f
> > >         # save the config file
> > >         mkdir build_dir && cp config build_dir/.config
> > >         COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=loongarch SHELL=/bin/bash
> > > 
> > > If you fix the issue, kindly add following tag where applicable
> > > Reported-by: kernel test robot <lkp@intel.com>
> > > 
> > > All warnings (new ones prefixed by >>):
> > > 
> > > >> mm/memory.c:92:2: warning: #warning Unfortunate NUMA and NUMA Balancing config, growing page-frame for last_cpupid. [-Wcpp]
> > >       92 | #warning Unfortunate NUMA and NUMA Balancing config, growing page-frame for last_cpupid.
> > >          |  ^~~~~~~
> > > 
> > > 
> > > vim +92 mm/memory.c
> > > 
> > > 42b7772812d15b Jan Beulich    2008-07-23  90  
> > > af27d9403f5b80 Arnd Bergmann  2018-02-16  91  #if defined(LAST_CPUPID_NOT_IN_PAGE_FLAGS) && !defined(CONFIG_COMPILE_TEST)
> > > 90572890d20252 Peter Zijlstra 2013-10-07 @92  #warning Unfortunate NUMA and NUMA Balancing config, growing page-frame for last_cpupid.
> > > 75980e97daccfc Peter Zijlstra 2013-02-22  93  #endif
> > > 75980e97daccfc Peter Zijlstra 2013-02-22  94  
> > 
> > It looks like with CONFIG_NUMA_BALANCING=y on loongarch we run out of
> > spare bits in page->flags to fit last_cpupid. The reason we don't see it
> > on arm64 is that we select SPARSEMEM_VMEMMAP and SECTIONS_WIDTH becomes
> > 0. On loongarch SECTIONS_WIDTH takes 19 bits (48 - 29) in page->flags.
> > 
> > I think instead of always defining PG_arch_{2,3} if CONFIG_64BIT, we
> > could add a CONFIG_ARCH_WANTS_PG_ARCH_23 option and only select it on
> > arm64 for the time being.
> 
> I pushed a patch as the first one on the arm64 devel/mte-pg-flags
> branch. Also updated the last patch on this branch following Steven's
> comments.
> 
> Peter, please let me know if you want to pick this series up together
> with your other KVM patches. Otherwise I can post it separately, it's
> worth merging it on its own as it clarifies the page flag vs tag setting
> ordering.

I'm looking at queuing this, but I'm confused by this comment. Do I
need to pick this as part of the series? Or is this an independent
thing (my hunch is that it is actually required not to break other
architectures...).

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

* Re: [PATCH v3 3/7] mm: Add PG_arch_3 page flag
  2022-09-19 18:12         ` Marc Zyngier
@ 2022-09-20 15:39           ` Catalin Marinas
  2022-09-20 16:33             ` Marc Zyngier
  0 siblings, 1 reply; 32+ messages in thread
From: Catalin Marinas @ 2022-09-20 15:39 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kernel test robot, Peter Collingbourne, linux-arm-kernel, kvmarm,
	kbuild-all, Cornelia Huck, Will Deacon, Evgenii Stepanov, kvm,
	Steven Price, Vincenzo Frascino

On Mon, Sep 19, 2022 at 07:12:53PM +0100, Marc Zyngier wrote:
> On Mon, 05 Sep 2022 18:01:55 +0100,
> Catalin Marinas <catalin.marinas@arm.com> wrote:
> > Peter, please let me know if you want to pick this series up together
> > with your other KVM patches. Otherwise I can post it separately, it's
> > worth merging it on its own as it clarifies the page flag vs tag setting
> > ordering.
> 
> I'm looking at queuing this, but I'm confused by this comment. Do I
> need to pick this as part of the series? Or is this an independent
> thing (my hunch is that it is actually required not to break other
> architectures...).

This series (at least the first patches) won't apply cleanly on
top of 6.0-rc1 and, of course, we shouldn't break other architectures. I
can repost the whole series but I don't have the setup to test the
MAP_SHARED KVM option (unless Peter plans to post it soon).

-- 
Catalin

* Re: [PATCH v3 3/7] mm: Add PG_arch_3 page flag
  2022-09-20 15:39           ` Catalin Marinas
@ 2022-09-20 16:33             ` Marc Zyngier
  2022-09-20 16:58               ` Catalin Marinas
  0 siblings, 1 reply; 32+ messages in thread
From: Marc Zyngier @ 2022-09-20 16:33 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: kernel test robot, Peter Collingbourne, linux-arm-kernel, kvmarm,
	kbuild-all, Cornelia Huck, Will Deacon, Evgenii Stepanov, kvm,
	Steven Price, Vincenzo Frascino

On Tue, 20 Sep 2022 16:39:47 +0100,
Catalin Marinas <catalin.marinas@arm.com> wrote:
> 
> On Mon, Sep 19, 2022 at 07:12:53PM +0100, Marc Zyngier wrote:
> > On Mon, 05 Sep 2022 18:01:55 +0100,
> > Catalin Marinas <catalin.marinas@arm.com> wrote:
> > > Peter, please let me know if you want to pick this series up together
> > > with your other KVM patches. Otherwise I can post it separately, it's
> > > worth merging it on its own as it clarifies the page flag vs tag setting
> > > ordering.
> > 
> > I'm looking at queuing this, but I'm confused by this comment. Do I
> > need to pick this as part of the series? Or is this an independent
> > thing (my hunch is that it is actually required not to break other
> > architectures...).
> 
> This series (at least the first patches) won't apply cleanly on
> top of 6.0-rc1 and, of course, we shouldn't break other architectures. I
> can repost the whole series but I don't have the setup to test the
> MAP_SHARED KVM option (unless Peter plans to post it soon).

I don't feel brave enough to take a series affecting all architectures
so late in the game, and the whole thing had very little arm64
exposure. The latest QEMU doesn't seem to work anymore, so I don't
have any MTE-capable emulation (and using the FVP remotely is a pain
in the proverbial neck).

I'll come back to this after the merge window, should Peter decide to
respin the series.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

* Re: [PATCH v3 3/7] mm: Add PG_arch_3 page flag
  2022-09-20 16:33             ` Marc Zyngier
@ 2022-09-20 16:58               ` Catalin Marinas
  2022-09-21  3:53                 ` Peter Collingbourne
  0 siblings, 1 reply; 32+ messages in thread
From: Catalin Marinas @ 2022-09-20 16:58 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kernel test robot, Peter Collingbourne, linux-arm-kernel, kvmarm,
	kbuild-all, Cornelia Huck, Will Deacon, Evgenii Stepanov, kvm,
	Steven Price, Vincenzo Frascino

On Tue, Sep 20, 2022 at 05:33:42PM +0100, Marc Zyngier wrote:
> On Tue, 20 Sep 2022 16:39:47 +0100,
> Catalin Marinas <catalin.marinas@arm.com> wrote:
> > On Mon, Sep 19, 2022 at 07:12:53PM +0100, Marc Zyngier wrote:
> > > On Mon, 05 Sep 2022 18:01:55 +0100,
> > > Catalin Marinas <catalin.marinas@arm.com> wrote:
> > > > Peter, please let me know if you want to pick this series up together
> > > > with your other KVM patches. Otherwise I can post it separately, it's
> > > > worth merging it on its own as it clarifies the page flag vs tag setting
> > > > ordering.
> > > 
> > > I'm looking at queuing this, but I'm confused by this comment. Do I
> > > need to pick this as part of the series? Or is this an independent
> > > thing (my hunch is that it is actually required not to break other
> > > architectures...).
> > 
> > This series (at least the first patches) won't apply cleanly on
> > top of 6.0-rc1 and, of course, we shouldn't break other architectures. I
> > can repost the whole series but I don't have the setup to test the
> > MAP_SHARED KVM option (unless Peter plans to post it soon).
> 
> I don't feel brave enough to take a series affecting all architectures

It shouldn't affect the others: the only change is that PG_arch_2 is now
only defined for arm64, and no other architecture is using it. The
problem with loongarch is that it doesn't have enough spare bits in
page->flags; even without any patches I think it's broken with a large
enough NR_CPUS.
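
To be concrete, the patch on the branch is along these lines (a sketch
from memory, using the option name proposed earlier in the thread; the
exact form on the branch may differ):

  # mm/Kconfig
  config ARCH_WANTS_PG_ARCH_23
  	bool

  # arch/arm64/Kconfig
  config ARM64
  	...
  	select ARCH_WANTS_PG_ARCH_23

  /* include/linux/page-flags.h: previously guarded by CONFIG_64BIT */
  #ifdef CONFIG_ARCH_WANTS_PG_ARCH_23
  	PG_arch_2,
  	PG_arch_3,
  #endif

so loongarch (and everyone else) keeps its current page->flags layout.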

> so late in the game, and the whole thing had very little arm64
> exposure. The latest QEMU doesn't seem to work anymore, so I don't
> have any MTE-capable emulation (and using the FVP remotely is a pain
> in the proverbial neck).
> 
> I'll come back to this after the merge window, should Peter decide to
> respin the series.

It makes sense.

-- 
Catalin

* Re: [PATCH v3 3/7] mm: Add PG_arch_3 page flag
  2022-09-20 16:58               ` Catalin Marinas
@ 2022-09-21  3:53                 ` Peter Collingbourne
  0 siblings, 0 replies; 32+ messages in thread
From: Peter Collingbourne @ 2022-09-21  3:53 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Marc Zyngier, kernel test robot, Linux ARM, kvmarm, kbuild-all,
	Cornelia Huck, Will Deacon, Evgenii Stepanov, kvm, Steven Price,
	Vincenzo Frascino

On Tue, Sep 20, 2022 at 9:58 AM Catalin Marinas <catalin.marinas@arm.com> wrote:
>
> On Tue, Sep 20, 2022 at 05:33:42PM +0100, Marc Zyngier wrote:
> > On Tue, 20 Sep 2022 16:39:47 +0100,
> > Catalin Marinas <catalin.marinas@arm.com> wrote:
> > > On Mon, Sep 19, 2022 at 07:12:53PM +0100, Marc Zyngier wrote:
> > > > On Mon, 05 Sep 2022 18:01:55 +0100,
> > > > Catalin Marinas <catalin.marinas@arm.com> wrote:
> > > > > Peter, please let me know if you want to pick this series up together
> > > > > with your other KVM patches. Otherwise I can post it separately, it's
> > > > > worth merging it on its own as it clarifies the page flag vs tag setting
> > > > > ordering.
> > > >
> > > > I'm looking at queuing this, but I'm confused by this comment. Do I
> > > > need to pick this as part of the series? Or is this an independent
> > > > thing (my hunch is that it is actually required not to break other
> > > > architectures...).
> > >
> > > This series (at least the first patches) won't apply cleanly on
> > > top of 6.0-rc1 and, of course, we shouldn't break other architectures. I
> > > can repost the whole series but I don't have the setup to test the
> > > MAP_SHARED KVM option (unless Peter plans to post it soon).
> >
> > I don't feel brave enough to take a series affecting all architectures
>
> It shouldn't affect the others: the only change is that PG_arch_2 is now
> only defined for arm64, and no other architecture is using it. The
> problem with loongarch is that it doesn't have enough spare bits in
> page->flags; even without any patches I think it's broken with a large
> enough NR_CPUS.
>
> > so late in the game, and the whole thing had very little arm64
> > exposure. The latest QEMU doesn't seem to work anymore, so I don't
> > have any MTE-capable emulation (and using the FVP remotely is a pain
> > in the proverbial neck).
> >
> > I'll come back to this after the merge window, should Peter decide to
> > respin the series.
>
> It makes sense.

Apologies for the delay, I've now sent out v4 of this series which
includes the patches on your branch.

Peter

Thread overview: 32+ messages
2022-08-10 19:30 [PATCH v3 0/7] KVM: arm64: permit MAP_SHARED mappings with MTE enabled Peter Collingbourne
2022-08-10 19:30 ` [PATCH v3 1/7] arm64: mte: Fix/clarify the PG_mte_tagged semantics Peter Collingbourne
2022-09-01 15:49   ` Catalin Marinas
2022-09-02 10:26   ` Cornelia Huck
2022-09-02 14:47   ` Steven Price
2022-08-10 19:30 ` [PATCH v3 2/7] KVM: arm64: Simplify the sanitise_mte_tags() logic Peter Collingbourne
2022-09-02 14:47   ` Steven Price
2022-08-10 19:30 ` [PATCH v3 3/7] mm: Add PG_arch_3 page flag Peter Collingbourne
2022-08-11  7:16   ` kernel test robot
2022-09-01 17:59     ` Catalin Marinas
2022-09-05 17:01       ` Catalin Marinas
2022-09-19 18:12         ` Marc Zyngier
2022-09-20 15:39           ` Catalin Marinas
2022-09-20 16:33             ` Marc Zyngier
2022-09-20 16:58               ` Catalin Marinas
2022-09-21  3:53                 ` Peter Collingbourne
2022-08-10 19:30 ` [PATCH v3 4/7] arm64: mte: Lock a page for MTE tag initialisation Peter Collingbourne
2022-09-02 14:47   ` Steven Price
2022-09-02 16:28     ` Catalin Marinas
2022-09-02 16:58       ` Catalin Marinas
2022-09-05  7:37         ` Steven Price
2022-08-10 19:30 ` [PATCH v3 5/7] KVM: arm64: unify the tests for VMAs in memslots when MTE is enabled Peter Collingbourne
2022-09-02 13:41   ` Catalin Marinas
2022-09-02 14:47   ` Steven Price
2022-08-10 19:30 ` [PATCH v3 6/7] KVM: arm64: permit all VM_MTE_ALLOWED mappings with MTE enabled Peter Collingbourne
2022-09-02 13:45   ` Catalin Marinas
2022-09-02 14:47     ` Steven Price
2022-09-12 16:23   ` Marc Zyngier
2022-09-13  4:10     ` Peter Collingbourne
2022-08-10 19:30 ` [PATCH v3 7/7] Documentation: document the ABI changes for KVM_CAP_ARM_MTE Peter Collingbourne
2022-09-02 13:49   ` Catalin Marinas
2022-09-02 14:05 ` [PATCH v3 0/7] KVM: arm64: permit MAP_SHARED mappings with MTE enabled Catalin Marinas
