From: Ryan Roberts <ryan.roberts@arm.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	"Matthew Wilcox (Oracle)" <willy@infradead.org>,
	Yu Zhao <yuzhao@google.com>,
	"Yin, Fengwei" <fengwei.yin@intel.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>,
	linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org
Subject: [RFC v2 PATCH 11/17] mm: Split __wp_page_copy_user() into 2 variants
Date: Fri, 14 Apr 2023 14:02:57 +0100	[thread overview]
Message-ID: <20230414130303.2345383-12-ryan.roberts@arm.com> (raw)
In-Reply-To: <20230414130303.2345383-1-ryan.roberts@arm.com>

We will soon support CoWing large folios, so we will need to copy a
contiguous range of pages when there is a source folio. Therefore,
split __wp_page_copy_user() into 2 variants:

__wp_page_copy_user_pfn() copies a single pfn to a destination page.
This is used when CoWing from a source without a folio, and always
copies exactly one page.

__wp_page_copy_user_range() copies a range of pages from source to
destination and is used when the source has an underlying folio. For now
it is only used to copy a single page, but this will change in a future
commit.

In both cases, kmsan_copy_page_meta() is moved into these helper
functions so that the caller does not need to call it separately, once
per page, in the range case.

No functional changes intended.

Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 mm/memory.c | 41 +++++++++++++++++++++++++++++------------
 1 file changed, 29 insertions(+), 12 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 7e2af54fe2e0..f2b7cfb2efc0 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2786,14 +2786,34 @@ static inline int pte_unmap_same(struct vm_fault *vmf)
 	return same;
 }

+/*
+ * Return:
+ *	0:		copy succeeded
+ *	-EHWPOISON:	copy failed due to hwpoison in source page
+ */
+static inline int __wp_page_copy_user_range(struct page *dst, struct page *src,
+						int nr, unsigned long addr,
+						struct vm_area_struct *vma)
+{
+	for (; nr != 0; nr--, dst++, src++, addr += PAGE_SIZE) {
+		if (copy_mc_user_highpage(dst, src, addr, vma)) {
+			memory_failure_queue(page_to_pfn(src), 0);
+			return -EHWPOISON;
+		}
+		kmsan_copy_page_meta(dst, src);
+	}
+
+	return 0;
+}
+
 /*
  * Return:
  *	0:		copied succeeded
  *	-EHWPOISON:	copy failed due to hwpoison in source page
  *	-EAGAIN:	copied failed (some other reason)
  */
-static inline int __wp_page_copy_user(struct page *dst, struct page *src,
-				      struct vm_fault *vmf)
+static inline int __wp_page_copy_user_pfn(struct page *dst,
+						struct vm_fault *vmf)
 {
 	int ret;
 	void *kaddr;
@@ -2803,14 +2823,6 @@ static inline int __wp_page_copy_user(struct page *dst, struct page *src,
 	struct mm_struct *mm = vma->vm_mm;
 	unsigned long addr = vmf->address;

-	if (likely(src)) {
-		if (copy_mc_user_highpage(dst, src, addr, vma)) {
-			memory_failure_queue(page_to_pfn(src), 0);
-			return -EHWPOISON;
-		}
-		return 0;
-	}
-
 	/*
 	 * If the source page was a PFN mapping, we don't have
 	 * a "struct page" for it. We do a best-effort copy by
@@ -2879,6 +2891,7 @@ static inline int __wp_page_copy_user(struct page *dst, struct page *src,
 		}
 	}

+	kmsan_copy_page_meta(dst, NULL);
 	ret = 0;

 pte_unlock:
@@ -3372,7 +3385,12 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 		if (!new_folio)
 			goto oom;

-		ret = __wp_page_copy_user(&new_folio->page, vmf->page, vmf);
+		if (likely(old_folio))
+			ret = __wp_page_copy_user_range(&new_folio->page,
+							vmf->page,
+							1, vmf->address, vma);
+		else
+			ret = __wp_page_copy_user_pfn(&new_folio->page, vmf);
 		if (ret) {
 			/*
 			 * COW failed, if the fault was solved by other,
@@ -3388,7 +3406,6 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 			delayacct_wpcopy_end();
 			return ret == -EHWPOISON ? VM_FAULT_HWPOISON : 0;
 		}
-		kmsan_copy_page_meta(&new_folio->page, vmf->page);
 	}

 	if (mem_cgroup_charge(new_folio, mm, GFP_KERNEL))
--
2.25.1



