* [PATCH v4 0/2] Memory poison recovery in khugepaged collapsing
From: Jiaqi Yan @ 2022-05-27 19:07 UTC
  To: shy828301, tongtiangen
  Cc: tony.luck, naoya.horiguchi, kirill.shutemov, linmiaohe, juew,
	jiaqiyan, linux-mm

Problem
=======
Memory DIMMs are subject to multi-bit flips, i.e. memory errors.
As memory size and density increase, so do the likelihood and the
number of memory errors; the growing size and density of server RAM
in data centers and clouds has already shown an increase in
uncorrectable memory errors. The kernel already has mechanisms to
recover from uncorrectable memory errors. This patch series adds
such a recovery mechanism for one particular kernel agent,
khugepaged, when it collapses memory pages.

Impact
======
The main reason we chose to make khugepaged collapsing tolerant of
memory failures is the high likelihood that it accesses poisoned
memory while performing functionally optional compaction.
Standard applications typically have no strict requirement on the
size of their pages, so the kernel gives them 4K pages. The kernel
can improve application performance by either

  1) giving applications 2M pages to begin with, or
  2) collapsing 4K pages into 2M pages when possible.

This collapsing operation is done by khugepaged, a kernel agent
that constantly scans memory. When collapsing 4K pages into a 2M
page, khugepaged must copy the data from the 4K pages into a
physically contiguous 2M page, so as long as a single poisoned
cache line exists in a collapsible 4K page, khugepaged will
eventually access it. Today the result is a kernel panic triggered
by the machine check exception. However, khugepaged's compaction is
not a functionally required kernel action, so making it tolerant of
poisoned memory greatly improves the user experience.
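
The recovery building block already exists in the kernel:
copy_mc_to_kernel(), a machine-check-aware copy primitive available
on architectures that opt in (CONFIG_ARCH_HAS_COPY_MC). Unlike a
plain memcpy, it returns the number of bytes it failed to copy
instead of letting consumed poison take the machine down. A minimal
sketch of the idea (illustration only; page_copy_failed is a made-up
name, and patch 1 below wraps this same pattern, plus
kmap_local_page, into a new copy_highpage_mc helper):

  /* Sketch: copy one 4K page, reporting poison instead of panicking. */
  static bool page_copy_failed(void *dst, const void *src)
  {
  	/* copy_mc_to_kernel() returns the number of bytes NOT copied */
  	return copy_mc_to_kernel(dst, src, PAGE_SIZE) > 0;
  }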

This patch series targets the case where khugepaged is the first
one to detect the memory errors on the poisoned pages; IOW, the
pages are not yet known to contain memory errors when khugepaged
collapsing reaches them. In our observation, this happens
frequently when the system's huge page ratio is relatively low,
which is fairly common for virtual machines running in the cloud.

Solution
========
As stated above, it is undesirable to crash the system merely
because khugepaged touched poisoned pages while collapsing 4K
pages. The high-level idea of this series is to skip the whole
group of pages (usually 512 4K pages) once khugepaged finds that
one of them is poisoned, as the group has become ineligible for
collapse.
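
Concretely, the copy step becomes a loop that can bail out early.
A simplified sketch of patch 1's __collapse_huge_page_copy (the
real loop also handles pte_none and zero-pfn entries; src_page_of()
is shorthand for looking up the page behind the i-th source PTE):

  result = SCAN_SUCCEED;
  for (i = 0; i < HPAGE_PMD_NR; i++) {	/* usually 512 iterations */
  	if (copy_highpage_mc(hpage + i, src_page_of(i))) {
  		/* hit poison: give up collapsing this 2M region */
  		result = SCAN_COPY_MC;
  		break;
  	}
  }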

We are also careful to unwind the operations khugepaged performed
before it detected the memory failure. For example, before copying
and collapsing a group of anonymous pages into a huge page, the
source pages are isolated and their page table is unlinked from the
PMD. These operations need to be undone to ensure the pages are not
changed or lost from the perspective of other threads (in both user
and kernel space). For file-backed memory pages, a rollback path
already exists; this series just extends it so that khugepaged also
rolls back correctly when it fails to copy poisoned 4K pages.
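
For the anonymous case, the crux of the rollback is re-installing
the original page table under the PMD lock before any isolated page
is released. Condensed from patch 1, where `rollback' is the PMD
value saved before the page table was unlinked:

  if (!copy_succeeded) {
  	/* point the PMD back at the original page table ... */
  	pmd_ptl = pmd_lock(vma->vm_mm, pmd);
  	pmd_populate(vma->vm_mm, pmd, pmd_pgtable(rollback));
  	spin_unlock(pmd_ptl);
  	/* ... then the second loop returns the isolated 4K pages to the LRU */
  }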

Changelog
=========

v4 changes
- Remove the trace_mm_collapse_huge_page_copy tracepoint; only
  interpret the new result in trace_mm_collapse_huge_page()
- Remove a confusing comment, fix nits and typos

v3 changes
- Incorporate feedback from Yang Shi <shy828301@gmail.com>
- Add tracepoint for __collapse_huge_page_copy
- Correct comment about mmap_read_lock

v2 changes
- Incorporate feedback from Yang Shi <shy828301@gmail.com>
- Only keep copy_highpage_mc
- Add new scan result SCAN_COPY_MC
- Defer the NR_FILE_THPS update until copying succeeded

Jiaqi Yan (2):
  mm: khugepaged: recover from poisoned anonymous memory
  mm: khugepaged: recover from poisoned file-backed memory

 include/linux/highmem.h            |  19 +++
 include/trace/events/huge_memory.h |   3 +-
 mm/khugepaged.c                    | 203 ++++++++++++++++++++---------
 3 files changed, 166 insertions(+), 59 deletions(-)

-- 
2.36.1.124.g0e6072fb45-goog




* [PATCH v4 1/2] mm: khugepaged: recover from poisoned anonymous memory
From: Jiaqi Yan @ 2022-05-27 19:07 UTC
  To: shy828301, tongtiangen
  Cc: tony.luck, naoya.horiguchi, kirill.shutemov, linmiaohe, juew,
	jiaqiyan, linux-mm

Make __collapse_huge_page_copy return whether
collapsing/copying anonymous pages succeeded,
and make collapse_huge_page handle the return status.

Break the existing PTE scan loop into two for-loops.
The first loop copies source pages into the target huge page
and can fail gracefully on memory errors in the source pages.
If copying succeeds for all pages, the second loop releases
and cleans up these normal pages.
Otherwise, the second loop rolls back the page table and page
states by:
1) re-establishing the original PTEs-to-PMD connection, and
2) releasing the source pages back to their LRU list.

Signed-off-by: Jiaqi Yan <jiaqiyan@google.com>
---
 include/linux/highmem.h            |  19 +++++
 include/trace/events/huge_memory.h |   3 +-
 mm/khugepaged.c                    | 130 ++++++++++++++++++++++-------
 3 files changed, 121 insertions(+), 31 deletions(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 39bb9b47fa9cd..0ccb1e92c4b06 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -298,6 +298,25 @@ static inline void copy_highpage(struct page *to, struct page *from)
 
 #endif
 
+/*
+ * Machine-check-safe version of copy_highpage.
+ * Returns true if copying the page content failed; otherwise false.
+ * Note that handling #MC requires arch opt-in.
+ */
+static inline bool copy_highpage_mc(struct page *to, struct page *from)
+{
+	char *vfrom, *vto;
+	unsigned long ret;
+
+	vfrom = kmap_local_page(from);
+	vto = kmap_local_page(to);
+	ret = copy_mc_to_kernel(vto, vfrom, PAGE_SIZE);
+	kunmap_local(vto);
+	kunmap_local(vfrom);
+
+	return ret > 0;
+}
+
 static inline void memcpy_page(struct page *dst_page, size_t dst_off,
 			       struct page *src_page, size_t src_off,
 			       size_t len)
diff --git a/include/trace/events/huge_memory.h b/include/trace/events/huge_memory.h
index 4fdb14a81108b..f08687046ce41 100644
--- a/include/trace/events/huge_memory.h
+++ b/include/trace/events/huge_memory.h
@@ -34,7 +34,8 @@
 	EM( SCAN_ALLOC_HUGE_PAGE_FAIL,	"alloc_huge_page_failed")	\
 	EM( SCAN_CGROUP_CHARGE_FAIL,	"ccgroup_charge_failed")	\
 	EM( SCAN_TRUNCATED,		"truncated")			\
-	EMe(SCAN_PAGE_HAS_PRIVATE,	"page_has_private")		\
+	EM( SCAN_PAGE_HAS_PRIVATE,	"page_has_private")		\
+	EMe(SCAN_COPY_MC,		"copy_poisoned_page")		\
 
 #undef EM
 #undef EMe
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 131492fd1148b..0dd28ecc915d1 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -52,6 +52,7 @@ enum scan_result {
 	SCAN_CGROUP_CHARGE_FAIL,
 	SCAN_TRUNCATED,
 	SCAN_PAGE_HAS_PRIVATE,
+	SCAN_COPY_MC,
 };
 
 #define CREATE_TRACE_POINTS
@@ -739,44 +740,99 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 	return 0;
 }
 
-static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
-				      struct vm_area_struct *vma,
-				      unsigned long address,
-				      spinlock_t *ptl,
-				      struct list_head *compound_pagelist)
+/*
+ * __collapse_huge_page_copy - attempts to copy memory contents from normal
+ * pages to a hugepage. Cleans up the normal pages if copying succeeds;
+ * otherwise restores the original page table and releases isolated normal
+ * pages. Returns true if copying succeeds, otherwise returns false.
+ *
+ * @pte: start of the PTEs to copy from
+ * @page: the new hugepage to copy contents to
+ * @pmd: pointer to the new hugepage's PMD
+ * @rollback: the original normal pages' PMD
+ * @address: starting address to copy
+ * @pte_ptl: lock on normal pages' PTEs
+ * @compound_pagelist: list that stores compound pages
+ */
+static bool __collapse_huge_page_copy(pte_t *pte,
+				struct page *page,
+				pmd_t *pmd,
+				pmd_t rollback,
+				struct vm_area_struct *vma,
+				unsigned long address,
+				spinlock_t *pte_ptl,
+				struct list_head *compound_pagelist)
 {
 	struct page *src_page, *tmp;
 	pte_t *_pte;
-	for (_pte = pte; _pte < pte + HPAGE_PMD_NR;
-				_pte++, page++, address += PAGE_SIZE) {
-		pte_t pteval = *_pte;
+	pte_t pteval;
+	unsigned long _address;
+	spinlock_t *pmd_ptl;
+	bool copy_succeeded = true;
 
-		if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
+	/*
+	 * Copying pages' contents is subject to memory poison at any iteration.
+	 */
+	for (_pte = pte, _address = address;
+			_pte < pte + HPAGE_PMD_NR;
+			_pte++, page++, _address += PAGE_SIZE) {
+		pteval = *_pte;
+
+		if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval)))
-			clear_user_highpage(page, address);
+			clear_user_highpage(page, _address);
-			add_mm_counter(vma->vm_mm, MM_ANONPAGES, 1);
-			if (is_zero_pfn(pte_pfn(pteval))) {
-				/*
-				 * ptl mostly unnecessary.
-				 */
-				spin_lock(ptl);
-				ptep_clear(vma->vm_mm, address, _pte);
-				spin_unlock(ptl);
+		else {
+			src_page = pte_page(pteval);
+			if (copy_highpage_mc(page, src_page)) {
+				copy_succeeded = false;
+				break;
+			}
+		}
+	}
+
+	if (!copy_succeeded) {
+		/*
+		 * Copying failed, re-establish the regular PMD that points to
+		 * the regular page table. Restoring PMD needs to be done prior
+		 * to releasing pages. Since pages are still isolated and locked
+		 * here, acquiring anon_vma_lock_write is unnecessary.
+		 */
+		pmd_ptl = pmd_lock(vma->vm_mm, pmd);
+		pmd_populate(vma->vm_mm, pmd, pmd_pgtable(rollback));
+		spin_unlock(pmd_ptl);
+	}
+
+	for (_pte = pte, _address = address; _pte < pte + HPAGE_PMD_NR;
+			_pte++, _address += PAGE_SIZE) {
+		pteval = *_pte;
+		if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
+			if (copy_succeeded) {
+				add_mm_counter(vma->vm_mm, MM_ANONPAGES, 1);
+				if (is_zero_pfn(pte_pfn(pteval))) {
+					/*
+					 * ptl mostly unnecessary.
+					 */
+					spin_lock(pte_ptl);
+					pte_clear(vma->vm_mm, _address, _pte);
+					spin_unlock(pte_ptl);
+				}
 			}
 		} else {
 			src_page = pte_page(pteval);
-			copy_user_highpage(page, src_page, address, vma);
 			if (!PageCompound(src_page))
 				release_pte_page(src_page);
-			/*
-			 * ptl mostly unnecessary, but preempt has to
-			 * be disabled to update the per-cpu stats
-			 * inside page_remove_rmap().
-			 */
-			spin_lock(ptl);
-			ptep_clear(vma->vm_mm, address, _pte);
-			page_remove_rmap(src_page, false);
-			spin_unlock(ptl);
-			free_page_and_swap_cache(src_page);
+
+			if (copy_succeeded) {
+				/*
+				 * ptl mostly unnecessary, but preempt has to
+				 * be disabled to update the per-cpu stats
+				 * inside page_remove_rmap().
+				 */
+				spin_lock(pte_ptl);
+				pte_clear(vma->vm_mm, _address, _pte);
+				page_remove_rmap(src_page, false);
+				spin_unlock(pte_ptl);
+				free_page_and_swap_cache(src_page);
+			}
 		}
 	}
 
@@ -784,6 +840,8 @@ static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
 		list_del(&src_page->lru);
 		release_pte_page(src_page);
 	}
+
+	return copy_succeeded;
 }
 
 static void khugepaged_alloc_sleep(void)
@@ -1066,6 +1124,7 @@ static void collapse_huge_page(struct mm_struct *mm,
 	struct vm_area_struct *vma;
 	struct mmu_notifier_range range;
 	gfp_t gfp;
+	bool copied = false;
 
 	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
 
@@ -1177,9 +1236,13 @@ static void collapse_huge_page(struct mm_struct *mm,
 	 */
 	anon_vma_unlock_write(vma->anon_vma);
 
-	__collapse_huge_page_copy(pte, new_page, vma, address, pte_ptl,
-			&compound_pagelist);
+	copied = __collapse_huge_page_copy(pte, new_page, pmd, _pmd,
+			vma, address, pte_ptl, &compound_pagelist);
 	pte_unmap(pte);
+	if (!copied) {
+		result = SCAN_COPY_MC;
+		goto out_up_write;
+	}
 	/*
 	 * spin_lock() below is not the equivalent of smp_wmb(), but
 	 * the smp_wmb() inside __SetPageUptodate() can be reused to
@@ -2168,6 +2231,13 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
 				khugepaged_scan_file(mm, file, pgoff, hpage);
 				fput(file);
 			} else {
+				/*
+				 * mmap_read_lock is
+				 * 1) still held if scan failed;
+				 * 2) released if scan succeeded.
+				 * It is not affected by collapsing or copying
+				 * operations.
+				 */
 				ret = khugepaged_scan_pmd(mm, vma,
 						khugepaged_scan.address,
 						hpage);
-- 
2.36.1.124.g0e6072fb45-goog




* [PATCH v4 2/2] mm: khugepaged: recover from poisoned file-backed memory
From: Jiaqi Yan @ 2022-05-27 19:07 UTC
  To: shy828301, tongtiangen
  Cc: tony.luck, naoya.horiguchi, kirill.shutemov, linmiaohe, juew,
	jiaqiyan, linux-mm

Make collapse_file roll back when copying pages fails.
More concretely:
* extract the copying operations into a separate loop
* postpone the update of nr_none until both scanning and
  copying have succeeded
* postpone joining small xarray entries until both scanning
  and copying have succeeded
* postpone the updates to NR_XXX_THPS
* for non-SHMEM files, roll back filemap_nr_thps_inc if
  scanning succeeded but copying failed

Signed-off-by: Jiaqi Yan <jiaqiyan@google.com>
---
 mm/khugepaged.c | 73 ++++++++++++++++++++++++++++++-------------------
 1 file changed, 45 insertions(+), 28 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 0dd28ecc915d1..1ea485c315756 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1706,7 +1706,7 @@ static void collapse_file(struct mm_struct *mm,
 {
 	struct address_space *mapping = file->f_mapping;
 	gfp_t gfp;
-	struct page *new_page;
+	struct page *new_page, *page, *tmp;
 	pgoff_t index, end = start + HPAGE_PMD_NR;
 	LIST_HEAD(pagelist);
 	XA_STATE_ORDER(xas, &mapping->i_pages, start, HPAGE_PMD_ORDER);
@@ -1762,7 +1762,7 @@ static void collapse_file(struct mm_struct *mm,
 
 	xas_set(&xas, start);
 	for (index = start; index < end; index++) {
-		struct page *page = xas_next(&xas);
+		page = xas_next(&xas);
 
 		VM_BUG_ON(index != xas.xa_index);
 		if (is_shmem) {
@@ -1934,10 +1934,7 @@ static void collapse_file(struct mm_struct *mm,
 	}
 	nr = thp_nr_pages(new_page);
 
-	if (is_shmem)
-		__mod_lruvec_page_state(new_page, NR_SHMEM_THPS, nr);
-	else {
-		__mod_lruvec_page_state(new_page, NR_FILE_THPS, nr);
+	if (!is_shmem) {
 		filemap_nr_thps_inc(mapping);
 		/*
 		 * Paired with smp_mb() in do_dentry_open() to ensure
@@ -1948,40 +1945,44 @@ static void collapse_file(struct mm_struct *mm,
 		smp_mb();
 		if (inode_is_open_for_write(mapping->host)) {
 			result = SCAN_FAIL;
-			__mod_lruvec_page_state(new_page, NR_FILE_THPS, -nr);
 			filemap_nr_thps_dec(mapping);
 			goto xa_locked;
 		}
 	}
 
-	if (nr_none) {
-		__mod_lruvec_page_state(new_page, NR_FILE_PAGES, nr_none);
-		if (is_shmem)
-			__mod_lruvec_page_state(new_page, NR_SHMEM, nr_none);
-	}
-
-	/* Join all the small entries into a single multi-index entry */
-	xas_set_order(&xas, start, HPAGE_PMD_ORDER);
-	xas_store(&xas, new_page);
 xa_locked:
 	xas_unlock_irq(&xas);
 xa_unlocked:
 
 	if (result == SCAN_SUCCEED) {
-		struct page *page, *tmp;
-
 		/*
 		 * Replacing old pages with new one has succeeded, now we
-		 * need to copy the content and free the old pages.
+		 * attempt to copy the contents.
 		 */
 		index = start;
-		list_for_each_entry_safe(page, tmp, &pagelist, lru) {
+		list_for_each_entry(page, &pagelist, lru) {
 			while (index < page->index) {
 				clear_highpage(new_page + (index % HPAGE_PMD_NR));
 				index++;
 			}
-			copy_highpage(new_page + (page->index % HPAGE_PMD_NR),
-					page);
+			if (copy_highpage_mc(new_page + (page->index % HPAGE_PMD_NR), page)) {
+				result = SCAN_COPY_MC;
+				break;
+			}
+			index++;
+		}
+		while (result == SCAN_SUCCEED && index < end) {
+			clear_highpage(new_page + (index % HPAGE_PMD_NR));
+			index++;
+		}
+	}
+
+	if (result == SCAN_SUCCEED) {
+		/*
+		 * Copying old pages to the huge page has succeeded, now we
+		 * need to free the old pages.
+		 */
+		list_for_each_entry_safe(page, tmp, &pagelist, lru) {
 			list_del(&page->lru);
 			page->mapping = NULL;
 			page_ref_unfreeze(page, 1);
@@ -1989,12 +1990,23 @@ static void collapse_file(struct mm_struct *mm,
 			ClearPageUnevictable(page);
 			unlock_page(page);
 			put_page(page);
-			index++;
 		}
-		while (index < end) {
-			clear_highpage(new_page + (index % HPAGE_PMD_NR));
-			index++;
+
+		xas_lock_irq(&xas);
+		if (is_shmem)
+			__mod_lruvec_page_state(new_page, NR_SHMEM_THPS, nr);
+		else
+			__mod_lruvec_page_state(new_page, NR_FILE_THPS, nr);
+
+		if (nr_none) {
+			__mod_lruvec_page_state(new_page, NR_FILE_PAGES, nr_none);
+			if (is_shmem)
+				__mod_lruvec_page_state(new_page, NR_SHMEM, nr_none);
 		}
+		/* Join all the small entries into a single multi-index entry. */
+		xas_set_order(&xas, start, HPAGE_PMD_ORDER);
+		xas_store(&xas, new_page);
+		xas_unlock_irq(&xas);
 
 		SetPageUptodate(new_page);
 		page_ref_add(new_page, HPAGE_PMD_NR - 1);
@@ -2010,8 +2022,6 @@ static void collapse_file(struct mm_struct *mm,
 
 		khugepaged_pages_collapsed++;
 	} else {
-		struct page *page;
-
 		/* Something went wrong: roll back page cache changes */
 		xas_lock_irq(&xas);
 		mapping->nrpages -= nr_none;
@@ -2045,6 +2055,13 @@ static void collapse_file(struct mm_struct *mm,
 			xas_lock_irq(&xas);
 		}
 		VM_BUG_ON(nr_none);
+		/*
+		 * Undo the update of filemap_nr_thps_inc for non-SHMEM files only.
+		 * This undo is only needed when the failure is due to SCAN_COPY_MC.
+		 */
+		if (!is_shmem && result == SCAN_COPY_MC)
+			filemap_nr_thps_dec(mapping);
+
 		xas_unlock_irq(&xas);
 
 		new_page->mapping = NULL;
-- 
2.36.1.124.g0e6072fb45-goog




* Re: [PATCH v4 1/2] mm: khugepaged: recover from poisoned anonymous memory
From: Yang Shi @ 2022-05-27 20:21 UTC
  To: Jiaqi Yan
  Cc: Tong Tiangen, Tony Luck,
	HORIGUCHI NAOYA(堀口 直也),
	Kirill A. Shutemov, Miaohe Lin, Jue Wang, Linux MM

On Fri, May 27, 2022 at 12:07 PM Jiaqi Yan <jiaqiyan@google.com> wrote:
>
> Make __collapse_huge_page_copy return whether
> collapsing/copying anonymous pages succeeded,
> and make collapse_huge_page handle the return status.
>
> Break the existing PTE scan loop into two for-loops.
> The first loop copies source pages into the target huge page
> and can fail gracefully on memory errors in the source pages.
> If copying succeeds for all pages, the second loop releases
> and cleans up these normal pages.
> Otherwise, the second loop rolls back the page table and page
> states by:
> 1) re-establishing the original PTEs-to-PMD connection, and
> 2) releasing the source pages back to their LRU list.
>
> Signed-off-by: Jiaqi Yan <jiaqiyan@google.com>

Thanks for your patience.

Reviewed-by: Yang Shi <shy828301@gmail.com>


* Re: [PATCH v4 2/2] mm: khugepaged: recover from poisoned file-backed memory
From: Yang Shi @ 2022-05-27 20:21 UTC
  To: Jiaqi Yan
  Cc: Tong Tiangen, Tony Luck,
	HORIGUCHI NAOYA(堀口 直也),
	Kirill A. Shutemov, Miaohe Lin, Jue Wang, Linux MM

On Fri, May 27, 2022 at 12:07 PM Jiaqi Yan <jiaqiyan@google.com> wrote:
>
> Make collapse_file roll back when copying pages fails.
> More concretely:
> * extract the copying operations into a separate loop
> * postpone the update of nr_none until both scanning and
>   copying have succeeded
> * postpone joining small xarray entries until both scanning
>   and copying have succeeded
> * postpone the updates to NR_XXX_THPS
> * for non-SHMEM files, roll back filemap_nr_thps_inc if
>   scanning succeeded but copying failed
>
> Signed-off-by: Jiaqi Yan <jiaqiyan@google.com>

Reviewed-by: Yang Shi <shy828301@gmail.com>
