linux-kernel.vger.kernel.org archive mirror
* [RFC v2 PATCH 0/5] Solve silent data loss caused by poisoned page cache (shmem/tmpfs)
@ 2021-09-23  3:28 Yang Shi
  2021-09-23  3:28 ` [v2 PATCH 1/5] mm: filemap: check if THP has hwpoisoned subpage for PMD page fault Yang Shi
                   ` (4 more replies)
  0 siblings, 5 replies; 11+ messages in thread
From: Yang Shi @ 2021-09-23  3:28 UTC (permalink / raw)
  To: naoya.horiguchi, hughd, kirill.shutemov, willy, peterx, osalvador, akpm
  Cc: shy828301, linux-mm, linux-fsdevel, linux-kernel


When discussing the patch that splits page cache THPs in order to offline the
poisoned page, Naoya mentioned there is a bigger problem [1] that prevents this
from working: the page cache page will be truncated if uncorrectable errors
happen.  Looking into this more deeply, it turns out this approach (truncating
the poisoned page) may incur silent data loss for all non-readonly filesystems
if the page is dirty.  It is even worse for in-memory filesystems, e.g.
shmem/tmpfs, since the data blocks are actually gone.

To solve this problem we could keep the poisoned dirty page in the page cache
and notify users on any later access, e.g. page fault, read/write, etc.  Clean
pages can be truncated as is, since they can be re-read from disk later on.

The consequence is that filesystems may find the poisoned page and manipulate
it as a healthy page, since filesystems do not actually check whether the page
is poisoned in any of the relevant paths except page fault.  In general, we
need to make filesystems aware of poisoned pages before we can keep them in
the page cache in order to solve the data loss problem.

To make filesystems aware of poisoned pages we should consider:
- The page should not be written back: clearing the dirty flag prevents
  writeback.
- The page should not be dropped (it appears as a clean page) by drop_caches or
  other callers: the refcount pin from hwpoison prevents invalidation (called
  by cache drop, inode cache shrinking, etc.), but it doesn't prevent
  invalidation in the DIO path.
- The page should still be able to be truncated/hole punched/unlinked: this
  works as is.
- Users should be notified when the page is accessed, e.g. by read/write, page
  fault and other paths (compression, encryption, etc.).
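
The state transition described by these requirements can be sketched as a
small userspace toy model (illustrative only, not kernel code; the struct and
function names here are made up for the sketch):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of a page-cache page's state around memory failure. */
struct toy_page {
	bool dirty;      /* needs writeback */
	bool hwpoison;   /* hardware-corrupted */
	int  refcount;   /* pins held on the page */
};

/* What the hwpoison handler would do to keep the page in the page cache. */
void toy_mark_poisoned(struct toy_page *p)
{
	p->hwpoison = true;
	p->dirty = false;   /* never write corrupted data back */
	p->refcount++;      /* extra pin so invalidation cannot drop it */
}

/* Writeback must skip the page once it is poisoned. */
bool toy_should_writeback(const struct toy_page *p)
{
	return p->dirty && !p->hwpoison;
}

/* Cache-drop style invalidation only frees unpinned clean pages. */
bool toy_can_invalidate(const struct toy_page *p)
{
	return p->refcount == 1 && !p->dirty;
}
```

A truncate/hole-punch path, by contrast, would be allowed to release the page
regardless of the poison state, matching the third bullet above.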

The scope of the last one is huge, since almost every filesystem needs to do it
once a page is returned from a page cache lookup.  There are a couple of
options to do it:

1. Check the hwpoison flag on every path; the most straightforward way.
2. Return NULL for a poisoned page from page cache lookup.  Most callsites
   already check for NULL, so this should be the least work, but the error
   handling in filesystems would just return -ENOMEM, an error code that would
   obviously confuse users.
3. To improve on #2, we could return an error pointer, e.g. ERR_PTR(-EIO), but
   this would involve a significant amount of code change as well, since all
   the paths need to check whether the pointer is an ERR_PTR, just like
   option #1.

I prototyped both #1 and #3, and it seems #3 requires more changes than #1.
For #3, an ERR_PTR is returned, so all callers need to check the return value,
otherwise an invalid pointer may be dereferenced; but not all callers really
care about the content of the page, for example, a partial truncate which just
zeroes the truncated range within one page.  Such paths need additional
modification if an ERR_PTR is returned.  And if callers have their own way to
handle problematic pages, we would need to add a new FGP flag to tell the FGP
functions to return the page pointer anyway.
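
The difference between the two calling conventions can be seen in a schematic
caller (a userspace sketch with stand-in types and macros, not the kernel's
real `struct page` or `IS_ERR()` implementation):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Illustrative stand-ins; the real kernel types and helpers differ. */
struct page { int hwpoison; };

#define MY_ERR_PTR(e)  ((struct page *)(long)(e))
#define MY_IS_ERR(p)   ((unsigned long)(p) >= (unsigned long)-4095)

/* Option #1: the lookup returns the page; every caller checks the flag. */
int use_page_option1(struct page *page)
{
	if (!page)
		return -ENOMEM;
	if (page->hwpoison)
		return -EIO;
	return 0;	/* safe to touch page contents */
}

/* Option #3: the lookup itself returns ERR_PTR(-EIO) for poisoned pages,
 * so every caller must test for an error pointer before dereferencing. */
int use_page_option3(struct page *page)
{
	if (!page)
		return -ENOMEM;
	if (MY_IS_ERR(page))
		return (int)(long)page;	/* propagate -EIO */
	return 0;
}
```

Either way, every lookup site gains one branch; option #3 additionally changes
the contract of the lookup function itself, which is why it touches more code.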

This may happen very rarely, but once it happens the consequence (data
corruption) can be very bad and very hard to debug.  This problem seems to
have been briefly discussed before, but no action was taken at that time. [2]

As the above investigation shows, it would take a huge amount of work to solve
the potential data loss for all filesystems.  But it is much easier for
in-memory filesystems, and such filesystems actually suffer more than the
others since even the data blocks are gone once truncated.  So this patchset
starts with shmem/tmpfs, taking option #1.

Patch #1: fix bugs in page fault and khugepaged.
Patch #2 and #3: refactor, cleanup and preparation.
Patch #4: keep the poisoned page in page cache and handle such case for all
          the paths.
Patch #5: the previous patches unblock page cache THP split, so this patch
          adds page cache THP split support.

Changelog v1 --> v2:
  * Incorporated the suggestion from Kirill to use a new page flag to
    indicate there is hwpoisoned subpage(s) in a THP. (patch #1)
  * Dropped patch #2 of v1.
  * Refactored the page refcount check logic of hwpoison per Naoya. (patch #2)
  * Removed unnecessary THP check per Naoya. (patch #3)
  * Incorporated the other comments for shmem from Naoya. (patch #4)


Yang Shi (5):
      mm: filemap: check if THP has hwpoisoned subpage for PMD page fault
      mm: hwpoison: refactor refcount check handling
      mm: hwpoison: remove the unnecessary THP check
      mm: shmem: don't truncate page if memory failure happens
      mm: hwpoison: handle non-anonymous THP correctly

 include/linux/page-flags.h |  19 ++++++++++
 mm/filemap.c               |  15 ++++----
 mm/huge_memory.c           |   2 ++
 mm/memory-failure.c        | 130 ++++++++++++++++++++++++++++++++++++++++++---------------------------
 mm/memory.c                |   9 +++++
 mm/page_alloc.c            |   4 ++-
 mm/shmem.c                 |  31 +++++++++++++++--
 mm/userfaultfd.c           |   5 +++
 8 files changed, 156 insertions(+), 59 deletions(-)


[1] https://lore.kernel.org/linux-mm/CAHbLzkqNPBh_sK09qfr4yu4WTFOzRy+MKj+PA7iG-adzi9zGsg@mail.gmail.com/T/#m0e959283380156f1d064456af01ae51fdff91265
[2] https://lore.kernel.org/lkml/20210318183350.GT3420@casper.infradead.org/


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [v2 PATCH 1/5] mm: filemap: check if THP has hwpoisoned subpage for PMD page fault
  2021-09-23  3:28 [RFC v2 PATCH 0/5] Solve silent data loss caused by poisoned page cache (shmem/tmpfs) Yang Shi
@ 2021-09-23  3:28 ` Yang Shi
  2021-09-23 14:39   ` Kirill A. Shutemov
  2021-09-23  3:28 ` [v2 PATCH 2/5] mm: hwpoison: refactor refcount check handling Yang Shi
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 11+ messages in thread
From: Yang Shi @ 2021-09-23  3:28 UTC (permalink / raw)
  To: naoya.horiguchi, hughd, kirill.shutemov, willy, peterx, osalvador, akpm
  Cc: shy828301, linux-mm, linux-fsdevel, linux-kernel

When handling shmem page fault the THP with corrupted subpage could be PMD
mapped if certain conditions are satisfied.  But kernel is supposed to
send SIGBUS when trying to map hwpoisoned page.

There are two paths which may do PMD map: fault around and regular fault.

Before commit f9ce0be71d1f ("mm: Cleanup faultaround and finish_fault() codepaths")
the thing was even worse in fault around path.  The THP could be PMD mapped as
long as the VMA fits regardless what subpage is accessed and corrupted.  After
this commit as long as head page is not corrupted the THP could be PMD mapped.

In the regulat fault path the THP could be PMD mapped as long as the corrupted
page is not accessed and the VMA fits.

This loophole could be fixed by iterating every subpage to check if any
of them is hwpoisoned or not, but it is somewhat costly in page fault path.

So introduce a new page flag called HasHWPoisoned on the first tail page.  It
indicates the THP has hwpoisoned subpage(s).  It is set if any subpage of THP
is found hwpoisoned by memory failure and cleared when the THP is freed or
split.
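
The idea can be modeled in userspace roughly as follows (illustrative toy
code; in the patch the real flag is PG_has_hwpoisoned, aliased to
PG_mappedtodisk on the first tail page, and the subpage count is assumed to be
512 for a PMD-sized THP):

```c
#include <assert.h>
#include <stdbool.h>

#define TOY_NR_SUBPAGES 512	/* subpages per PMD-sized THP, assumed */

/* Toy compound page: one flags word per subpage. */
struct toy_subpage { unsigned long flags; };

enum {
	TOY_PG_HWPOISON       = 1UL << 0,  /* on the corrupted subpage */
	TOY_PG_HAS_HWPOISONED = 1UL << 1,  /* summary flag on tail 1 */
};

/* Memory failure: poison one subpage, mark the compound on tail 1. */
void toy_memory_failure(struct toy_subpage thp[], int subpage)
{
	thp[subpage].flags |= TOY_PG_HWPOISON;
	thp[1].flags |= TOY_PG_HAS_HWPOISONED;
}

/* do_set_pmd()-style check: one flag test instead of scanning all 512. */
bool toy_can_pmd_map(const struct toy_subpage thp[])
{
	return !(thp[1].flags & TOY_PG_HAS_HWPOISONED);
}

/* THP split or free must clear the summary flag again. */
void toy_clear_on_split(struct toy_subpage thp[])
{
	thp[1].flags &= ~TOY_PG_HAS_HWPOISONED;
}
```

The point of the summary flag is exactly this O(1) check in the fault path;
after a split, each former subpage carries its own PG_hwpoison and the
summary flag is no longer meaningful.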

Cc: <stable@vger.kernel.org>
Suggested-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Yang Shi <shy828301@gmail.com>
---
 include/linux/page-flags.h | 19 +++++++++++++++++++
 mm/filemap.c               | 15 +++++++++------
 mm/huge_memory.c           |  2 ++
 mm/memory-failure.c        |  4 ++++
 mm/memory.c                |  9 +++++++++
 mm/page_alloc.c            |  4 +++-
 6 files changed, 46 insertions(+), 7 deletions(-)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index a558d67ee86f..a357b41b3057 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -171,6 +171,11 @@ enum pageflags {
 	/* Compound pages. Stored in first tail page's flags */
 	PG_double_map = PG_workingset,
 
+#ifdef CONFIG_MEMORY_FAILURE
+	/* Compound pages. Stored in first tail page's flags */
+	PG_has_hwpoisoned = PG_mappedtodisk,
+#endif
+
 	/* non-lru isolated movable page */
 	PG_isolated = PG_reclaim,
 
@@ -668,6 +673,20 @@ PAGEFLAG_FALSE(DoubleMap)
 	TESTSCFLAG_FALSE(DoubleMap)
 #endif
 
+#if defined(CONFIG_MEMORY_FAILURE) && defined(CONFIG_TRANSPARENT_HUGEPAGE)
+/*
+ * PageHasHWPoisoned indicates that at least one subpage is hwpoisoned in
+ * the compound page.
+ *
+ * This flag is set by hwpoison handler.  Cleared by THP split or free page.
+ */
+PAGEFLAG(HasHWPoisoned, has_hwpoisoned, PF_SECOND)
+	TESTSCFLAG(HasHWPoisoned, has_hwpoisoned, PF_SECOND)
+#else
+PAGEFLAG_FALSE(HasHWPoisoned)
+	TESTSCFLAG_FALSE(HasHWPoisoned)
+#endif
+
 /*
  * Check if a page is currently marked HWPoisoned. Note that this check is
  * best effort only and inherently racy: there is no way to synchronize with
diff --git a/mm/filemap.c b/mm/filemap.c
index dae481293b5d..740b7afe159a 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3195,12 +3195,14 @@ static bool filemap_map_pmd(struct vm_fault *vmf, struct page *page)
 	}
 
 	if (pmd_none(*vmf->pmd) && PageTransHuge(page)) {
-	    vm_fault_t ret = do_set_pmd(vmf, page);
-	    if (!ret) {
-		    /* The page is mapped successfully, reference consumed. */
-		    unlock_page(page);
-		    return true;
-	    }
+		vm_fault_t ret = do_set_pmd(vmf, page);
+		if (ret == VM_FAULT_FALLBACK)
+			goto out;
+		if (!ret) {
+			/* The page is mapped successfully, reference consumed. */
+			unlock_page(page);
+			return true;
+		}
 	}
 
 	if (pmd_none(*vmf->pmd)) {
@@ -3220,6 +3222,7 @@ static bool filemap_map_pmd(struct vm_fault *vmf, struct page *page)
 		return true;
 	}
 
+out:
 	return false;
 }
 
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 5e9ef0fc261e..0574b1613714 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2426,6 +2426,8 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	/* lock lru list/PageCompound, ref frozen by page_ref_freeze */
 	lruvec = lock_page_lruvec(head);
 
+	ClearPageHasHWPoisoned(head);
+
 	for (i = nr - 1; i >= 1; i--) {
 		__split_huge_page_tail(head, i, lruvec, list);
 		/* Some pages can be beyond EOF: drop them from page cache */
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 54879c339024..93ae0ce90ab8 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1663,6 +1663,10 @@ int memory_failure(unsigned long pfn, int flags)
 	}
 
 	orig_head = hpage = compound_head(p);
+
+	if (PageTransHuge(hpage))
+		SetPageHasHWPoisoned(orig_head);
+
 	num_poisoned_pages_inc();
 
 	/*
diff --git a/mm/memory.c b/mm/memory.c
index 25fc46e87214..738f4e1df81e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3905,6 +3905,15 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
 	if (compound_order(page) != HPAGE_PMD_ORDER)
 		return ret;
 
+	/*
+	 * Just back off if any subpage of a THP is corrupted, otherwise
+	 * the corrupted page may be mapped by PMD silently to escape the
+	 * check.  This kind of THP can only be PTE-mapped.  Access to
+	 * the corrupted subpage should trigger SIGBUS as expected.
+	 */
+	if (unlikely(PageHasHWPoisoned(page)))
+		return ret;
+
 	/*
 	 * Archs like ppc64 need additional space to store information
 	 * related to pte entry. Use the preallocated table for that.
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b37435c274cf..7f37652f0287 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1312,8 +1312,10 @@ static __always_inline bool free_pages_prepare(struct page *page,
 
 		VM_BUG_ON_PAGE(compound && compound_order(page) != order, page);
 
-		if (compound)
+		if (compound) {
 			ClearPageDoubleMap(page);
+			ClearPageHasHWPoisoned(page);
+		}
 		for (i = 1; i < (1 << order); i++) {
 			if (compound)
 				bad += free_tail_pages_check(page, page + i);
-- 
2.26.2



* [v2 PATCH 2/5] mm: hwpoison: refactor refcount check handling
  2021-09-23  3:28 [RFC v2 PATCH 0/5] Solve silent data loss caused by poisoned page cache (shmem/tmpfs) Yang Shi
  2021-09-23  3:28 ` [v2 PATCH 1/5] mm: filemap: check if THP has hwpoisoned subpage for PMD page fault Yang Shi
@ 2021-09-23  3:28 ` Yang Shi
  2021-09-23  3:28 ` [v2 PATCH 3/5] mm: hwpoison: remove the unnecessary THP check Yang Shi
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 11+ messages in thread
From: Yang Shi @ 2021-09-23  3:28 UTC (permalink / raw)
  To: naoya.horiguchi, hughd, kirill.shutemov, willy, peterx, osalvador, akpm
  Cc: shy828301, linux-mm, linux-fsdevel, linux-kernel

Memory failure reports failure if the page still has an extra pinned refcount,
other than the one from hwpoison, after the handler is done.  The check is not
actually necessary for all handlers, so move it into the specific handlers.
This makes the following patch, which keeps shmem pages in the page cache,
easier.
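
The check being moved amounts to the following arithmetic (a userspace sketch
of the has_extra_refcount() helper introduced below; the threshold logic
mirrors the patch, the names are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * After a handler runs, one refcount is always held by the hwpoison
 * code itself.  Some handlers legitimately leave one more (e.g. dirty
 * swap cache handled as MF_DELAYED, or a shmem page deliberately kept
 * in the page cache).  Anything beyond that means someone else still
 * holds the page and recovery must be reported as failed.
 */
bool toy_has_extra_refcount(int page_count, bool extra_pin_expected)
{
	int count = page_count - 1;		/* hwpoison's own ref */

	if (extra_pin_expected)
		count -= 1;			/* e.g. page cache ref */

	return count > 0;
}
```

Moving this into each handler lets a handler declare which extra pin it
expects, instead of page_action() special-casing me_swapcache_dirty().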

Suggested-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
Signed-off-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
Signed-off-by: Yang Shi <shy828301@gmail.com>
---
 mm/memory-failure.c | 93 +++++++++++++++++++++++++++++++--------------
 1 file changed, 64 insertions(+), 29 deletions(-)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 93ae0ce90ab8..7722197b2b9d 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -806,12 +806,44 @@ static int truncate_error_page(struct page *p, unsigned long pfn,
 	return ret;
 }
 
+struct page_state {
+	unsigned long mask;
+	unsigned long res;
+	enum mf_action_page_type type;
+
+	/* Callback ->action() has to unlock the relevant page inside it. */
+	int (*action)(struct page_state *ps, struct page *p);
+};
+
+/*
+ * Return true if page is still referenced by others, otherwise return
+ * false.
+ *
+ * The dec is true when one extra refcount is expected.
+ */
+static bool has_extra_refcount(struct page_state *ps, struct page *p,
+			       bool dec)
+{
+	int count = page_count(p) - 1;
+
+	if (dec)
+		count -= 1;
+
+	if (count > 0) {
+		pr_err("Memory failure: %#lx: %s still referenced by %d users\n",
+		       page_to_pfn(p), action_page_types[ps->type], count);
+		return true;
+	}
+
+	return false;
+}
+
 /*
  * Error hit kernel page.
  * Do nothing, try to be lucky and not touch this instead. For a few cases we
  * could be more sophisticated.
  */
-static int me_kernel(struct page *p, unsigned long pfn)
+static int me_kernel(struct page_state *ps, struct page *p)
 {
 	unlock_page(p);
 	return MF_IGNORED;
@@ -820,9 +852,9 @@ static int me_kernel(struct page *p, unsigned long pfn)
 /*
  * Page in unknown state. Do nothing.
  */
-static int me_unknown(struct page *p, unsigned long pfn)
+static int me_unknown(struct page_state *ps, struct page *p)
 {
-	pr_err("Memory failure: %#lx: Unknown page state\n", pfn);
+	pr_err("Memory failure: %#lx: Unknown page state\n", page_to_pfn(p));
 	unlock_page(p);
 	return MF_FAILED;
 }
@@ -830,7 +862,7 @@ static int me_unknown(struct page *p, unsigned long pfn)
 /*
  * Clean (or cleaned) page cache page.
  */
-static int me_pagecache_clean(struct page *p, unsigned long pfn)
+static int me_pagecache_clean(struct page_state *ps, struct page *p)
 {
 	int ret;
 	struct address_space *mapping;
@@ -867,9 +899,13 @@ static int me_pagecache_clean(struct page *p, unsigned long pfn)
 	 *
 	 * Open: to take i_rwsem or not for this? Right now we don't.
 	 */
-	ret = truncate_error_page(p, pfn, mapping);
+	ret = truncate_error_page(p, page_to_pfn(p), mapping);
 out:
 	unlock_page(p);
+
+	if (has_extra_refcount(ps, p, false))
+		ret = MF_FAILED;
+
 	return ret;
 }
 
@@ -878,7 +914,7 @@ static int me_pagecache_clean(struct page *p, unsigned long pfn)
  * Issues: when the error hit a hole page the error is not properly
  * propagated.
  */
-static int me_pagecache_dirty(struct page *p, unsigned long pfn)
+static int me_pagecache_dirty(struct page_state *ps, struct page *p)
 {
 	struct address_space *mapping = page_mapping(p);
 
@@ -922,7 +958,7 @@ static int me_pagecache_dirty(struct page *p, unsigned long pfn)
 		mapping_set_error(mapping, -EIO);
 	}
 
-	return me_pagecache_clean(p, pfn);
+	return me_pagecache_clean(ps, p);
 }
 
 /*
@@ -944,9 +980,10 @@ static int me_pagecache_dirty(struct page *p, unsigned long pfn)
  * Clean swap cache pages can be directly isolated. A later page fault will
  * bring in the known good data from disk.
  */
-static int me_swapcache_dirty(struct page *p, unsigned long pfn)
+static int me_swapcache_dirty(struct page_state *ps, struct page *p)
 {
 	int ret;
+	bool dec = false;
 
 	ClearPageDirty(p);
 	/* Trigger EIO in shmem: */
@@ -954,10 +991,17 @@ static int me_swapcache_dirty(struct page *p, unsigned long pfn)
 
 	ret = delete_from_lru_cache(p) ? MF_FAILED : MF_DELAYED;
 	unlock_page(p);
+
+	if (ret == MF_DELAYED)
+		dec = true;
+
+	if (has_extra_refcount(ps, p, dec))
+		ret = MF_FAILED;
+
 	return ret;
 }
 
-static int me_swapcache_clean(struct page *p, unsigned long pfn)
+static int me_swapcache_clean(struct page_state *ps, struct page *p)
 {
 	int ret;
 
@@ -965,6 +1009,10 @@ static int me_swapcache_clean(struct page *p, unsigned long pfn)
 
 	ret = delete_from_lru_cache(p) ? MF_FAILED : MF_RECOVERED;
 	unlock_page(p);
+
+	if (has_extra_refcount(ps, p, false))
+		ret = MF_FAILED;
+
 	return ret;
 }
 
@@ -974,7 +1022,7 @@ static int me_swapcache_clean(struct page *p, unsigned long pfn)
  * - Error on hugepage is contained in hugepage unit (not in raw page unit.)
  *   To narrow down kill region to one page, we need to break up pmd.
  */
-static int me_huge_page(struct page *p, unsigned long pfn)
+static int me_huge_page(struct page_state *ps, struct page *p)
 {
 	int res;
 	struct page *hpage = compound_head(p);
@@ -985,7 +1033,7 @@ static int me_huge_page(struct page *p, unsigned long pfn)
 
 	mapping = page_mapping(hpage);
 	if (mapping) {
-		res = truncate_error_page(hpage, pfn, mapping);
+		res = truncate_error_page(hpage, page_to_pfn(p), mapping);
 		unlock_page(hpage);
 	} else {
 		res = MF_FAILED;
@@ -1003,6 +1051,9 @@ static int me_huge_page(struct page *p, unsigned long pfn)
 		}
 	}
 
+	if (has_extra_refcount(ps, p, false))
+		res = MF_FAILED;
+
 	return res;
 }
 
@@ -1028,14 +1079,7 @@ static int me_huge_page(struct page *p, unsigned long pfn)
 #define slab		(1UL << PG_slab)
 #define reserved	(1UL << PG_reserved)
 
-static struct page_state {
-	unsigned long mask;
-	unsigned long res;
-	enum mf_action_page_type type;
-
-	/* Callback ->action() has to unlock the relevant page inside it. */
-	int (*action)(struct page *p, unsigned long pfn);
-} error_states[] = {
+static struct page_state error_states[] = {
 	{ reserved,	reserved,	MF_MSG_KERNEL,	me_kernel },
 	/*
 	 * free pages are specially detected outside this table:
@@ -1095,19 +1139,10 @@ static int page_action(struct page_state *ps, struct page *p,
 			unsigned long pfn)
 {
 	int result;
-	int count;
 
 	/* page p should be unlocked after returning from ps->action().  */
-	result = ps->action(p, pfn);
+	result = ps->action(ps, p);
 
-	count = page_count(p) - 1;
-	if (ps->action == me_swapcache_dirty && result == MF_DELAYED)
-		count--;
-	if (count > 0) {
-		pr_err("Memory failure: %#lx: %s still referenced by %d users\n",
-		       pfn, action_page_types[ps->type], count);
-		result = MF_FAILED;
-	}
 	action_result(pfn, ps->type, result);
 
 	/* Could do more checks here if page looks ok */
-- 
2.26.2



* [v2 PATCH 3/5] mm: hwpoison: remove the unnecessary THP check
  2021-09-23  3:28 [RFC v2 PATCH 0/5] Solve silent data loss caused by poisoned page cache (shmem/tmpfs) Yang Shi
  2021-09-23  3:28 ` [v2 PATCH 1/5] mm: filemap: check if THP has hwpoisoned subpage for PMD page fault Yang Shi
  2021-09-23  3:28 ` [v2 PATCH 2/5] mm: hwpoison: refactor refcount check handling Yang Shi
@ 2021-09-23  3:28 ` Yang Shi
  2021-09-23  3:28 ` [v2 PATCH 4/5] mm: shmem: don't truncate page if memory failure happens Yang Shi
  2021-09-23  3:28 ` [v2 PATCH 5/5] mm: hwpoison: handle non-anonymous THP correctly Yang Shi
  4 siblings, 0 replies; 11+ messages in thread
From: Yang Shi @ 2021-09-23  3:28 UTC (permalink / raw)
  To: naoya.horiguchi, hughd, kirill.shutemov, willy, peterx, osalvador, akpm
  Cc: shy828301, linux-mm, linux-fsdevel, linux-kernel

When handling a THP, hwpoison checked whether the THP was in its allocation or
freeing stage, since hwpoison may otherwise mistreat it as a hugetlb page.
After commit 415c64c1453a ("mm/memory-failure: split thp earlier in memory
error handling") the problem has been fixed, so this check is no longer
needed.  Remove it.  The side effect of the removal is that hwpoison may
report "unsplit THP" instead of "unknown error" for shmem THPs.  That does not
seem like a big deal.

Suggested-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
Signed-off-by: Yang Shi <shy828301@gmail.com>
---
 mm/memory-failure.c | 14 --------------
 1 file changed, 14 deletions(-)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 7722197b2b9d..5c7f1c2aabd9 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1182,20 +1182,6 @@ static int __get_hwpoison_page(struct page *page)
 	if (!HWPoisonHandlable(head))
 		return -EBUSY;
 
-	if (PageTransHuge(head)) {
-		/*
-		 * Non anonymous thp exists only in allocation/free time. We
-		 * can't handle such a case correctly, so let's give it up.
-		 * This should be better than triggering BUG_ON when kernel
-		 * tries to touch the "partially handled" page.
-		 */
-		if (!PageAnon(head)) {
-			pr_err("Memory failure: %#lx: non anonymous thp\n",
-				page_to_pfn(page));
-			return 0;
-		}
-	}
-
 	if (get_page_unless_zero(head)) {
 		if (head == compound_head(page))
 			return 1;
-- 
2.26.2



* [v2 PATCH 4/5] mm: shmem: don't truncate page if memory failure happens
  2021-09-23  3:28 [RFC v2 PATCH 0/5] Solve silent data loss caused by poisoned page cache (shmem/tmpfs) Yang Shi
                   ` (2 preceding siblings ...)
  2021-09-23  3:28 ` [v2 PATCH 3/5] mm: hwpoison: remove the unnecessary THP check Yang Shi
@ 2021-09-23  3:28 ` Yang Shi
  2021-09-23  3:28 ` [v2 PATCH 5/5] mm: hwpoison: handle non-anonymous THP correctly Yang Shi
  4 siblings, 0 replies; 11+ messages in thread
From: Yang Shi @ 2021-09-23  3:28 UTC (permalink / raw)
  To: naoya.horiguchi, hughd, kirill.shutemov, willy, peterx, osalvador, akpm
  Cc: shy828301, linux-mm, linux-fsdevel, linux-kernel

The current behavior of memory failure is to truncate the page cache page
regardless of whether it is dirty or clean.  If the page is dirty, later
accesses will get stale data from disk without any notification to the users,
which may cause silent data loss.  It is even worse for shmem: since shmem is
an in-memory filesystem, truncating the page cache means discarding the data
blocks, and later reads would return all zeros.

The right approach is to keep the corrupted page in the page cache; any later
access would return an error for syscalls or SIGBUS for page faults, until the
file is truncated, hole punched or removed.  Regular storage-backed
filesystems would be more complicated, so this patch focuses on shmem.  This
also unblocks support for soft offlining shmem THPs.
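
The intended user-visible semantics can be sketched like this (an illustrative
userspace model of the access rules, not the shmem implementation):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Toy shmem page: either gone (hole), poisoned, or readable. */
struct toy_shmem_page {
	bool present;	/* still in the page cache */
	bool hwpoison;
};

/* read()/write()-style access: report an error, never stale zeroes. */
int toy_shmem_access(const struct toy_shmem_page *p)
{
	if (!p->present)
		return 0;	/* a hole legitimately reads as zeroes */
	if (p->hwpoison)
		return -EIO;	/* corrupted data must not be returned */
	return 0;
}

/* Only truncate/hole-punch/unlink finally releases the poisoned page. */
void toy_shmem_truncate(struct toy_shmem_page *p)
{
	p->present = false;
	p->hwpoison = false;
}
```

The key contrast with the old behavior is the -EIO: previously the poisoned
page was truncated immediately, so a later read silently saw zeroes.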

Signed-off-by: Yang Shi <shy828301@gmail.com>
---
 mm/memory-failure.c | 10 +++++++++-
 mm/shmem.c          | 31 +++++++++++++++++++++++++++++--
 mm/userfaultfd.c    |  5 +++++
 3 files changed, 43 insertions(+), 3 deletions(-)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 5c7f1c2aabd9..3824bc708e55 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -57,6 +57,7 @@
 #include <linux/ratelimit.h>
 #include <linux/page-isolation.h>
 #include <linux/pagewalk.h>
+#include <linux/shmem_fs.h>
 #include "internal.h"
 #include "ras/ras_event.h"
 
@@ -866,6 +867,7 @@ static int me_pagecache_clean(struct page_state *ps, struct page *p)
 {
 	int ret;
 	struct address_space *mapping;
+	bool dec;
 
 	delete_from_lru_cache(p);
 
@@ -894,6 +896,12 @@ static int me_pagecache_clean(struct page_state *ps, struct page *p)
 		goto out;
 	}
 
+	/*
+	 * The shmem page is kept in the page cache instead of being
+	 * truncated, so expect one extra refcount from the page cache.
+	 */
+	dec = shmem_mapping(mapping);
+
 	/*
 	 * Truncation is a bit tricky. Enable it per file system for now.
 	 *
@@ -903,7 +911,7 @@ static int me_pagecache_clean(struct page_state *ps, struct page *p)
 out:
 	unlock_page(p);
 
-	if (has_extra_refcount(ps, p, false))
+	if (has_extra_refcount(ps, p, dec))
 		ret = MF_FAILED;
 
 	return ret;
diff --git a/mm/shmem.c b/mm/shmem.c
index 88742953532c..75c36b6a405a 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2456,6 +2456,7 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
 	struct inode *inode = mapping->host;
 	struct shmem_inode_info *info = SHMEM_I(inode);
 	pgoff_t index = pos >> PAGE_SHIFT;
+	int ret = 0;
 
 	/* i_rwsem is held by caller */
 	if (unlikely(info->seals & (F_SEAL_GROW |
@@ -2466,7 +2467,17 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
 			return -EPERM;
 	}
 
-	return shmem_getpage(inode, index, pagep, SGP_WRITE);
+	ret = shmem_getpage(inode, index, pagep, SGP_WRITE);
+
+	if (*pagep) {
+		if (PageHWPoison(*pagep)) {
+			unlock_page(*pagep);
+			put_page(*pagep);
+			ret = -EIO;
+		}
+	}
+
+	return ret;
 }
 
 static int
@@ -2555,6 +2566,11 @@ static ssize_t shmem_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
 			unlock_page(page);
 		}
 
+		if (page && PageHWPoison(page)) {
+			error = -EIO;
+			break;
+		}
+
 		/*
 		 * We must evaluate after, since reads (unlike writes)
 		 * are called without i_rwsem protection against truncate
@@ -3772,6 +3788,13 @@ static void shmem_destroy_inodecache(void)
 	kmem_cache_destroy(shmem_inode_cachep);
 }
 
+/* Keep the page in page cache instead of truncating it */
+static int shmem_error_remove_page(struct address_space *mapping,
+				   struct page *page)
+{
+	return 0;
+}
+
 const struct address_space_operations shmem_aops = {
 	.writepage	= shmem_writepage,
 	.set_page_dirty	= __set_page_dirty_no_writeback,
@@ -3782,7 +3805,7 @@ const struct address_space_operations shmem_aops = {
 #ifdef CONFIG_MIGRATION
 	.migratepage	= migrate_page,
 #endif
-	.error_remove_page = generic_error_remove_page,
+	.error_remove_page = shmem_error_remove_page,
 };
 EXPORT_SYMBOL(shmem_aops);
 
@@ -4193,6 +4216,10 @@ struct page *shmem_read_mapping_page_gfp(struct address_space *mapping,
 		page = ERR_PTR(error);
 	else
 		unlock_page(page);
+
+	if (!IS_ERR(page) && PageHWPoison(page))
+		page = ERR_PTR(-EIO);
+
 	return page;
 #else
 	/*
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 7a9008415534..b688d5327177 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -233,6 +233,11 @@ static int mcontinue_atomic_pte(struct mm_struct *dst_mm,
 		goto out;
 	}
 
+	if (PageHWPoison(page)) {
+		ret = -EIO;
+		goto out_release;
+	}
+
 	ret = mfill_atomic_install_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
 				       page, false, wp_copy);
 	if (ret)
-- 
2.26.2



* [v2 PATCH 5/5] mm: hwpoison: handle non-anonymous THP correctly
  2021-09-23  3:28 [RFC v2 PATCH 0/5] Solve silent data loss caused by poisoned page cache (shmem/tmpfs) Yang Shi
                   ` (3 preceding siblings ...)
  2021-09-23  3:28 ` [v2 PATCH 4/5] mm: shmem: don't truncate page if memory failure happens Yang Shi
@ 2021-09-23  3:28 ` Yang Shi
  4 siblings, 0 replies; 11+ messages in thread
From: Yang Shi @ 2021-09-23  3:28 UTC (permalink / raw)
  To: naoya.horiguchi, hughd, kirill.shutemov, willy, peterx, osalvador, akpm
  Cc: shy828301, linux-mm, linux-fsdevel, linux-kernel

Currently hwpoison doesn't handle non-anonymous THPs, but since v4.8 THP
support for tmpfs and read-only file cache has been added.  They can now be
offlined by splitting the THP, just like anonymous THPs.

Signed-off-by: Yang Shi <shy828301@gmail.com>
---
 mm/memory-failure.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 3824bc708e55..e60224b3a315 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1443,14 +1443,11 @@ static int identify_page_state(unsigned long pfn, struct page *p,
 static int try_to_split_thp_page(struct page *page, const char *msg)
 {
 	lock_page(page);
-	if (!PageAnon(page) || unlikely(split_huge_page(page))) {
+	if (unlikely(split_huge_page(page))) {
 		unsigned long pfn = page_to_pfn(page);
 
 		unlock_page(page);
-		if (!PageAnon(page))
-			pr_info("%s: %#lx: non anonymous thp\n", msg, pfn);
-		else
-			pr_info("%s: %#lx: thp split failed\n", msg, pfn);
+		pr_info("%s: %#lx: thp split failed\n", msg, pfn);
 		put_page(page);
 		return -EBUSY;
 	}
-- 
2.26.2



* Re: [v2 PATCH 1/5] mm: filemap: check if THP has hwpoisoned subpage for PMD page fault
  2021-09-23  3:28 ` [v2 PATCH 1/5] mm: filemap: check if THP has hwpoisoned subpage for PMD page fault Yang Shi
@ 2021-09-23 14:39   ` Kirill A. Shutemov
  2021-09-23 17:15     ` Yang Shi
  0 siblings, 1 reply; 11+ messages in thread
From: Kirill A. Shutemov @ 2021-09-23 14:39 UTC (permalink / raw)
  To: Yang Shi
  Cc: naoya.horiguchi, hughd, kirill.shutemov, willy, peterx,
	osalvador, akpm, linux-mm, linux-fsdevel, linux-kernel

On Wed, Sep 22, 2021 at 08:28:26PM -0700, Yang Shi wrote:
> When handling shmem page fault the THP with corrupted subpage could be PMD
> mapped if certain conditions are satisfied.  But kernel is supposed to
> send SIGBUS when trying to map hwpoisoned page.
> 
> There are two paths which may do PMD map: fault around and regular fault.
> 
> Before commit f9ce0be71d1f ("mm: Cleanup faultaround and finish_fault() codepaths")
> the thing was even worse in fault around path.  The THP could be PMD mapped as
> long as the VMA fits regardless what subpage is accessed and corrupted.  After
> this commit as long as head page is not corrupted the THP could be PMD mapped.
> 
> In the regulat fault path the THP could be PMD mapped as long as the corrupted

s/regulat/regular/

> page is not accessed and the VMA fits.
> 
> This loophole could be fixed by iterating every subpage to check if any
> of them is hwpoisoned or not, but it is somewhat costly in page fault path.
> 
> So introduce a new page flag called HasHWPoisoned on the first tail page.  It
> indicates the THP has hwpoisoned subpage(s).  It is set if any subpage of THP
> is found hwpoisoned by memory failure and cleared when the THP is freed or
> split.
> 
> Cc: <stable@vger.kernel.org>
> Suggested-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> Signed-off-by: Yang Shi <shy828301@gmail.com>
> ---

...

> diff --git a/mm/filemap.c b/mm/filemap.c
> index dae481293b5d..740b7afe159a 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -3195,12 +3195,14 @@ static bool filemap_map_pmd(struct vm_fault *vmf, struct page *page)
>  	}
>  
>  	if (pmd_none(*vmf->pmd) && PageTransHuge(page)) {
> -	    vm_fault_t ret = do_set_pmd(vmf, page);
> -	    if (!ret) {
> -		    /* The page is mapped successfully, reference consumed. */
> -		    unlock_page(page);
> -		    return true;
> -	    }
> +		vm_fault_t ret = do_set_pmd(vmf, page);
> +		if (ret == VM_FAULT_FALLBACK)
> +			goto out;

Hm.. What? I don't get it. Who will establish page table in the pmd then?

> +		if (!ret) {
> +			/* The page is mapped successfully, reference consumed. */
> +			unlock_page(page);
> +			return true;
> +		}
>  	}
>  
>  	if (pmd_none(*vmf->pmd)) {
> @@ -3220,6 +3222,7 @@ static bool filemap_map_pmd(struct vm_fault *vmf, struct page *page)
>  		return true;
>  	}
>  
> +out:
>  	return false;
>  }
>  
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 5e9ef0fc261e..0574b1613714 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2426,6 +2426,8 @@ static void __split_huge_page(struct page *page, struct list_head *list,
>  	/* lock lru list/PageCompound, ref frozen by page_ref_freeze */
>  	lruvec = lock_page_lruvec(head);
>  
> +	ClearPageHasHWPoisoned(head);
> +

Do we serialize the new flag with lock_page() or what? I mean what
prevents the flag being set again after this point, but before
ClearPageCompound()?

-- 
 Kirill A. Shutemov

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [v2 PATCH 1/5] mm: filemap: check if THP has hwpoisoned subpage for PMD page fault
  2021-09-23 14:39   ` Kirill A. Shutemov
@ 2021-09-23 17:15     ` Yang Shi
  2021-09-23 20:39       ` Yang Shi
  0 siblings, 1 reply; 11+ messages in thread
From: Yang Shi @ 2021-09-23 17:15 UTC (permalink / raw)
  To: Kirill A. Shutemov
  Cc: HORIGUCHI NAOYA(堀口 直也),
	Hugh Dickins, Kirill A. Shutemov, Matthew Wilcox, Peter Xu,
	Oscar Salvador, Andrew Morton, Linux MM,
	Linux FS-devel Mailing List, Linux Kernel Mailing List

On Thu, Sep 23, 2021 at 7:39 AM Kirill A. Shutemov <kirill@shutemov.name> wrote:
>
> On Wed, Sep 22, 2021 at 08:28:26PM -0700, Yang Shi wrote:
> > When handling shmem page fault the THP with corrupted subpage could be PMD
> > mapped if certain conditions are satisfied.  But kernel is supposed to
> > send SIGBUS when trying to map hwpoisoned page.
> >
> > There are two paths which may do PMD map: fault around and regular fault.
> >
> > Before commit f9ce0be71d1f ("mm: Cleanup faultaround and finish_fault() codepaths")
> > the thing was even worse in fault around path.  The THP could be PMD mapped as
> > long as the VMA fits regardless what subpage is accessed and corrupted.  After
> > this commit as long as head page is not corrupted the THP could be PMD mapped.
> >
> > In the regulat fault path the THP could be PMD mapped as long as the corrupted
>
> s/regulat/regular/
>
> > page is not accessed and the VMA fits.
> >
> > This loophole could be fixed by iterating every subpage to check if any
> > of them is hwpoisoned or not, but it is somewhat costly in page fault path.
> >
> > So introduce a new page flag called HasHWPoisoned on the first tail page.  It
> > indicates the THP has hwpoisoned subpage(s).  It is set if any subpage of THP
> > is found hwpoisoned by memory failure and cleared when the THP is freed or
> > split.
> >
> > Cc: <stable@vger.kernel.org>
> > Suggested-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> > Signed-off-by: Yang Shi <shy828301@gmail.com>
> > ---
>
> ...
>
> > diff --git a/mm/filemap.c b/mm/filemap.c
> > index dae481293b5d..740b7afe159a 100644
> > --- a/mm/filemap.c
> > +++ b/mm/filemap.c
> > @@ -3195,12 +3195,14 @@ static bool filemap_map_pmd(struct vm_fault *vmf, struct page *page)
> >       }
> >
> >       if (pmd_none(*vmf->pmd) && PageTransHuge(page)) {
> > -         vm_fault_t ret = do_set_pmd(vmf, page);
> > -         if (!ret) {
> > -                 /* The page is mapped successfully, reference consumed. */
> > -                 unlock_page(page);
> > -                 return true;
> > -         }
> > +             vm_fault_t ret = do_set_pmd(vmf, page);
> > +             if (ret == VM_FAULT_FALLBACK)
> > +                     goto out;
>
> Hm.. What? I don't get it. Who will establish page table in the pmd then?

Aha, yeah. It should jump to the below PMD populate section. Will fix
it in the next version.
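Concretely, the flow being described could look like the following (a pseudocode sketch against the v2 hunk quoted above, not the actual v3 patch): on VM_FAULT_FALLBACK, fall through to the pmd_none() branch below so a page table still gets installed in the pmd, rather than bailing out before the populate step.

```c
/* Pseudocode sketch only -- not a tested patch. */
if (pmd_none(*vmf->pmd) && PageTransHuge(page)) {
	vm_fault_t ret = do_set_pmd(vmf, page);
	if (!ret) {
		/* Mapped as a PMD; reference consumed. */
		unlock_page(page);
		return true;
	}
	/* VM_FAULT_FALLBACK: deliberately no "goto out" here, so the
	 * pmd_none() branch below can populate a page table instead. */
}

if (pmd_none(*vmf->pmd)) {
	/* Populate the pmd with a page table page, as before. */
	...
}
```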

>
> > +             if (!ret) {
> > +                     /* The page is mapped successfully, reference consumed. */
> > +                     unlock_page(page);
> > +                     return true;
> > +             }
> >       }
> >
> >       if (pmd_none(*vmf->pmd)) {
> > @@ -3220,6 +3222,7 @@ static bool filemap_map_pmd(struct vm_fault *vmf, struct page *page)
> >               return true;
> >       }
> >
> > +out:
> >       return false;
> >  }
> >
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index 5e9ef0fc261e..0574b1613714 100644
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
> > @@ -2426,6 +2426,8 @@ static void __split_huge_page(struct page *page, struct list_head *list,
> >       /* lock lru list/PageCompound, ref frozen by page_ref_freeze */
> >       lruvec = lock_page_lruvec(head);
> >
> > +     ClearPageHasHWPoisoned(head);
> > +
>
> Do we serialize the new flag with lock_page() or what? I mean what
> prevents the flag being set again after this point, but before
> ClearPageCompound()?

No, not in this patch. But I think we could use refcount. THP split
would freeze refcount and the split is guaranteed to succeed after
that point, so refcount can be checked in memory failure. The
SetPageHasHWPoisoned() call could be moved to __get_hwpoison_page()
when get_page_unless_zero() bumps the refcount successfully. If the

refcount is zero it means the THP is under split or being freed, we
don't care about these two cases.

The THP might be mapped before this flag is set, but the process will
be killed later, so it seems fine.

>
> --
>  Kirill A. Shutemov


* Re: [v2 PATCH 1/5] mm: filemap: check if THP has hwpoisoned subpage for PMD page fault
  2021-09-23 17:15     ` Yang Shi
@ 2021-09-23 20:39       ` Yang Shi
  2021-09-24  9:26         ` Kirill A. Shutemov
  0 siblings, 1 reply; 11+ messages in thread
From: Yang Shi @ 2021-09-23 20:39 UTC (permalink / raw)
  To: Kirill A. Shutemov
  Cc: HORIGUCHI NAOYA(堀口 直也),
	Hugh Dickins, Kirill A. Shutemov, Matthew Wilcox, Peter Xu,
	Oscar Salvador, Andrew Morton, Linux MM,
	Linux FS-devel Mailing List, Linux Kernel Mailing List

On Thu, Sep 23, 2021 at 10:15 AM Yang Shi <shy828301@gmail.com> wrote:
>
> On Thu, Sep 23, 2021 at 7:39 AM Kirill A. Shutemov <kirill@shutemov.name> wrote:
> >
> > On Wed, Sep 22, 2021 at 08:28:26PM -0700, Yang Shi wrote:
> > > When handling shmem page fault the THP with corrupted subpage could be PMD
> > > mapped if certain conditions are satisfied.  But kernel is supposed to
> > > send SIGBUS when trying to map hwpoisoned page.
> > >
> > > There are two paths which may do PMD map: fault around and regular fault.
> > >
> > > Before commit f9ce0be71d1f ("mm: Cleanup faultaround and finish_fault() codepaths")
> > > the thing was even worse in fault around path.  The THP could be PMD mapped as
> > > long as the VMA fits regardless what subpage is accessed and corrupted.  After
> > > this commit as long as head page is not corrupted the THP could be PMD mapped.
> > >
> > > In the regulat fault path the THP could be PMD mapped as long as the corrupted
> >
> > s/regulat/regular/
> >
> > > page is not accessed and the VMA fits.
> > >
> > > This loophole could be fixed by iterating every subpage to check if any
> > > of them is hwpoisoned or not, but it is somewhat costly in page fault path.
> > >
> > > So introduce a new page flag called HasHWPoisoned on the first tail page.  It
> > > indicates the THP has hwpoisoned subpage(s).  It is set if any subpage of THP
> > > is found hwpoisoned by memory failure and cleared when the THP is freed or
> > > split.
> > >
> > > Cc: <stable@vger.kernel.org>
> > > Suggested-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> > > Signed-off-by: Yang Shi <shy828301@gmail.com>
> > > ---
> >
> > ...
> >
> > > diff --git a/mm/filemap.c b/mm/filemap.c
> > > index dae481293b5d..740b7afe159a 100644
> > > --- a/mm/filemap.c
> > > +++ b/mm/filemap.c
> > > @@ -3195,12 +3195,14 @@ static bool filemap_map_pmd(struct vm_fault *vmf, struct page *page)
> > >       }
> > >
> > >       if (pmd_none(*vmf->pmd) && PageTransHuge(page)) {
> > > -         vm_fault_t ret = do_set_pmd(vmf, page);
> > > -         if (!ret) {
> > > -                 /* The page is mapped successfully, reference consumed. */
> > > -                 unlock_page(page);
> > > -                 return true;
> > > -         }
> > > +             vm_fault_t ret = do_set_pmd(vmf, page);
> > > +             if (ret == VM_FAULT_FALLBACK)
> > > +                     goto out;
> >
> > Hm.. What? I don't get it. Who will establish page table in the pmd then?
>
> Aha, yeah. It should jump to the below PMD populate section. Will fix
> it in the next version.
>
> >
> > > +             if (!ret) {
> > > +                     /* The page is mapped successfully, reference consumed. */
> > > +                     unlock_page(page);
> > > +                     return true;
> > > +             }
> > >       }
> > >
> > >       if (pmd_none(*vmf->pmd)) {
> > > @@ -3220,6 +3222,7 @@ static bool filemap_map_pmd(struct vm_fault *vmf, struct page *page)
> > >               return true;
> > >       }
> > >
> > > +out:
> > >       return false;
> > >  }
> > >
> > > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > > index 5e9ef0fc261e..0574b1613714 100644
> > > --- a/mm/huge_memory.c
> > > +++ b/mm/huge_memory.c
> > > @@ -2426,6 +2426,8 @@ static void __split_huge_page(struct page *page, struct list_head *list,
> > >       /* lock lru list/PageCompound, ref frozen by page_ref_freeze */
> > >       lruvec = lock_page_lruvec(head);
> > >
> > > +     ClearPageHasHWPoisoned(head);
> > > +
> >
> > Do we serialize the new flag with lock_page() or what? I mean what
> > prevents the flag being set again after this point, but before
> > ClearPageCompound()?
>
> No, not in this patch. But I think we could use refcount. THP split
> would freeze refcount and the split is guaranteed to succeed after
> that point, so refcount can be checked in memory failure. The
> SetPageHasHWPoisoned() call could be moved to __get_hwpoison_page()
> when get_page_unless_zero() bumps the refcount successfully. If the
> refcount is zero it means the THP is under split or being freed, we
> don't care about these two cases.

Setting the flag in __get_hwpoison_page() would make this patch depend
on patch #3. However, this patch probably will be backported to older
versions. To ease the backport, I'd like to have the refcount check in
the same place where THP is checked. So, something like "if
(PageTransHuge(hpage) && page_count(hpage) != 0)".

Then the call to set the flag could be moved to __get_hwpoison_page()
in the following patch (after patch #3). Does this sound good to you?
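For reference, the placement being proposed would look roughly like this (illustrative pseudocode, not a tested patch; as the rest of the thread observes, a bare page_count() check like this is racy against a concurrent split):

```c
/* Piggyback on the existing THP check in the memory-failure path:
 * a refcount of 0 means the THP is frozen for split or being freed,
 * so skip setting the flag on such a page. */
if (PageTransHuge(hpage) && page_count(hpage) != 0)
	SetPageHasHWPoisoned(hpage);
```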

>
> The THP might be mapped before this flag is set, but the process will
> be killed later, so it seems fine.
>
> >
> > --
> >  Kirill A. Shutemov


* Re: [v2 PATCH 1/5] mm: filemap: check if THP has hwpoisoned subpage for PMD page fault
  2021-09-23 20:39       ` Yang Shi
@ 2021-09-24  9:26         ` Kirill A. Shutemov
  2021-09-24 16:44           ` Yang Shi
  0 siblings, 1 reply; 11+ messages in thread
From: Kirill A. Shutemov @ 2021-09-24  9:26 UTC (permalink / raw)
  To: Yang Shi
  Cc: HORIGUCHI NAOYA(堀口 直也),
	Hugh Dickins, Kirill A. Shutemov, Matthew Wilcox, Peter Xu,
	Oscar Salvador, Andrew Morton, Linux MM,
	Linux FS-devel Mailing List, Linux Kernel Mailing List

On Thu, Sep 23, 2021 at 01:39:49PM -0700, Yang Shi wrote:
> On Thu, Sep 23, 2021 at 10:15 AM Yang Shi <shy828301@gmail.com> wrote:
> >
> > On Thu, Sep 23, 2021 at 7:39 AM Kirill A. Shutemov <kirill@shutemov.name> wrote:
> > >
> > > On Wed, Sep 22, 2021 at 08:28:26PM -0700, Yang Shi wrote:
> > > > When handling shmem page fault the THP with corrupted subpage could be PMD
> > > > mapped if certain conditions are satisfied.  But kernel is supposed to
> > > > send SIGBUS when trying to map hwpoisoned page.
> > > >
> > > > There are two paths which may do PMD map: fault around and regular fault.
> > > >
> > > > Before commit f9ce0be71d1f ("mm: Cleanup faultaround and finish_fault() codepaths")
> > > > the thing was even worse in fault around path.  The THP could be PMD mapped as
> > > > long as the VMA fits regardless what subpage is accessed and corrupted.  After
> > > > this commit as long as head page is not corrupted the THP could be PMD mapped.
> > > >
> > > > In the regulat fault path the THP could be PMD mapped as long as the corrupted
> > >
> > > s/regulat/regular/
> > >
> > > > page is not accessed and the VMA fits.
> > > >
> > > > This loophole could be fixed by iterating every subpage to check if any
> > > > of them is hwpoisoned or not, but it is somewhat costly in page fault path.
> > > >
> > > > So introduce a new page flag called HasHWPoisoned on the first tail page.  It
> > > > indicates the THP has hwpoisoned subpage(s).  It is set if any subpage of THP
> > > > is found hwpoisoned by memory failure and cleared when the THP is freed or
> > > > split.
> > > >
> > > > Cc: <stable@vger.kernel.org>
> > > > Suggested-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> > > > Signed-off-by: Yang Shi <shy828301@gmail.com>
> > > > ---
> > >
> > > ...
> > >
> > > > diff --git a/mm/filemap.c b/mm/filemap.c
> > > > index dae481293b5d..740b7afe159a 100644
> > > > --- a/mm/filemap.c
> > > > +++ b/mm/filemap.c
> > > > @@ -3195,12 +3195,14 @@ static bool filemap_map_pmd(struct vm_fault *vmf, struct page *page)
> > > >       }
> > > >
> > > >       if (pmd_none(*vmf->pmd) && PageTransHuge(page)) {
> > > > -         vm_fault_t ret = do_set_pmd(vmf, page);
> > > > -         if (!ret) {
> > > > -                 /* The page is mapped successfully, reference consumed. */
> > > > -                 unlock_page(page);
> > > > -                 return true;
> > > > -         }
> > > > +             vm_fault_t ret = do_set_pmd(vmf, page);
> > > > +             if (ret == VM_FAULT_FALLBACK)
> > > > +                     goto out;
> > >
> > > Hm.. What? I don't get it. Who will establish page table in the pmd then?
> >
> > Aha, yeah. It should jump to the below PMD populate section. Will fix
> > it in the next version.
> >
> > >
> > > > +             if (!ret) {
> > > > +                     /* The page is mapped successfully, reference consumed. */
> > > > +                     unlock_page(page);
> > > > +                     return true;
> > > > +             }
> > > >       }
> > > >
> > > >       if (pmd_none(*vmf->pmd)) {
> > > > @@ -3220,6 +3222,7 @@ static bool filemap_map_pmd(struct vm_fault *vmf, struct page *page)
> > > >               return true;
> > > >       }
> > > >
> > > > +out:
> > > >       return false;
> > > >  }
> > > >
> > > > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > > > index 5e9ef0fc261e..0574b1613714 100644
> > > > --- a/mm/huge_memory.c
> > > > +++ b/mm/huge_memory.c
> > > > @@ -2426,6 +2426,8 @@ static void __split_huge_page(struct page *page, struct list_head *list,
> > > >       /* lock lru list/PageCompound, ref frozen by page_ref_freeze */
> > > >       lruvec = lock_page_lruvec(head);
> > > >
> > > > +     ClearPageHasHWPoisoned(head);
> > > > +
> > >
> > > Do we serialize the new flag with lock_page() or what? I mean what
> > > prevents the flag being set again after this point, but before
> > > ClearPageCompound()?
> >
> > No, not in this patch. But I think we could use refcount. THP split
> > would freeze refcount and the split is guaranteed to succeed after
> > that point, so refcount can be checked in memory failure. The
> > SetPageHasHWPoisoned() call could be moved to __get_hwpoison_page()
> > when get_page_unless_zero() bumps the refcount successfully. If the
> > refcount is zero it means the THP is under split or being freed, we
> > don't care about these two cases.
> 
> Setting the flag in __get_hwpoison_page() would make this patch depend
> on patch #3. However, this patch probably will be backported to older
> versions. To ease the backport, I'd like to have the refcount check in
> the same place where THP is checked. So, something like "if
> (PageTransHuge(hpage) && page_count(hpage) != 0)".
> 
> Then the call to set the flag could be moved to __get_hwpoison_page()
> in the following patch (after patch #3). Does this sound good to you?

Could you show the code? I'm not sure I follow. The page_count(hpage) check
looks racy to me. What if a split happens just after the check?

-- 
 Kirill A. Shutemov


* Re: [v2 PATCH 1/5] mm: filemap: check if THP has hwpoisoned subpage for PMD page fault
  2021-09-24  9:26         ` Kirill A. Shutemov
@ 2021-09-24 16:44           ` Yang Shi
  0 siblings, 0 replies; 11+ messages in thread
From: Yang Shi @ 2021-09-24 16:44 UTC (permalink / raw)
  To: Kirill A. Shutemov
  Cc: HORIGUCHI NAOYA(堀口 直也),
	Hugh Dickins, Kirill A. Shutemov, Matthew Wilcox, Peter Xu,
	Oscar Salvador, Andrew Morton, Linux MM,
	Linux FS-devel Mailing List, Linux Kernel Mailing List

On Fri, Sep 24, 2021 at 2:26 AM Kirill A. Shutemov <kirill@shutemov.name> wrote:
>
> On Thu, Sep 23, 2021 at 01:39:49PM -0700, Yang Shi wrote:
> > On Thu, Sep 23, 2021 at 10:15 AM Yang Shi <shy828301@gmail.com> wrote:
> > >
> > > On Thu, Sep 23, 2021 at 7:39 AM Kirill A. Shutemov <kirill@shutemov.name> wrote:
> > > >
> > > > On Wed, Sep 22, 2021 at 08:28:26PM -0700, Yang Shi wrote:
> > > > > When handling shmem page fault the THP with corrupted subpage could be PMD
> > > > > mapped if certain conditions are satisfied.  But kernel is supposed to
> > > > > send SIGBUS when trying to map hwpoisoned page.
> > > > >
> > > > > There are two paths which may do PMD map: fault around and regular fault.
> > > > >
> > > > > Before commit f9ce0be71d1f ("mm: Cleanup faultaround and finish_fault() codepaths")
> > > > > the thing was even worse in fault around path.  The THP could be PMD mapped as
> > > > > long as the VMA fits regardless what subpage is accessed and corrupted.  After
> > > > > this commit as long as head page is not corrupted the THP could be PMD mapped.
> > > > >
> > > > > In the regulat fault path the THP could be PMD mapped as long as the corrupted
> > > >
> > > > s/regulat/regular/
> > > >
> > > > > page is not accessed and the VMA fits.
> > > > >
> > > > > This loophole could be fixed by iterating every subpage to check if any
> > > > > of them is hwpoisoned or not, but it is somewhat costly in page fault path.
> > > > >
> > > > > So introduce a new page flag called HasHWPoisoned on the first tail page.  It
> > > > > indicates the THP has hwpoisoned subpage(s).  It is set if any subpage of THP
> > > > > is found hwpoisoned by memory failure and cleared when the THP is freed or
> > > > > split.
> > > > >
> > > > > Cc: <stable@vger.kernel.org>
> > > > > Suggested-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> > > > > Signed-off-by: Yang Shi <shy828301@gmail.com>
> > > > > ---
> > > >
> > > > ...
> > > >
> > > > > diff --git a/mm/filemap.c b/mm/filemap.c
> > > > > index dae481293b5d..740b7afe159a 100644
> > > > > --- a/mm/filemap.c
> > > > > +++ b/mm/filemap.c
> > > > > @@ -3195,12 +3195,14 @@ static bool filemap_map_pmd(struct vm_fault *vmf, struct page *page)
> > > > >       }
> > > > >
> > > > >       if (pmd_none(*vmf->pmd) && PageTransHuge(page)) {
> > > > > -         vm_fault_t ret = do_set_pmd(vmf, page);
> > > > > -         if (!ret) {
> > > > > -                 /* The page is mapped successfully, reference consumed. */
> > > > > -                 unlock_page(page);
> > > > > -                 return true;
> > > > > -         }
> > > > > +             vm_fault_t ret = do_set_pmd(vmf, page);
> > > > > +             if (ret == VM_FAULT_FALLBACK)
> > > > > +                     goto out;
> > > >
> > > > Hm.. What? I don't get it. Who will establish page table in the pmd then?
> > >
> > > Aha, yeah. It should jump to the below PMD populate section. Will fix
> > > it in the next version.
> > >
> > > >
> > > > > +             if (!ret) {
> > > > > +                     /* The page is mapped successfully, reference consumed. */
> > > > > +                     unlock_page(page);
> > > > > +                     return true;
> > > > > +             }
> > > > >       }
> > > > >
> > > > >       if (pmd_none(*vmf->pmd)) {
> > > > > @@ -3220,6 +3222,7 @@ static bool filemap_map_pmd(struct vm_fault *vmf, struct page *page)
> > > > >               return true;
> > > > >       }
> > > > >
> > > > > +out:
> > > > >       return false;
> > > > >  }
> > > > >
> > > > > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > > > > index 5e9ef0fc261e..0574b1613714 100644
> > > > > --- a/mm/huge_memory.c
> > > > > +++ b/mm/huge_memory.c
> > > > > @@ -2426,6 +2426,8 @@ static void __split_huge_page(struct page *page, struct list_head *list,
> > > > >       /* lock lru list/PageCompound, ref frozen by page_ref_freeze */
> > > > >       lruvec = lock_page_lruvec(head);
> > > > >
> > > > > +     ClearPageHasHWPoisoned(head);
> > > > > +
> > > >
> > > > Do we serialize the new flag with lock_page() or what? I mean what
> > > > prevents the flag being set again after this point, but before
> > > > ClearPageCompound()?
> > >
> > > No, not in this patch. But I think we could use refcount. THP split
> > > would freeze refcount and the split is guaranteed to succeed after
> > > that point, so refcount can be checked in memory failure. The
> > > SetPageHasHWPoisoned() call could be moved to __get_hwpoison_page()
> > > when get_page_unless_zero() bumps the refcount successfully. If the
> > > refcount is zero it means the THP is under split or being freed, we
> > > don't care about these two cases.
> >
> > Setting the flag in __get_hwpoison_page() would make this patch depend
> > on patch #3. However, this patch probably will be backported to older
> > versions. To ease the backport, I'd like to have the refcount check in
> > the same place where THP is checked. So, something like "if
> > (PageTransHuge(hpage) && page_count(hpage) != 0)".
> >
> > Then the call to set the flag could be moved to __get_hwpoison_page()
> > in the following patch (after patch #3). Does this sound good to you?
>
> Could you show the code? I'm not sure I follow. The page_count(hpage) check
> looks racy to me. What if a split happens just after the check?

Yes, it is racy. The flag has to be set after get_page_unless_zero().
Did some archeology, it seems patch #3 is also applicable to v4.9+.
So, the simplest way may be to have both patch #3 and this patch
backported to stable.
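Put together, the ordering settled on here can be sketched as follows (pseudocode, not the merged patch; the helper's shape and return convention are illustrative):

```c
/* Illustrative pseudocode.  Setting the flag only after
 * get_page_unless_zero() has pinned the page closes the race:
 * a THP whose refcount was frozen to 0 for split cannot be pinned,
 * so the flag can no longer be set between the clear in
 * __split_huge_page() and ClearPageCompound(). */
static int __get_hwpoison_page(struct page *page)
{
	struct page *head = compound_head(page);

	if (!get_page_unless_zero(head))
		return 0;	/* frozen for split, or being freed */

	if (PageTransHuge(head))
		SetPageHasHWPoisoned(head);	/* refcount pinned: safe */

	return 1;
}
```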



>
> --
>  Kirill A. Shutemov


Thread overview: 11+ messages
2021-09-23  3:28 [RFC v2 PATCH 0/5] Solve silent data loss caused by poisoned page cache (shmem/tmpfs) Yang Shi
2021-09-23  3:28 ` [v2 PATCH 1/5] mm: filemap: check if THP has hwpoisoned subpage for PMD page fault Yang Shi
2021-09-23 14:39   ` Kirill A. Shutemov
2021-09-23 17:15     ` Yang Shi
2021-09-23 20:39       ` Yang Shi
2021-09-24  9:26         ` Kirill A. Shutemov
2021-09-24 16:44           ` Yang Shi
2021-09-23  3:28 ` [v2 PATCH 2/5] mm: hwpoison: refactor refcount check handling Yang Shi
2021-09-23  3:28 ` [v2 PATCH 3/5] mm: hwpoison: remove the unnecessary THP check Yang Shi
2021-09-23  3:28 ` [v2 PATCH 4/5] mm: shmem: don't truncate page if memory failure happens Yang Shi
2021-09-23  3:28 ` [v2 PATCH 5/5] mm: hwpoison: handle non-anonymous THP correctly Yang Shi
