* + mm-khugepaged-recover-from-poisoned-file-backed-memory.patch added to mm-unstable branch
@ 2023-03-06 21:22 Andrew Morton
0 siblings, 0 replies; 5+ messages in thread
From: Andrew Morton @ 2023-03-06 21:22 UTC (permalink / raw)
To: mm-commits, wangkefeng.wang, tony.luck, tongtiangen, stevensd,
shy828301, osalvador, naoya.horiguchi, linmiaohe, kirill,
jiaqiyan, akpm
The patch titled
Subject: mm/khugepaged: recover from poisoned file-backed memory
has been added to the -mm mm-unstable branch. Its filename is
mm-khugepaged-recover-from-poisoned-file-backed-memory.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-khugepaged-recover-from-poisoned-file-backed-memory.patch
This patch will later appear in the mm-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days
------------------------------------------------------
From: Jiaqi Yan <jiaqiyan@google.com>
Subject: mm/khugepaged: recover from poisoned file-backed memory
Date: Sat, 4 Mar 2023 22:51:12 -0800
Make collapse_file roll back when copying pages fails. More concretely:
- extract copying operations into a separate loop
- postpone the updates for nr_none until both scanning and copying
succeeded
- postpone joining small xarray entries until both scanning and copying
succeeded
- postpone the update operations to NR_XXX_THPS until both scanning and
copying succeeded
- for non-SHMEM file, roll back filemap_nr_thps_inc if scan succeeded but
copying failed
Tested manually:
0. Enable khugepaged on system under test. Mount tmpfs at /mnt/ramdisk.
1. Start a two-thread application. Each thread allocates a chunk of
non-huge memory buffer from /mnt/ramdisk.
2. Pick 4 random buffer addresses (2 in each thread) and inject
uncorrectable memory errors at the corresponding physical addresses.
3. Signal both threads to make their memory buffers collapsible, i.e.,
by calling madvise(MADV_HUGEPAGE).
4. Wait, then check the kernel log: khugepaged is able to recover from
poisoned pages by skipping them.
5. Signal both threads to inspect their buffer contents and verify that
there is no data corruption.
Link: https://lkml.kernel.org/r/20230305065112.1932255-4-jiaqiyan@google.com
Signed-off-by: Jiaqi Yan <jiaqiyan@google.com>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: "Kirill A . Shutemov" <kirill@shutemov.name>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Naoya Horiguchi <naoya.horiguchi@nec.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Tong Tiangen <tongtiangen@huawei.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: David Stevens <stevensd@chromium.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
--- a/mm/khugepaged.c~mm-khugepaged-recover-from-poisoned-file-backed-memory
+++ a/mm/khugepaged.c
@@ -1890,6 +1890,9 @@ static int collapse_file(struct mm_struc
{
struct address_space *mapping = file->f_mapping;
struct page *hpage;
+ struct page *page;
+ struct page *tmp;
+ struct folio *folio;
pgoff_t index = 0, end = start + HPAGE_PMD_NR;
LIST_HEAD(pagelist);
XA_STATE_ORDER(xas, &mapping->i_pages, start, HPAGE_PMD_ORDER);
@@ -1934,8 +1937,7 @@ static int collapse_file(struct mm_struc
xas_set(&xas, start);
for (index = start; index < end; index++) {
- struct page *page = xas_next(&xas);
- struct folio *folio;
+ page = xas_next(&xas);
VM_BUG_ON(index != xas.xa_index);
if (is_shmem) {
@@ -2117,10 +2119,7 @@ out_unlock:
}
nr = thp_nr_pages(hpage);
- if (is_shmem)
- __mod_lruvec_page_state(hpage, NR_SHMEM_THPS, nr);
- else {
- __mod_lruvec_page_state(hpage, NR_FILE_THPS, nr);
+ if (!is_shmem) {
filemap_nr_thps_inc(mapping);
/*
* Paired with smp_mb() in do_dentry_open() to ensure
@@ -2131,21 +2130,10 @@ out_unlock:
smp_mb();
if (inode_is_open_for_write(mapping->host)) {
result = SCAN_FAIL;
- __mod_lruvec_page_state(hpage, NR_FILE_THPS, -nr);
filemap_nr_thps_dec(mapping);
goto xa_locked;
}
}
-
- if (nr_none) {
- __mod_lruvec_page_state(hpage, NR_FILE_PAGES, nr_none);
- /* nr_none is always 0 for non-shmem. */
- __mod_lruvec_page_state(hpage, NR_SHMEM, nr_none);
- }
-
- /* Join all the small entries into a single multi-index entry */
- xas_set_order(&xas, start, HPAGE_PMD_ORDER);
- xas_store(&xas, hpage);
xa_locked:
xas_unlock_irq(&xas);
xa_unlocked:
@@ -2158,21 +2146,35 @@ xa_unlocked:
try_to_unmap_flush();
if (result == SCAN_SUCCEED) {
- struct page *page, *tmp;
- struct folio *folio;
-
/*
* Replacing old pages with new one has succeeded, now we
- * need to copy the content and free the old pages.
+ * attempt to copy the contents.
*/
index = start;
- list_for_each_entry_safe(page, tmp, &pagelist, lru) {
+ list_for_each_entry(page, &pagelist, lru) {
while (index < page->index) {
clear_highpage(hpage + (index % HPAGE_PMD_NR));
index++;
}
- copy_highpage(hpage + (page->index % HPAGE_PMD_NR),
- page);
+ if (copy_mc_highpage(hpage + (page->index % HPAGE_PMD_NR),
+ page) > 0) {
+ result = SCAN_COPY_MC;
+ break;
+ }
+ index++;
+ }
+ while (result == SCAN_SUCCEED && index < end) {
+ clear_highpage(hpage + (index % HPAGE_PMD_NR));
+ index++;
+ }
+ }
+
+ if (result == SCAN_SUCCEED) {
+ /*
+ * Copying old pages to huge one has succeeded, now we
+ * need to free the old pages.
+ */
+ list_for_each_entry_safe(page, tmp, &pagelist, lru) {
list_del(&page->lru);
page->mapping = NULL;
page_ref_unfreeze(page, 1);
@@ -2180,12 +2182,23 @@ xa_unlocked:
ClearPageUnevictable(page);
unlock_page(page);
put_page(page);
- index++;
}
- while (index < end) {
- clear_highpage(hpage + (index % HPAGE_PMD_NR));
- index++;
+
+ xas_lock_irq(&xas);
+ if (is_shmem)
+ __mod_lruvec_page_state(hpage, NR_SHMEM_THPS, nr);
+ else
+ __mod_lruvec_page_state(hpage, NR_FILE_THPS, nr);
+
+ if (nr_none) {
+ __mod_lruvec_page_state(hpage, NR_FILE_PAGES, nr_none);
+ /* nr_none is always 0 for non-shmem. */
+ __mod_lruvec_page_state(hpage, NR_SHMEM, nr_none);
}
+ /* Join all the small entries into a single multi-index entry. */
+ xas_set_order(&xas, start, HPAGE_PMD_ORDER);
+ xas_store(&xas, hpage);
+ xas_unlock_irq(&xas);
folio = page_folio(hpage);
folio_mark_uptodate(folio);
@@ -2203,8 +2216,6 @@ xa_unlocked:
unlock_page(hpage);
hpage = NULL;
} else {
- struct page *page;
-
/* Something went wrong: roll back page cache changes */
xas_lock_irq(&xas);
if (nr_none) {
@@ -2238,6 +2249,13 @@ xa_unlocked:
xas_lock_irq(&xas);
}
VM_BUG_ON(nr_none);
+ /*
+ * Undo the updates of filemap_nr_thps_inc for non-SHMEM file only.
+ * This undo is not needed unless failure is due to SCAN_COPY_MC.
+ */
+ if (!is_shmem && result == SCAN_COPY_MC)
+ filemap_nr_thps_dec(mapping);
+
xas_unlock_irq(&xas);
hpage->mapping = NULL;
_
Patches currently in -mm which might be from jiaqiyan@google.com are
mm-khugepaged-recover-from-poisoned-anonymous-memory.patch
mm-hwpoison-introduce-copy_mc_highpage.patch
mm-khugepaged-recover-from-poisoned-file-backed-memory.patch
* + mm-khugepaged-recover-from-poisoned-file-backed-memory.patch added to mm-unstable branch
@ 2023-03-29 20:27 Andrew Morton
0 siblings, 0 replies; 5+ messages in thread
From: Andrew Morton @ 2023-03-29 20:27 UTC (permalink / raw)
To: mm-commits, wangkefeng.wang, tony.luck, tongtiangen, stevensd,
shy828301, osalvador, naoya.horiguchi, linmiaohe, kirill,
kirill.shutemov, hughd, jiaqiyan, akpm
The patch titled
Subject: mm/khugepaged: recover from poisoned file-backed memory
has been added to the -mm mm-unstable branch. Its filename is
mm-khugepaged-recover-from-poisoned-file-backed-memory.patch
------------------------------------------------------
From: Jiaqi Yan <jiaqiyan@google.com>
Subject: mm/khugepaged: recover from poisoned file-backed memory
Date: Wed, 29 Mar 2023 08:11:21 -0700
Make collapse_file roll back when copying pages fails. More concretely:
- extract copying operations into a separate loop
- postpone the updates for nr_none until both scanning and copying
succeeded
- postpone joining small xarray entries until both scanning and copying
succeeded
- postpone the update operations to NR_XXX_THPS until both scanning and
copying succeeded
- for non-SHMEM file, roll back filemap_nr_thps_inc if scan succeeded but
copying failed
Tested manually:
0. Enable khugepaged on system under test. Mount tmpfs at /mnt/ramdisk.
1. Start a two-thread application. Each thread allocates a chunk of
non-huge memory buffer from /mnt/ramdisk.
2. Pick 4 random buffer addresses (2 in each thread) and inject
uncorrectable memory errors at the corresponding physical addresses.
3. Signal both threads to make their memory buffers collapsible, i.e.,
by calling madvise(MADV_HUGEPAGE).
4. Wait, then check the kernel log: khugepaged is able to recover from
poisoned pages by skipping them.
5. Signal both threads to inspect their buffer contents and verify that
there is no data corruption.
Link: https://lkml.kernel.org/r/20230329151121.949896-4-jiaqiyan@google.com
Signed-off-by: Jiaqi Yan <jiaqiyan@google.com>
Cc: David Stevens <stevensd@chromium.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Naoya Horiguchi <naoya.horiguchi@nec.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Tong Tiangen <tongtiangen@huawei.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Yang Shi <shy828301@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
mm/khugepaged.c | 88 +++++++++++++++++++++++++++++-----------------
1 file changed, 56 insertions(+), 32 deletions(-)
--- a/mm/khugepaged.c~mm-khugepaged-recover-from-poisoned-file-backed-memory
+++ a/mm/khugepaged.c
@@ -1872,6 +1872,9 @@ static int collapse_file(struct mm_struc
{
struct address_space *mapping = file->f_mapping;
struct page *hpage;
+ struct page *page;
+ struct page *tmp;
+ struct folio *folio;
pgoff_t index = 0, end = start + HPAGE_PMD_NR;
LIST_HEAD(pagelist);
XA_STATE_ORDER(xas, &mapping->i_pages, start, HPAGE_PMD_ORDER);
@@ -1916,8 +1919,7 @@ static int collapse_file(struct mm_struc
xas_set(&xas, start);
for (index = start; index < end; index++) {
- struct page *page = xas_next(&xas);
- struct folio *folio;
+ page = xas_next(&xas);
VM_BUG_ON(index != xas.xa_index);
if (is_shmem) {
@@ -2097,12 +2099,8 @@ out_unlock:
put_page(page);
goto xa_unlocked;
}
- nr = thp_nr_pages(hpage);
- if (is_shmem)
- __mod_lruvec_page_state(hpage, NR_SHMEM_THPS, nr);
- else {
- __mod_lruvec_page_state(hpage, NR_FILE_THPS, nr);
+ if (!is_shmem) {
filemap_nr_thps_inc(mapping);
/*
* Paired with smp_mb() in do_dentry_open() to ensure
@@ -2113,21 +2111,9 @@ out_unlock:
smp_mb();
if (inode_is_open_for_write(mapping->host)) {
result = SCAN_FAIL;
- __mod_lruvec_page_state(hpage, NR_FILE_THPS, -nr);
filemap_nr_thps_dec(mapping);
- goto xa_locked;
}
}
-
- if (nr_none) {
- __mod_lruvec_page_state(hpage, NR_FILE_PAGES, nr_none);
- /* nr_none is always 0 for non-shmem. */
- __mod_lruvec_page_state(hpage, NR_SHMEM, nr_none);
- }
-
- /* Join all the small entries into a single multi-index entry */
- xas_set_order(&xas, start, HPAGE_PMD_ORDER);
- xas_store(&xas, hpage);
xa_locked:
xas_unlock_irq(&xas);
xa_unlocked:
@@ -2140,21 +2126,36 @@ xa_unlocked:
try_to_unmap_flush();
if (result == SCAN_SUCCEED) {
- struct page *page, *tmp;
- struct folio *folio;
-
/*
* Replacing old pages with new one has succeeded, now we
- * need to copy the content and free the old pages.
+ * attempt to copy the contents.
*/
index = start;
- list_for_each_entry_safe(page, tmp, &pagelist, lru) {
+ list_for_each_entry(page, &pagelist, lru) {
while (index < page->index) {
clear_highpage(hpage + (index % HPAGE_PMD_NR));
index++;
}
- copy_highpage(hpage + (page->index % HPAGE_PMD_NR),
- page);
+ if (copy_mc_highpage(hpage + (page->index % HPAGE_PMD_NR),
+ page) > 0) {
+ result = SCAN_COPY_MC;
+ break;
+ }
+ index++;
+ }
+ while (result == SCAN_SUCCEED && index < end) {
+ clear_highpage(hpage + (index % HPAGE_PMD_NR));
+ index++;
+ }
+ }
+
+ nr = thp_nr_pages(hpage);
+ if (result == SCAN_SUCCEED) {
+ /*
+ * Copying old pages to huge one has succeeded, now we
+ * need to free the old pages.
+ */
+ list_for_each_entry_safe(page, tmp, &pagelist, lru) {
list_del(&page->lru);
page->mapping = NULL;
page_ref_unfreeze(page, 1);
@@ -2162,12 +2163,23 @@ xa_unlocked:
ClearPageUnevictable(page);
unlock_page(page);
put_page(page);
- index++;
}
- while (index < end) {
- clear_highpage(hpage + (index % HPAGE_PMD_NR));
- index++;
+
+ xas_lock_irq(&xas);
+ if (is_shmem)
+ __mod_lruvec_page_state(hpage, NR_SHMEM_THPS, nr);
+ else
+ __mod_lruvec_page_state(hpage, NR_FILE_THPS, nr);
+
+ if (nr_none) {
+ __mod_lruvec_page_state(hpage, NR_FILE_PAGES, nr_none);
+ /* nr_none is always 0 for non-shmem. */
+ __mod_lruvec_page_state(hpage, NR_SHMEM, nr_none);
}
+ /* Join all the small entries into a single multi-index entry. */
+ xas_set_order(&xas, start, HPAGE_PMD_ORDER);
+ xas_store(&xas, hpage);
+ xas_unlock_irq(&xas);
folio = page_folio(hpage);
folio_mark_uptodate(folio);
@@ -2185,8 +2197,6 @@ xa_unlocked:
unlock_page(hpage);
hpage = NULL;
} else {
- struct page *page;
-
/* Something went wrong: roll back page cache changes */
xas_lock_irq(&xas);
if (nr_none) {
@@ -2220,6 +2230,20 @@ xa_unlocked:
xas_lock_irq(&xas);
}
VM_BUG_ON(nr_none);
+ /*
+ * Undo the updates of filemap_nr_thps_inc for non-SHMEM
+ * file only. This undo is not needed unless failure is
+ * due to SCAN_COPY_MC.
+ */
+ if (!is_shmem && result == SCAN_COPY_MC) {
+ filemap_nr_thps_dec(mapping);
+ /*
+ * Paired with smp_mb() in do_dentry_open() to
+ * ensure the update to nr_thps is visible.
+ */
+ smp_mb();
+ }
+
xas_unlock_irq(&xas);
hpage->mapping = NULL;
_
Patches currently in -mm which might be from jiaqiyan@google.com are
mm-khugepaged-recover-from-poisoned-anonymous-memory.patch
mm-hwpoison-introduce-copy_mc_highpage.patch
mm-khugepaged-recover-from-poisoned-file-backed-memory.patch
* + mm-khugepaged-recover-from-poisoned-file-backed-memory.patch added to mm-unstable branch
@ 2023-03-28 20:18 Andrew Morton
0 siblings, 0 replies; 5+ messages in thread
From: Andrew Morton @ 2023-03-28 20:18 UTC (permalink / raw)
To: mm-commits, wangkefeng.wang, tony.luck, tongtiangen, shy828301,
osalvador, naoya.horiguchi, linmiaohe, kirill, kirill.shutemov,
jiaqiyan, akpm
The patch titled
Subject: mm/khugepaged: recover from poisoned file-backed memory
has been added to the -mm mm-unstable branch. Its filename is
mm-khugepaged-recover-from-poisoned-file-backed-memory.patch
------------------------------------------------------
From: Jiaqi Yan <jiaqiyan@google.com>
Subject: mm/khugepaged: recover from poisoned file-backed memory
Date: Mon, 27 Mar 2023 14:15:48 -0700
Make collapse_file roll back when copying pages fails. More concretely:
- extract copying operations into a separate loop
- postpone the updates for nr_none until both scanning and copying
succeeded
- postpone joining small xarray entries until both scanning and copying
succeeded
- postpone the update operations to NR_XXX_THPS until both scanning and
copying succeeded
- for non-SHMEM file, roll back filemap_nr_thps_inc if scan succeeded but
copying failed
Tested manually:
0. Enable khugepaged on system under test. Mount tmpfs at /mnt/ramdisk.
1. Start a two-thread application. Each thread allocates a chunk of
non-huge memory buffer from /mnt/ramdisk.
2. Pick 4 random buffer addresses (2 in each thread) and inject
uncorrectable memory errors at the corresponding physical addresses.
3. Signal both threads to make their memory buffers collapsible, i.e.,
by calling madvise(MADV_HUGEPAGE).
4. Wait, then check the kernel log: khugepaged is able to recover from
poisoned pages by skipping them.
5. Signal both threads to inspect their buffer contents and verify that
there is no data corruption.
Link: https://lkml.kernel.org/r/20230327211548.462509-4-jiaqiyan@google.com
Signed-off-by: Jiaqi Yan <jiaqiyan@google.com>
Reviewed-by: Yang Shi <shy828301@gmail.com>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Naoya Horiguchi <naoya.horiguchi@nec.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Tong Tiangen <tongtiangen@huawei.com>
Cc: Tony Luck <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
mm/khugepaged.c | 86 ++++++++++++++++++++++++++++------------------
1 file changed, 54 insertions(+), 32 deletions(-)
--- a/mm/khugepaged.c~mm-khugepaged-recover-from-poisoned-file-backed-memory
+++ a/mm/khugepaged.c
@@ -1874,6 +1874,9 @@ static int collapse_file(struct mm_struc
{
struct address_space *mapping = file->f_mapping;
struct page *hpage;
+ struct page *page;
+ struct page *tmp;
+ struct folio *folio;
pgoff_t index = 0, end = start + HPAGE_PMD_NR;
LIST_HEAD(pagelist);
XA_STATE_ORDER(xas, &mapping->i_pages, start, HPAGE_PMD_ORDER);
@@ -1918,8 +1921,7 @@ static int collapse_file(struct mm_struc
xas_set(&xas, start);
for (index = start; index < end; index++) {
- struct page *page = xas_next(&xas);
- struct folio *folio;
+ page = xas_next(&xas);
VM_BUG_ON(index != xas.xa_index);
if (is_shmem) {
@@ -2099,12 +2101,8 @@ out_unlock:
put_page(page);
goto xa_unlocked;
}
- nr = thp_nr_pages(hpage);
- if (is_shmem)
- __mod_lruvec_page_state(hpage, NR_SHMEM_THPS, nr);
- else {
- __mod_lruvec_page_state(hpage, NR_FILE_THPS, nr);
+ if (!is_shmem) {
filemap_nr_thps_inc(mapping);
/*
* Paired with smp_mb() in do_dentry_open() to ensure
@@ -2115,21 +2113,9 @@ out_unlock:
smp_mb();
if (inode_is_open_for_write(mapping->host)) {
result = SCAN_FAIL;
- __mod_lruvec_page_state(hpage, NR_FILE_THPS, -nr);
filemap_nr_thps_dec(mapping);
- goto xa_locked;
}
}
-
- if (nr_none) {
- __mod_lruvec_page_state(hpage, NR_FILE_PAGES, nr_none);
- /* nr_none is always 0 for non-shmem. */
- __mod_lruvec_page_state(hpage, NR_SHMEM, nr_none);
- }
-
- /* Join all the small entries into a single multi-index entry */
- xas_set_order(&xas, start, HPAGE_PMD_ORDER);
- xas_store(&xas, hpage);
xa_locked:
xas_unlock_irq(&xas);
xa_unlocked:
@@ -2142,21 +2128,36 @@ xa_unlocked:
try_to_unmap_flush();
if (result == SCAN_SUCCEED) {
- struct page *page, *tmp;
- struct folio *folio;
-
/*
* Replacing old pages with new one has succeeded, now we
- * need to copy the content and free the old pages.
+ * attempt to copy the contents.
*/
index = start;
- list_for_each_entry_safe(page, tmp, &pagelist, lru) {
+ list_for_each_entry(page, &pagelist, lru) {
while (index < page->index) {
clear_highpage(hpage + (index % HPAGE_PMD_NR));
index++;
}
- copy_highpage(hpage + (page->index % HPAGE_PMD_NR),
- page);
+ if (copy_mc_highpage(hpage + (page->index % HPAGE_PMD_NR),
+ page) > 0) {
+ result = SCAN_COPY_MC;
+ break;
+ }
+ index++;
+ }
+ while (result == SCAN_SUCCEED && index < end) {
+ clear_highpage(hpage + (index % HPAGE_PMD_NR));
+ index++;
+ }
+ }
+
+ nr = thp_nr_pages(hpage);
+ if (result == SCAN_SUCCEED) {
+ /*
+ * Copying old pages to huge one has succeeded, now we
+ * need to free the old pages.
+ */
+ list_for_each_entry_safe(page, tmp, &pagelist, lru) {
list_del(&page->lru);
page->mapping = NULL;
page_ref_unfreeze(page, 1);
@@ -2164,12 +2165,23 @@ xa_unlocked:
ClearPageUnevictable(page);
unlock_page(page);
put_page(page);
- index++;
}
- while (index < end) {
- clear_highpage(hpage + (index % HPAGE_PMD_NR));
- index++;
+
+ xas_lock_irq(&xas);
+ if (is_shmem)
+ __mod_lruvec_page_state(hpage, NR_SHMEM_THPS, nr);
+ else
+ __mod_lruvec_page_state(hpage, NR_FILE_THPS, nr);
+
+ if (nr_none) {
+ __mod_lruvec_page_state(hpage, NR_FILE_PAGES, nr_none);
+ /* nr_none is always 0 for non-shmem. */
+ __mod_lruvec_page_state(hpage, NR_SHMEM, nr_none);
}
+ /* Join all the small entries into a single multi-index entry. */
+ xas_set_order(&xas, start, HPAGE_PMD_ORDER);
+ xas_store(&xas, hpage);
+ xas_unlock_irq(&xas);
folio = page_folio(hpage);
folio_mark_uptodate(folio);
@@ -2187,8 +2199,6 @@ xa_unlocked:
unlock_page(hpage);
hpage = NULL;
} else {
- struct page *page;
-
/* Something went wrong: roll back page cache changes */
xas_lock_irq(&xas);
if (nr_none) {
@@ -2222,6 +2232,18 @@ xa_unlocked:
xas_lock_irq(&xas);
}
VM_BUG_ON(nr_none);
+ /*
+ * Undo the updates of filemap_nr_thps_inc for non-SHMEM
+ * file only. This undo is not needed unless failure is
+ * due to SCAN_COPY_MC.
+ *
+ * Paired with smp_mb() in do_dentry_open() to ensure the
+ * update to nr_thps is visible.
+ */
+ smp_mb();
+ if (!is_shmem && result == SCAN_COPY_MC)
+ filemap_nr_thps_dec(mapping);
+
xas_unlock_irq(&xas);
hpage->mapping = NULL;
_
Patches currently in -mm which might be from jiaqiyan@google.com are
mm-khugepaged-recover-from-poisoned-anonymous-memory.patch
mm-hwpoison-introduce-copy_mc_highpage.patch
mm-khugepaged-recover-from-poisoned-file-backed-memory.patch
* + mm-khugepaged-recover-from-poisoned-file-backed-memory.patch added to mm-unstable branch
@ 2022-12-16 23:49 Andrew Morton
0 siblings, 0 replies; 5+ messages in thread
From: Andrew Morton @ 2022-12-16 23:49 UTC (permalink / raw)
To: mm-commits, wangkefeng.wang, tony.luck, tongtiangen, shy828301,
osalvador, naoya.horiguchi, linmiaohe, kirill.shutemov, jiaqiyan,
akpm
The patch titled
Subject: mm/khugepaged: recover from poisoned file-backed memory
has been added to the -mm mm-unstable branch. Its filename is
mm-khugepaged-recover-from-poisoned-file-backed-memory.patch
------------------------------------------------------
From: Jiaqi Yan <jiaqiyan@google.com>
Subject: mm/khugepaged: recover from poisoned file-backed memory
Date: Mon, 5 Dec 2022 15:40:59 -0800
Make collapse_file roll back when copying pages fails. More concretely:
- extract copying operations into a separate loop
- postpone the updates for nr_none until both scanning and copying
succeeded
- postpone joining small xarray entries until both scanning and copying
succeeded
- postpone the update operations to NR_XXX_THPS until both scanning and
copying succeeded
- for non-SHMEM file, roll back filemap_nr_thps_inc if scan succeeded but
copying failed
Tested manually:
0. Enable khugepaged on system under test. Mount tmpfs at /mnt/ramdisk.
1. Start a two-thread application. Each thread allocates a chunk of
non-huge memory buffer from /mnt/ramdisk.
2. Pick 4 random buffer addresses (2 in each thread) and inject
uncorrectable memory errors at the corresponding physical addresses.
3. Signal both threads to make their memory buffers collapsible, i.e.,
by calling madvise(MADV_HUGEPAGE).
4. Wait, then check the kernel log: khugepaged is able to recover from
poisoned pages by skipping them.
5. Signal both threads to inspect their buffer contents and verify that
there is no data corruption.
Link: https://lkml.kernel.org/r/20221205234059.42971-3-jiaqiyan@google.com
Signed-off-by: Jiaqi Yan <jiaqiyan@google.com>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Naoya Horiguchi <naoya.horiguchi@nec.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Tong Tiangen <tongtiangen@huawei.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Yang Shi <shy828301@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
mm/khugepaged.c | 78 ++++++++++++++++++++++++++++------------------
1 file changed, 48 insertions(+), 30 deletions(-)
--- a/mm/khugepaged.c~mm-khugepaged-recover-from-poisoned-file-backed-memory
+++ a/mm/khugepaged.c
@@ -1839,6 +1839,9 @@ static int collapse_file(struct mm_struc
{
struct address_space *mapping = file->f_mapping;
struct page *hpage;
+ struct page *page;
+ struct page *tmp;
+ struct folio *folio;
pgoff_t index = 0, end = start + HPAGE_PMD_NR;
LIST_HEAD(pagelist);
XA_STATE_ORDER(xas, &mapping->i_pages, start, HPAGE_PMD_ORDER);
@@ -1883,8 +1886,7 @@ static int collapse_file(struct mm_struc
xas_set(&xas, start);
for (index = start; index < end; index++) {
- struct page *page = xas_next(&xas);
- struct folio *folio;
+ page = xas_next(&xas);
VM_BUG_ON(index != xas.xa_index);
if (is_shmem) {
@@ -2066,10 +2068,7 @@ out_unlock:
}
nr = thp_nr_pages(hpage);
- if (is_shmem)
- __mod_lruvec_page_state(hpage, NR_SHMEM_THPS, nr);
- else {
- __mod_lruvec_page_state(hpage, NR_FILE_THPS, nr);
+ if (!is_shmem) {
filemap_nr_thps_inc(mapping);
/*
* Paired with smp_mb() in do_dentry_open() to ensure
@@ -2080,21 +2079,10 @@ out_unlock:
smp_mb();
if (inode_is_open_for_write(mapping->host)) {
result = SCAN_FAIL;
- __mod_lruvec_page_state(hpage, NR_FILE_THPS, -nr);
filemap_nr_thps_dec(mapping);
goto xa_locked;
}
}
-
- if (nr_none) {
- __mod_lruvec_page_state(hpage, NR_FILE_PAGES, nr_none);
- /* nr_none is always 0 for non-shmem. */
- __mod_lruvec_page_state(hpage, NR_SHMEM, nr_none);
- }
-
- /* Join all the small entries into a single multi-index entry */
- xas_set_order(&xas, start, HPAGE_PMD_ORDER);
- xas_store(&xas, hpage);
xa_locked:
xas_unlock_irq(&xas);
xa_unlocked:
@@ -2107,21 +2095,35 @@ xa_unlocked:
try_to_unmap_flush();
if (result == SCAN_SUCCEED) {
- struct page *page, *tmp;
- struct folio *folio;
-
/*
* Replacing old pages with new one has succeeded, now we
- * need to copy the content and free the old pages.
+ * attempt to copy the contents.
*/
index = start;
- list_for_each_entry_safe(page, tmp, &pagelist, lru) {
+ list_for_each_entry(page, &pagelist, lru) {
while (index < page->index) {
clear_highpage(hpage + (index % HPAGE_PMD_NR));
index++;
}
- copy_highpage(hpage + (page->index % HPAGE_PMD_NR),
- page);
+ if (copy_mc_page(hpage + (page->index % HPAGE_PMD_NR),
+ page) > 0) {
+ result = SCAN_COPY_MC;
+ break;
+ }
+ index++;
+ }
+ while (result == SCAN_SUCCEED && index < end) {
+ clear_highpage(hpage + (index % HPAGE_PMD_NR));
+ index++;
+ }
+ }
+
+ if (result == SCAN_SUCCEED) {
+ /*
+ * Copying old pages to huge one has succeeded, now we
+ * need to free the old pages.
+ */
+ list_for_each_entry_safe(page, tmp, &pagelist, lru) {
list_del(&page->lru);
page->mapping = NULL;
page_ref_unfreeze(page, 1);
@@ -2129,12 +2131,23 @@ xa_unlocked:
ClearPageUnevictable(page);
unlock_page(page);
put_page(page);
- index++;
}
- while (index < end) {
- clear_highpage(hpage + (index % HPAGE_PMD_NR));
- index++;
+
+ xas_lock_irq(&xas);
+ if (is_shmem)
+ __mod_lruvec_page_state(hpage, NR_SHMEM_THPS, nr);
+ else
+ __mod_lruvec_page_state(hpage, NR_FILE_THPS, nr);
+
+ if (nr_none) {
+ __mod_lruvec_page_state(hpage, NR_FILE_PAGES, nr_none);
+ /* nr_none is always 0 for non-shmem. */
+ __mod_lruvec_page_state(hpage, NR_SHMEM, nr_none);
}
+ /* Join all the small entries into a single multi-index entry. */
+ xas_set_order(&xas, start, HPAGE_PMD_ORDER);
+ xas_store(&xas, hpage);
+ xas_unlock_irq(&xas);
folio = page_folio(hpage);
folio_mark_uptodate(folio);
@@ -2152,8 +2165,6 @@ xa_unlocked:
unlock_page(hpage);
hpage = NULL;
} else {
- struct page *page;
-
/* Something went wrong: roll back page cache changes */
xas_lock_irq(&xas);
if (nr_none) {
@@ -2187,6 +2198,13 @@ xa_unlocked:
xas_lock_irq(&xas);
}
VM_BUG_ON(nr_none);
+ /*
+ * Undo the updates of filemap_nr_thps_inc for non-SHMEM file only.
+ * This undo is not needed unless failure is due to SCAN_COPY_MC.
+ */
+ if (!is_shmem && result == SCAN_COPY_MC)
+ filemap_nr_thps_dec(mapping);
+
xas_unlock_irq(&xas);
hpage->mapping = NULL;
_
Patches currently in -mm which might be from jiaqiyan@google.com are
mm-khugepaged-recover-from-poisoned-anonymous-memory.patch
mm-khugepaged-recover-from-poisoned-file-backed-memory.patch
* + mm-khugepaged-recover-from-poisoned-file-backed-memory.patch added to mm-unstable branch
@ 2022-11-07 20:53 Andrew Morton
0 siblings, 0 replies; 5+ messages in thread
From: Andrew Morton @ 2022-11-07 20:53 UTC (permalink / raw)
To: mm-commits, tony.luck, tongtiangen, shy828301, osalvador,
naoya.horiguchi, linmiaohe, kirill.shutemov, jiaqiyan, akpm
The patch titled
Subject: mm/khugepaged: recover from poisoned file-backed memory
has been added to the -mm mm-unstable branch. Its filename is
mm-khugepaged-recover-from-poisoned-file-backed-memory.patch
------------------------------------------------------
From: Jiaqi Yan <jiaqiyan@google.com>
Subject: mm/khugepaged: recover from poisoned file-backed memory
Date: Sun, 6 Nov 2022 18:53:59 -0800
Make collapse_file roll back when copying pages fails. More concretely:
- extract copying operations into a separate loop
- postpone the updates for nr_none until both scanning and copying
succeeded
- postpone joining small xarray entries until both scanning and copying
succeeded
- postpone the update operations to NR_XXX_THPS until both scanning and
copying succeeded
- for non-SHMEM file, roll back filemap_nr_thps_inc if scan succeeded but
copying failed
Tested manually:
0. Enable khugepaged on system under test. Mount tmpfs at /mnt/ramdisk.
1. Start a two-thread application. Each thread allocates a non-huge
memory buffer from /mnt/ramdisk.
2. Pick 4 random buffer addresses (2 in each thread) and inject
uncorrectable memory errors at their physical addresses.
3. Signal both threads to make their memory buffers collapsible, i.e.,
by calling madvise(MADV_HUGEPAGE).
4. Wait and then check kernel log: khugepaged is able to recover from
poisoned pages by skipping them.
5. Signal both threads to inspect their buffer contents and verify that
there is no data corruption.
Link: https://lkml.kernel.org/r/20221107025359.2911028-3-jiaqiyan@google.com
Signed-off-by: Jiaqi Yan <jiaqiyan@google.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Naoya Horiguchi <naoya.horiguchi@nec.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Tong Tiangen <tongtiangen@huawei.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Yang Shi <shy828301@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
mm/khugepaged.c | 74 +++++++++++++++++++++++++++-------------------
1 file changed, 45 insertions(+), 29 deletions(-)
--- a/mm/khugepaged.c~mm-khugepaged-recover-from-poisoned-file-backed-memory
+++ a/mm/khugepaged.c
@@ -1769,7 +1769,7 @@ static int collapse_file(struct mm_struc
struct collapse_control *cc)
{
struct address_space *mapping = file->f_mapping;
- struct page *hpage;
+ struct page *hpage, *page, *tmp;
pgoff_t index = 0, end = start + HPAGE_PMD_NR;
LIST_HEAD(pagelist);
XA_STATE_ORDER(xas, &mapping->i_pages, start, HPAGE_PMD_ORDER);
@@ -1814,7 +1814,7 @@ static int collapse_file(struct mm_struc
xas_set(&xas, start);
for (index = start; index < end; index++) {
- struct page *page = xas_next(&xas);
+ page = xas_next(&xas);
VM_BUG_ON(index != xas.xa_index);
if (is_shmem) {
@@ -1996,10 +1996,7 @@ out_unlock:
}
nr = thp_nr_pages(hpage);
- if (is_shmem)
- __mod_lruvec_page_state(hpage, NR_SHMEM_THPS, nr);
- else {
- __mod_lruvec_page_state(hpage, NR_FILE_THPS, nr);
+ if (!is_shmem) {
filemap_nr_thps_inc(mapping);
/*
* Paired with smp_mb() in do_dentry_open() to ensure
@@ -2010,21 +2007,10 @@ out_unlock:
smp_mb();
if (inode_is_open_for_write(mapping->host)) {
result = SCAN_FAIL;
- __mod_lruvec_page_state(hpage, NR_FILE_THPS, -nr);
filemap_nr_thps_dec(mapping);
goto xa_locked;
}
}
-
- if (nr_none) {
- __mod_lruvec_page_state(hpage, NR_FILE_PAGES, nr_none);
- /* nr_none is always 0 for non-shmem. */
- __mod_lruvec_page_state(hpage, NR_SHMEM, nr_none);
- }
-
- /* Join all the small entries into a single multi-index entry */
- xas_set_order(&xas, start, HPAGE_PMD_ORDER);
- xas_store(&xas, hpage);
xa_locked:
xas_unlock_irq(&xas);
xa_unlocked:
@@ -2037,20 +2023,34 @@ xa_unlocked:
try_to_unmap_flush();
if (result == SCAN_SUCCEED) {
- struct page *page, *tmp;
-
/*
* Replacing old pages with new one has succeeded, now we
- * need to copy the content and free the old pages.
+ * attempt to copy the contents.
*/
index = start;
- list_for_each_entry_safe(page, tmp, &pagelist, lru) {
+ list_for_each_entry(page, &pagelist, lru) {
while (index < page->index) {
clear_highpage(hpage + (index % HPAGE_PMD_NR));
index++;
}
- copy_highpage(hpage + (page->index % HPAGE_PMD_NR),
- page);
+ if (copy_highpage_mc(hpage + (page->index % HPAGE_PMD_NR), page)) {
+ result = SCAN_COPY_MC;
+ break;
+ }
+ index++;
+ }
+ while (result == SCAN_SUCCEED && index < end) {
+ clear_highpage(hpage + (index % HPAGE_PMD_NR));
+ index++;
+ }
+ }
+
+ if (result == SCAN_SUCCEED) {
+ /*
+ * Copying old pages to huge one has succeeded, now we
+ * need to free the old pages.
+ */
+ list_for_each_entry_safe(page, tmp, &pagelist, lru) {
list_del(&page->lru);
page->mapping = NULL;
page_ref_unfreeze(page, 1);
@@ -2058,12 +2058,23 @@ xa_unlocked:
ClearPageUnevictable(page);
unlock_page(page);
put_page(page);
- index++;
}
- while (index < end) {
- clear_highpage(hpage + (index % HPAGE_PMD_NR));
- index++;
+
+ xas_lock_irq(&xas);
+ if (is_shmem)
+ __mod_lruvec_page_state(hpage, NR_SHMEM_THPS, nr);
+ else
+ __mod_lruvec_page_state(hpage, NR_FILE_THPS, nr);
+
+ if (nr_none) {
+ __mod_lruvec_page_state(hpage, NR_FILE_PAGES, nr_none);
+ /* nr_none is always 0 for non-shmem. */
+ __mod_lruvec_page_state(hpage, NR_SHMEM, nr_none);
}
+ /* Join all the small entries into a single multi-index entry. */
+ xas_set_order(&xas, start, HPAGE_PMD_ORDER);
+ xas_store(&xas, hpage);
+ xas_unlock_irq(&xas);
SetPageUptodate(hpage);
page_ref_add(hpage, HPAGE_PMD_NR - 1);
@@ -2079,8 +2090,6 @@ xa_unlocked:
unlock_page(hpage);
hpage = NULL;
} else {
- struct page *page;
-
/* Something went wrong: roll back page cache changes */
xas_lock_irq(&xas);
if (nr_none) {
@@ -2114,6 +2123,13 @@ xa_unlocked:
xas_lock_irq(&xas);
}
VM_BUG_ON(nr_none);
+ /*
+ * Undo the updates of filemap_nr_thps_inc for non-SHMEM file only.
+ * This undo is not needed unless failure is due to SCAN_COPY_MC.
+ */
+ if (!is_shmem && result == SCAN_COPY_MC)
+ filemap_nr_thps_dec(mapping);
+
xas_unlock_irq(&xas);
hpage->mapping = NULL;
_
Patches currently in -mm which might be from jiaqiyan@google.com are
mm-khugepaged-recover-from-poisoned-anonymous-memory.patch
mm-khugepaged-recover-from-poisoned-file-backed-memory.patch
Thread overview: 5+ messages
2023-03-06 21:22 + mm-khugepaged-recover-from-poisoned-file-backed-memory.patch added to mm-unstable branch Andrew Morton
-- strict thread matches above, loose matches on Subject: below --
2023-03-29 20:27 Andrew Morton
2023-03-28 20:18 Andrew Morton
2022-12-16 23:49 Andrew Morton
2022-11-07 20:53 Andrew Morton