linux-mm.kvack.org archive mirror
* [PATCH v3 0/9] Hardening page _refcount
@ 2022-01-26 18:34 Pasha Tatashin
  2022-01-26 18:34 ` [PATCH v3 1/9] mm: add overflow and underflow checks for page->_refcount Pasha Tatashin
                   ` (8 more replies)
  0 siblings, 9 replies; 18+ messages in thread
From: Pasha Tatashin @ 2022-01-26 18:34 UTC (permalink / raw)
  To: pasha.tatashin, linux-kernel, linux-mm, linux-m68k,
	anshuman.khandual, willy, akpm, william.kucharski, mike.kravetz,
	vbabka, geert, schmitzmic, rostedt, mingo, hannes, guro,
	songmuchun, weixugc, gthelen, rientjes, pjt, hughd

Changelog:
v3:
- Sync with the latest linux-next
v2:
- As suggested by Matthew Wilcox, removed the "mm: page_ref_add_unless()
  does not trace 'u' argument" patch, as page_ref_add_unless() is going
  away.
v1:
- Sync with the latest linux-next
RFCv2:
- Use the "fetch" variant instead of the "return" variant of atomic
  instructions
- Allow negative values, as we are using all 32 bits of _refcount.


It is hard to root-cause _refcount problems, because they usually
manifest only after the damage has occurred.  Yet, they can lead to
catastrophic failures such as memory corruptions. A number of
refcount-related issues have been discovered recently [1], [2], [3].

Improve debuggability by adding more checks that ensure that
page->_refcount never turns negative (i.e. no double free, no free
after freeze, etc.):

- Check for overflow and underflow right in the functions that modify
  _refcount (a minimal sketch of the check pattern follows this list)
- Remove set_page_count(), so that we do not unconditionally overwrite
  _refcount with an unrestrained value
- Trace the return values in all functions that modify _refcount
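
A minimal sketch of the check pattern added in patch 1 (simplified
here; the real page_ref_add() also keeps the existing tracepoint call):

static inline void page_ref_add(struct page *page, int nr)
{
	int old_val = atomic_fetch_add(nr, &page->_refcount);
	int new_val = old_val + nr;

	/* The unsigned comparison catches any wrap of _refcount */
	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
}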

Applies against next-20220125.

Previous versions:
v2: https://lore.kernel.org/all/20211221150140.988298-1-pasha.tatashin@soleen.com
v1: https://lore.kernel.org/all/20211208203544.2297121-1-pasha.tatashin@soleen.com
RFCv2: https://lore.kernel.org/all/20211117012059.141450-1-pasha.tatashin@soleen.com
RFCv1: https://lore.kernel.org/all/20211026173822.502506-1-pasha.tatashin@soleen.com

[1] https://lore.kernel.org/all/xr9335nxwc5y.fsf@gthelen2.svl.corp.google.com
[2] https://lore.kernel.org/all/1582661774-30925-2-git-send-email-akaher@vmware.com
[3] https://lore.kernel.org/all/20210622021423.154662-3-mike.kravetz@oracle.com

Pasha Tatashin (9):
  mm: add overflow and underflow checks for page->_refcount
  mm: Avoid using set_page_count() in set_page_refcounted()
  mm: remove set_page_count() from page_frag_alloc_align
  mm: avoid using set_page_count() when pages are freed into allocator
  mm: rename init_page_count() -> page_ref_init()
  mm: remove set_page_count()
  mm: simplify page_ref_* functions
  mm: do not use atomic_set_release in page_ref_unfreeze()
  mm: use atomic_cmpxchg_acquire in page_ref_freeze().

 arch/m68k/mm/motorola.c         |   2 +-
 include/linux/mm.h              |   2 +-
 include/linux/page_ref.h        | 149 +++++++++++++++-----------------
 include/trace/events/page_ref.h |  58 ++++++++-----
 mm/debug_page_ref.c             |  22 +----
 mm/internal.h                   |   6 +-
 mm/page_alloc.c                 |  19 ++--
 7 files changed, 132 insertions(+), 126 deletions(-)

-- 
2.35.0.rc0.227.g00780c9af4-goog



^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH v3 1/9] mm: add overflow and underflow checks for page->_refcount
  2022-01-26 18:34 [PATCH v3 0/9] Hardening page _refcount Pasha Tatashin
@ 2022-01-26 18:34 ` Pasha Tatashin
  2022-01-26 18:59   ` Matthew Wilcox
  2022-01-27 18:30   ` Vlastimil Babka
  2022-01-26 18:34 ` [PATCH v3 2/9] mm: Avoid using set_page_count() in set_page_refcounted() Pasha Tatashin
                   ` (7 subsequent siblings)
  8 siblings, 2 replies; 18+ messages in thread
From: Pasha Tatashin @ 2022-01-26 18:34 UTC (permalink / raw)
  To: pasha.tatashin, linux-kernel, linux-mm, linux-m68k,
	anshuman.khandual, willy, akpm, william.kucharski, mike.kravetz,
	vbabka, geert, schmitzmic, rostedt, mingo, hannes, guro,
	songmuchun, weixugc, gthelen, rientjes, pjt, hughd

The problems with page->_refcount are hard to debug, because usually
when they are detected, the damage has occurred a long time ago. Yet,
the problems with invalid page refcount may be catastrophic and lead to
memory corruptions.

Reduce the scope of when the _refcount problems manifest themselves by
adding checks for underflows and overflows into functions that modify
_refcount.

Use the atomic_fetch_* functions to get the old value of _refcount,
and use it to check for overflow/underflow.

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 include/linux/page_ref.h | 59 +++++++++++++++++++++++++++++-----------
 1 file changed, 43 insertions(+), 16 deletions(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 2e677e6ad09f..fe4864f7f69c 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -117,7 +117,10 @@ static inline void init_page_count(struct page *page)
 
 static inline void page_ref_add(struct page *page, int nr)
 {
-	atomic_add(nr, &page->_refcount);
+	int old_val = atomic_fetch_add(nr, &page->_refcount);
+	int new_val = old_val + nr;
+
+	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod))
 		__page_ref_mod(page, nr);
 }
@@ -129,7 +132,10 @@ static inline void folio_ref_add(struct folio *folio, int nr)
 
 static inline void page_ref_sub(struct page *page, int nr)
 {
-	atomic_sub(nr, &page->_refcount);
+	int old_val = atomic_fetch_sub(nr, &page->_refcount);
+	int new_val = old_val - nr;
+
+	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod))
 		__page_ref_mod(page, -nr);
 }
@@ -141,11 +147,13 @@ static inline void folio_ref_sub(struct folio *folio, int nr)
 
 static inline int page_ref_sub_return(struct page *page, int nr)
 {
-	int ret = atomic_sub_return(nr, &page->_refcount);
+	int old_val = atomic_fetch_sub(nr, &page->_refcount);
+	int new_val = old_val - nr;
 
+	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod_and_return))
-		__page_ref_mod_and_return(page, -nr, ret);
-	return ret;
+		__page_ref_mod_and_return(page, -nr, new_val);
+	return new_val;
 }
 
 static inline int folio_ref_sub_return(struct folio *folio, int nr)
@@ -155,7 +163,10 @@ static inline int folio_ref_sub_return(struct folio *folio, int nr)
 
 static inline void page_ref_inc(struct page *page)
 {
-	atomic_inc(&page->_refcount);
+	int old_val = atomic_fetch_inc(&page->_refcount);
+	int new_val = old_val + 1;
+
+	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod))
 		__page_ref_mod(page, 1);
 }
@@ -167,7 +178,10 @@ static inline void folio_ref_inc(struct folio *folio)
 
 static inline void page_ref_dec(struct page *page)
 {
-	atomic_dec(&page->_refcount);
+	int old_val = atomic_fetch_dec(&page->_refcount);
+	int new_val = old_val - 1;
+
+	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod))
 		__page_ref_mod(page, -1);
 }
@@ -179,8 +193,11 @@ static inline void folio_ref_dec(struct folio *folio)
 
 static inline int page_ref_sub_and_test(struct page *page, int nr)
 {
-	int ret = atomic_sub_and_test(nr, &page->_refcount);
+	int old_val = atomic_fetch_sub(nr, &page->_refcount);
+	int new_val = old_val - nr;
+	int ret = new_val == 0;
 
+	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod_and_test))
 		__page_ref_mod_and_test(page, -nr, ret);
 	return ret;
@@ -193,11 +210,13 @@ static inline int folio_ref_sub_and_test(struct folio *folio, int nr)
 
 static inline int page_ref_inc_return(struct page *page)
 {
-	int ret = atomic_inc_return(&page->_refcount);
+	int old_val = atomic_fetch_inc(&page->_refcount);
+	int new_val = old_val + 1;
 
+	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod_and_return))
-		__page_ref_mod_and_return(page, 1, ret);
-	return ret;
+		__page_ref_mod_and_return(page, 1, new_val);
+	return new_val;
 }
 
 static inline int folio_ref_inc_return(struct folio *folio)
@@ -207,8 +226,11 @@ static inline int folio_ref_inc_return(struct folio *folio)
 
 static inline int page_ref_dec_and_test(struct page *page)
 {
-	int ret = atomic_dec_and_test(&page->_refcount);
+	int old_val = atomic_fetch_dec(&page->_refcount);
+	int new_val = old_val - 1;
+	int ret = new_val == 0;
 
+	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod_and_test))
 		__page_ref_mod_and_test(page, -1, ret);
 	return ret;
@@ -221,11 +243,13 @@ static inline int folio_ref_dec_and_test(struct folio *folio)
 
 static inline int page_ref_dec_return(struct page *page)
 {
-	int ret = atomic_dec_return(&page->_refcount);
+	int old_val = atomic_fetch_dec(&page->_refcount);
+	int new_val = old_val - 1;
 
+	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod_and_return))
-		__page_ref_mod_and_return(page, -1, ret);
-	return ret;
+		__page_ref_mod_and_return(page, -1, new_val);
+	return new_val;
 }
 
 static inline int folio_ref_dec_return(struct folio *folio)
@@ -235,8 +259,11 @@ static inline int folio_ref_dec_return(struct folio *folio)
 
 static inline bool page_ref_add_unless(struct page *page, int nr, int u)
 {
-	bool ret = atomic_add_unless(&page->_refcount, nr, u);
+	int old_val = atomic_fetch_add_unless(&page->_refcount, nr, u);
+	int new_val = old_val + nr;
+	int ret = old_val != u;
 
+	VM_BUG_ON_PAGE(ret && (unsigned int)new_val < (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod_unless))
 		__page_ref_mod_unless(page, nr, ret);
 	return ret;
-- 
2.35.0.rc0.227.g00780c9af4-goog



^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH v3 2/9] mm: Avoid using set_page_count() in set_page_refcounted()
  2022-01-26 18:34 [PATCH v3 0/9] Hardening page _refcount Pasha Tatashin
  2022-01-26 18:34 ` [PATCH v3 1/9] mm: add overflow and underflow checks for page->_refcount Pasha Tatashin
@ 2022-01-26 18:34 ` Pasha Tatashin
  2022-01-26 18:34 ` [PATCH v3 3/9] mm: remove set_page_count() from page_frag_alloc_align Pasha Tatashin
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 18+ messages in thread
From: Pasha Tatashin @ 2022-01-26 18:34 UTC (permalink / raw)
  To: pasha.tatashin, linux-kernel, linux-mm, linux-m68k,
	anshuman.khandual, willy, akpm, william.kucharski, mike.kravetz,
	vbabka, geert, schmitzmic, rostedt, mingo, hannes, guro,
	songmuchun, weixugc, gthelen, rientjes, pjt, hughd

set_page_refcounted() converts a non-refcounted page that has
(page->_refcount == 0) into a refcounted page by setting _refcount to
1.

The current approach uses the following logic:

VM_BUG_ON_PAGE(page_ref_count(page), page);
set_page_count(page, 1);

However, if _refcount changes from 0 to 1 between the VM_BUG_ON_PAGE()
and set_page_count() we can break _refcount, which can cause other
problems such as memory corruptions.

Instead, use a safer method: increment _refcount and verify that the
value after the increment is indeed 1.

refcnt = page_ref_inc_return(page);
VM_BUG_ON_PAGE(refcnt != 1, page);

Use page_ref_inc_return() to avoid unconditionally overwriting
the _refcount value with set_page_count(), and check the return value.

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 mm/internal.h | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 4c2d06a2f50b..6b74f7f32613 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -141,9 +141,11 @@ static inline bool page_evictable(struct page *page)
  */
 static inline void set_page_refcounted(struct page *page)
 {
+	int refcnt;
+
 	VM_BUG_ON_PAGE(PageTail(page), page);
-	VM_BUG_ON_PAGE(page_ref_count(page), page);
-	set_page_count(page, 1);
+	refcnt = page_ref_inc_return(page);
+	VM_BUG_ON_PAGE(refcnt != 1, page);
 }
 
 extern unsigned long highest_memmap_pfn;
-- 
2.35.0.rc0.227.g00780c9af4-goog



^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH v3 3/9] mm: remove set_page_count() from page_frag_alloc_align
  2022-01-26 18:34 [PATCH v3 0/9] Hardening page _refcount Pasha Tatashin
  2022-01-26 18:34 ` [PATCH v3 1/9] mm: add overflow and underflow checks for page->_refcount Pasha Tatashin
  2022-01-26 18:34 ` [PATCH v3 2/9] mm: Avoid using set_page_count() in set_page_refcounted() Pasha Tatashin
@ 2022-01-26 18:34 ` Pasha Tatashin
  2022-01-26 18:34 ` [PATCH v3 4/9] mm: avoid using set_page_count() when pages are freed into allocator Pasha Tatashin
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 18+ messages in thread
From: Pasha Tatashin @ 2022-01-26 18:34 UTC (permalink / raw)
  To: pasha.tatashin, linux-kernel, linux-mm, linux-m68k,
	anshuman.khandual, willy, akpm, william.kucharski, mike.kravetz,
	vbabka, geert, schmitzmic, rostedt, mingo, hannes, guro,
	songmuchun, weixugc, gthelen, rientjes, pjt, hughd

set_page_count() unconditionally resets the value of _refcount, which
is dangerous, as it is not programmatically verified. Instead, we rely
on comments like: "OK, page count is 0, we can safely set it".

Add a new refcount function, page_ref_add_return(), which returns the
new refcount value after adding to it. Use the return value to verify
that the _refcount was indeed the expected one.

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 include/linux/page_ref.h | 11 +++++++++++
 mm/page_alloc.c          |  6 ++++--
 2 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index fe4864f7f69c..03e21ce2f1bd 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -115,6 +115,17 @@ static inline void init_page_count(struct page *page)
 	set_page_count(page, 1);
 }
 
+static inline int page_ref_add_return(struct page *page, int nr)
+{
+	int old_val = atomic_fetch_add(nr, &page->_refcount);
+	int new_val = old_val + nr;
+
+	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
+	if (page_ref_tracepoint_active(page_ref_mod_and_return))
+		__page_ref_mod_and_return(page, nr, new_val);
+	return new_val;
+}
+
 static inline void page_ref_add(struct page *page, int nr)
 {
 	int old_val = atomic_fetch_add(nr, &page->_refcount);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8dd6399bafb5..5a9167bda279 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5528,6 +5528,7 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 	unsigned int size = PAGE_SIZE;
 	struct page *page;
 	int offset;
+	int refcnt;
 
 	if (unlikely(!nc->va)) {
 refill:
@@ -5566,8 +5567,9 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 		/* if size can vary use size else just use PAGE_SIZE */
 		size = nc->size;
 #endif
-		/* OK, page count is 0, we can safely set it */
-		set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
+		/* page count is 0, set it to PAGE_FRAG_CACHE_MAX_SIZE + 1 */
+		refcnt = page_ref_add_return(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
+		VM_BUG_ON_PAGE(refcnt != PAGE_FRAG_CACHE_MAX_SIZE + 1, page);
 
 		/* reset page count bias and offset to start of new frag */
 		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-- 
2.35.0.rc0.227.g00780c9af4-goog



^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH v3 4/9] mm: avoid using set_page_count() when pages are freed into allocator
  2022-01-26 18:34 [PATCH v3 0/9] Hardening page _refcount Pasha Tatashin
                   ` (2 preceding siblings ...)
  2022-01-26 18:34 ` [PATCH v3 3/9] mm: remove set_page_count() from page_frag_alloc_align Pasha Tatashin
@ 2022-01-26 18:34 ` Pasha Tatashin
  2022-01-26 18:34 ` [PATCH v3 5/9] mm: rename init_page_count() -> page_ref_init() Pasha Tatashin
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 18+ messages in thread
From: Pasha Tatashin @ 2022-01-26 18:34 UTC (permalink / raw)
  To: pasha.tatashin, linux-kernel, linux-mm, linux-m68k,
	anshuman.khandual, willy, akpm, william.kucharski, mike.kravetz,
	vbabka, geert, schmitzmic, rostedt, mingo, hannes, guro,
	songmuchun, weixugc, gthelen, rientjes, pjt, hughd

When struct pages are first initialized, the page->_refcount field is
set to 1. However, later, when pages are freed into the allocator, we
set _refcount to 0 via set_page_count(). Unconditionally resetting
_refcount is dangerous.

Instead, use page_ref_dec_return(), and verify that the _refcount is
what we expect.

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 mm/page_alloc.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5a9167bda279..0fa100152a2a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1668,6 +1668,7 @@ void __free_pages_core(struct page *page, unsigned int order)
 	unsigned int nr_pages = 1 << order;
 	struct page *p = page;
 	unsigned int loop;
+	int refcnt;
 
 	/*
 	 * When initializing the memmap, __init_single_page() sets the refcount
@@ -1678,10 +1679,12 @@ void __free_pages_core(struct page *page, unsigned int order)
 	for (loop = 0; loop < (nr_pages - 1); loop++, p++) {
 		prefetchw(p + 1);
 		__ClearPageReserved(p);
-		set_page_count(p, 0);
+		refcnt = page_ref_dec_return(p);
+		VM_BUG_ON_PAGE(refcnt, p);
 	}
 	__ClearPageReserved(p);
-	set_page_count(p, 0);
+	refcnt = page_ref_dec_return(p);
+	VM_BUG_ON_PAGE(refcnt, p);
 
 	atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
 
@@ -2253,10 +2256,12 @@ void __init init_cma_reserved_pageblock(struct page *page)
 {
 	unsigned i = pageblock_nr_pages;
 	struct page *p = page;
+	int refcnt;
 
 	do {
 		__ClearPageReserved(p);
-		set_page_count(p, 0);
+		refcnt = page_ref_dec_return(p);
+		VM_BUG_ON_PAGE(refcnt, p);
 	} while (++p, --i);
 
 	set_pageblock_migratetype(page, MIGRATE_CMA);
-- 
2.35.0.rc0.227.g00780c9af4-goog



^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH v3 5/9] mm: rename init_page_count() -> page_ref_init()
  2022-01-26 18:34 [PATCH v3 0/9] Hardening page _refcount Pasha Tatashin
                   ` (3 preceding siblings ...)
  2022-01-26 18:34 ` [PATCH v3 4/9] mm: avoid using set_page_count() when pages are freed into allocator Pasha Tatashin
@ 2022-01-26 18:34 ` Pasha Tatashin
  2022-01-26 18:34 ` [PATCH v3 6/9] mm: remove set_page_count() Pasha Tatashin
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 18+ messages in thread
From: Pasha Tatashin @ 2022-01-26 18:34 UTC (permalink / raw)
  To: pasha.tatashin, linux-kernel, linux-mm, linux-m68k,
	anshuman.khandual, willy, akpm, william.kucharski, mike.kravetz,
	vbabka, geert, schmitzmic, rostedt, mingo, hannes, guro,
	songmuchun, weixugc, gthelen, rientjes, pjt, hughd

Now that set_page_count() is no longer called from outside and is
about to be removed, init_page_count() is the only remaining function
that unconditionally sets _refcount, and it is restricted to setting
it only to 1.

Align init_page_count() with the other page_ref_* functions by
renaming it to page_ref_init().

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
---
 arch/m68k/mm/motorola.c  |  2 +-
 include/linux/mm.h       |  2 +-
 include/linux/page_ref.h | 10 +++++++---
 mm/page_alloc.c          |  2 +-
 4 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c
index ecbe948f4c1a..dd3b77d03d5c 100644
--- a/arch/m68k/mm/motorola.c
+++ b/arch/m68k/mm/motorola.c
@@ -133,7 +133,7 @@ void __init init_pointer_table(void *table, int type)
 
 	/* unreserve the page so it's possible to free that page */
 	__ClearPageReserved(PD_PAGE(dp));
-	init_page_count(PD_PAGE(dp));
+	page_ref_init(PD_PAGE(dp));
 
 	return;
 }
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 45bcd6f78141..cd8b9a592235 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2467,7 +2467,7 @@ extern void reserve_bootmem_region(phys_addr_t start, phys_addr_t end);
 static inline void free_reserved_page(struct page *page)
 {
 	ClearPageReserved(page);
-	init_page_count(page);
+	page_ref_init(page);
 	__free_page(page);
 	adjust_managed_page_count(page, 1);
 }
diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 03e21ce2f1bd..1af12a0d7ba1 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -107,10 +107,14 @@ static inline void folio_set_count(struct folio *folio, int v)
 }
 
 /*
- * Setup the page count before being freed into the page allocator for
- * the first time (boot or memory hotplug)
+ * Setup the page refcount to one before being freed into the page allocator.
+ * The memory might not be initialized and therefore there cannot be any
+ * assumptions about the current value of page->_refcount. This call should be
+ * done during boot when memory is being initialized, during memory hotplug
+ * when new memory is added, or when a previous reserved memory is unreserved
+ * this is the first time kernel take control of the given memory.
  */
-static inline void init_page_count(struct page *page)
+static inline void page_ref_init(struct page *page)
 {
 	set_page_count(page, 1);
 }
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0fa100152a2a..cbe444d74e8a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1570,7 +1570,7 @@ static void __meminit __init_single_page(struct page *page, unsigned long pfn,
 {
 	mm_zero_struct_page(page);
 	set_page_links(page, zone, nid, pfn);
-	init_page_count(page);
+	page_ref_init(page);
 	page_mapcount_reset(page);
 	page_cpupid_reset_last(page);
 	page_kasan_tag_reset(page);
-- 
2.35.0.rc0.227.g00780c9af4-goog



^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH v3 6/9] mm: remove set_page_count()
  2022-01-26 18:34 [PATCH v3 0/9] Hardening page _refcount Pasha Tatashin
                   ` (4 preceding siblings ...)
  2022-01-26 18:34 ` [PATCH v3 5/9] mm: rename init_page_count() -> page_ref_init() Pasha Tatashin
@ 2022-01-26 18:34 ` Pasha Tatashin
  2022-01-26 18:34 ` [PATCH v3 7/9] mm: simplify page_ref_* functions Pasha Tatashin
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 18+ messages in thread
From: Pasha Tatashin @ 2022-01-26 18:34 UTC (permalink / raw)
  To: pasha.tatashin, linux-kernel, linux-mm, linux-m68k,
	anshuman.khandual, willy, akpm, william.kucharski, mike.kravetz,
	vbabka, geert, schmitzmic, rostedt, mingo, hannes, guro,
	songmuchun, weixugc, gthelen, rientjes, pjt, hughd

set_page_count() is dangerous because it resets _refcount to an
arbitrary value. Instead, we now initialize _refcount to 1 only once,
and the rest of the time we use add/dec/cmpxchg so that the counter
can be tracked continuously.

Remove set_page_count() and add new tracing hooks to page_ref_init().

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 include/linux/page_ref.h        | 27 ++++++++-----------
 include/trace/events/page_ref.h | 46 ++++++++++++++++++++++++++++-----
 mm/debug_page_ref.c             |  8 +++---
 3 files changed, 54 insertions(+), 27 deletions(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 1af12a0d7ba1..d7316881626c 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -7,7 +7,7 @@
 #include <linux/page-flags.h>
 #include <linux/tracepoint-defs.h>
 
-DECLARE_TRACEPOINT(page_ref_set);
+DECLARE_TRACEPOINT(page_ref_init);
 DECLARE_TRACEPOINT(page_ref_mod);
 DECLARE_TRACEPOINT(page_ref_mod_and_test);
 DECLARE_TRACEPOINT(page_ref_mod_and_return);
@@ -26,7 +26,7 @@ DECLARE_TRACEPOINT(page_ref_unfreeze);
  */
 #define page_ref_tracepoint_active(t) tracepoint_enabled(t)
 
-extern void __page_ref_set(struct page *page, int v);
+extern void __page_ref_init(struct page *page);
 extern void __page_ref_mod(struct page *page, int v);
 extern void __page_ref_mod_and_test(struct page *page, int v, int ret);
 extern void __page_ref_mod_and_return(struct page *page, int v, int ret);
@@ -38,7 +38,7 @@ extern void __page_ref_unfreeze(struct page *page, int v);
 
 #define page_ref_tracepoint_active(t) false
 
-static inline void __page_ref_set(struct page *page, int v)
+static inline void __page_ref_init(struct page *page)
 {
 }
 static inline void __page_ref_mod(struct page *page, int v)
@@ -94,18 +94,6 @@ static inline int page_count(const struct page *page)
 	return folio_ref_count(page_folio(page));
 }
 
-static inline void set_page_count(struct page *page, int v)
-{
-	atomic_set(&page->_refcount, v);
-	if (page_ref_tracepoint_active(page_ref_set))
-		__page_ref_set(page, v);
-}
-
-static inline void folio_set_count(struct folio *folio, int v)
-{
-	set_page_count(&folio->page, v);
-}
-
 /*
  * Setup the page refcount to one before being freed into the page allocator.
  * The memory might not be initialized and therefore there cannot be any
@@ -116,7 +104,14 @@ static inline void folio_set_count(struct folio *folio, int v)
  */
 static inline void page_ref_init(struct page *page)
 {
-	set_page_count(page, 1);
+	atomic_set(&page->_refcount, 1);
+	if (page_ref_tracepoint_active(page_ref_init))
+		__page_ref_init(page);
+}
+
+static inline void folio_ref_init(struct folio *folio)
+{
+	page_ref_init(&folio->page);
 }
 
 static inline int page_ref_add_return(struct page *page, int nr)
diff --git a/include/trace/events/page_ref.h b/include/trace/events/page_ref.h
index 8a99c1cd417b..87551bb1df9e 100644
--- a/include/trace/events/page_ref.h
+++ b/include/trace/events/page_ref.h
@@ -10,6 +10,45 @@
 #include <linux/tracepoint.h>
 #include <trace/events/mmflags.h>
 
+DECLARE_EVENT_CLASS(page_ref_init_template,
+
+	TP_PROTO(struct page *page),
+
+	TP_ARGS(page),
+
+	TP_STRUCT__entry(
+		__field(unsigned long, pfn)
+		__field(unsigned long, flags)
+		__field(int, count)
+		__field(int, mapcount)
+		__field(void *, mapping)
+		__field(int, mt)
+		__field(int, val)
+	),
+
+	TP_fast_assign(
+		__entry->pfn = page_to_pfn(page);
+		__entry->flags = page->flags;
+		__entry->count = page_ref_count(page);
+		__entry->mapcount = page_mapcount(page);
+		__entry->mapping = page->mapping;
+		__entry->mt = get_pageblock_migratetype(page);
+	),
+
+	TP_printk("pfn=0x%lx flags=%s count=%d mapcount=%d mapping=%p mt=%d",
+		__entry->pfn,
+		show_page_flags(__entry->flags & PAGEFLAGS_MASK),
+		__entry->count,
+		__entry->mapcount, __entry->mapping, __entry->mt)
+);
+
+DEFINE_EVENT(page_ref_init_template, page_ref_init,
+
+	TP_PROTO(struct page *page),
+
+	TP_ARGS(page)
+);
+
 DECLARE_EVENT_CLASS(page_ref_mod_template,
 
 	TP_PROTO(struct page *page, int v),
@@ -44,13 +83,6 @@ DECLARE_EVENT_CLASS(page_ref_mod_template,
 		__entry->val)
 );
 
-DEFINE_EVENT(page_ref_mod_template, page_ref_set,
-
-	TP_PROTO(struct page *page, int v),
-
-	TP_ARGS(page, v)
-);
-
 DEFINE_EVENT(page_ref_mod_template, page_ref_mod,
 
 	TP_PROTO(struct page *page, int v),
diff --git a/mm/debug_page_ref.c b/mm/debug_page_ref.c
index f3b2c9d3ece2..e32149734122 100644
--- a/mm/debug_page_ref.c
+++ b/mm/debug_page_ref.c
@@ -5,12 +5,12 @@
 #define CREATE_TRACE_POINTS
 #include <trace/events/page_ref.h>
 
-void __page_ref_set(struct page *page, int v)
+void __page_ref_init(struct page *page)
 {
-	trace_page_ref_set(page, v);
+	trace_page_ref_init(page);
 }
-EXPORT_SYMBOL(__page_ref_set);
-EXPORT_TRACEPOINT_SYMBOL(page_ref_set);
+EXPORT_SYMBOL(__page_ref_init);
+EXPORT_TRACEPOINT_SYMBOL(page_ref_init);
 
 void __page_ref_mod(struct page *page, int v)
 {
-- 
2.35.0.rc0.227.g00780c9af4-goog



^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH v3 7/9] mm: simplify page_ref_* functions
  2022-01-26 18:34 [PATCH v3 0/9] Hardening page _refcount Pasha Tatashin
                   ` (5 preceding siblings ...)
  2022-01-26 18:34 ` [PATCH v3 6/9] mm: remove set_page_count() Pasha Tatashin
@ 2022-01-26 18:34 ` Pasha Tatashin
  2022-01-26 18:34 ` [PATCH v3 8/9] mm: do not use atomic_set_release in page_ref_unfreeze() Pasha Tatashin
  2022-01-26 18:34 ` [PATCH v3 9/9] mm: use atomic_cmpxchg_acquire in page_ref_freeze() Pasha Tatashin
  8 siblings, 0 replies; 18+ messages in thread
From: Pasha Tatashin @ 2022-01-26 18:34 UTC (permalink / raw)
  To: pasha.tatashin, linux-kernel, linux-mm, linux-m68k,
	anshuman.khandual, willy, akpm, william.kucharski, mike.kravetz,
	vbabka, geert, schmitzmic, rostedt, mingo, hannes, guro,
	songmuchun, weixugc, gthelen, rientjes, pjt, hughd

Now that we are using the atomic_fetch_* variants to add/sub/inc/dec
the page _refcount, it makes sense to combine the page_ref_* return
and non-return functions.

Also remove some extra tracepoints for the non-return variants. This
improves traceability by always recording the new _refcount value
after the modification has occurred.

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 include/linux/page_ref.h        | 102 +++++++++-----------------------
 include/trace/events/page_ref.h |  18 +-----
 mm/debug_page_ref.c             |  14 -----
 3 files changed, 31 insertions(+), 103 deletions(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index d7316881626c..243fc60ae6c8 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -8,8 +8,6 @@
 #include <linux/tracepoint-defs.h>
 
 DECLARE_TRACEPOINT(page_ref_init);
-DECLARE_TRACEPOINT(page_ref_mod);
-DECLARE_TRACEPOINT(page_ref_mod_and_test);
 DECLARE_TRACEPOINT(page_ref_mod_and_return);
 DECLARE_TRACEPOINT(page_ref_mod_unless);
 DECLARE_TRACEPOINT(page_ref_freeze);
@@ -27,8 +25,6 @@ DECLARE_TRACEPOINT(page_ref_unfreeze);
 #define page_ref_tracepoint_active(t) tracepoint_enabled(t)
 
 extern void __page_ref_init(struct page *page);
-extern void __page_ref_mod(struct page *page, int v);
-extern void __page_ref_mod_and_test(struct page *page, int v, int ret);
 extern void __page_ref_mod_and_return(struct page *page, int v, int ret);
 extern void __page_ref_mod_unless(struct page *page, int v, int u);
 extern void __page_ref_freeze(struct page *page, int v, int ret);
@@ -41,12 +37,6 @@ extern void __page_ref_unfreeze(struct page *page, int v);
 static inline void __page_ref_init(struct page *page)
 {
 }
-static inline void __page_ref_mod(struct page *page, int v)
-{
-}
-static inline void __page_ref_mod_and_test(struct page *page, int v, int ret)
-{
-}
 static inline void __page_ref_mod_and_return(struct page *page, int v, int ret)
 {
 }
@@ -127,12 +117,7 @@ static inline int page_ref_add_return(struct page *page, int nr)
 
 static inline void page_ref_add(struct page *page, int nr)
 {
-	int old_val = atomic_fetch_add(nr, &page->_refcount);
-	int new_val = old_val + nr;
-
-	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
-	if (page_ref_tracepoint_active(page_ref_mod))
-		__page_ref_mod(page, nr);
+	page_ref_add_return(page, nr);
 }
 
 static inline void folio_ref_add(struct folio *folio, int nr)
@@ -140,30 +125,25 @@ static inline void folio_ref_add(struct folio *folio, int nr)
 	page_ref_add(&folio->page, nr);
 }
 
-static inline void page_ref_sub(struct page *page, int nr)
+static inline int page_ref_sub_return(struct page *page, int nr)
 {
 	int old_val = atomic_fetch_sub(nr, &page->_refcount);
 	int new_val = old_val - nr;
 
 	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
-	if (page_ref_tracepoint_active(page_ref_mod))
-		__page_ref_mod(page, -nr);
+	if (page_ref_tracepoint_active(page_ref_mod_and_return))
+		__page_ref_mod_and_return(page, -nr, new_val);
+	return new_val;
 }
 
-static inline void folio_ref_sub(struct folio *folio, int nr)
+static inline void page_ref_sub(struct page *page, int nr)
 {
-	page_ref_sub(&folio->page, nr);
+	page_ref_sub_return(page, nr);
 }
 
-static inline int page_ref_sub_return(struct page *page, int nr)
+static inline void folio_ref_sub(struct folio *folio, int nr)
 {
-	int old_val = atomic_fetch_sub(nr, &page->_refcount);
-	int new_val = old_val - nr;
-
-	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
-	if (page_ref_tracepoint_active(page_ref_mod_and_return))
-		__page_ref_mod_and_return(page, -nr, new_val);
-	return new_val;
+	page_ref_sub(&folio->page, nr);
 }
 
 static inline int folio_ref_sub_return(struct folio *folio, int nr)
@@ -171,14 +151,20 @@ static inline int folio_ref_sub_return(struct folio *folio, int nr)
 	return page_ref_sub_return(&folio->page, nr);
 }
 
-static inline void page_ref_inc(struct page *page)
+static inline int page_ref_inc_return(struct page *page)
 {
 	int old_val = atomic_fetch_inc(&page->_refcount);
 	int new_val = old_val + 1;
 
 	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
-	if (page_ref_tracepoint_active(page_ref_mod))
-		__page_ref_mod(page, 1);
+	if (page_ref_tracepoint_active(page_ref_mod_and_return))
+		__page_ref_mod_and_return(page, 1, new_val);
+	return new_val;
+}
+
+static inline void page_ref_inc(struct page *page)
+{
+	page_ref_inc_return(page);
 }
 
 static inline void folio_ref_inc(struct folio *folio)
@@ -186,14 +172,20 @@ static inline void folio_ref_inc(struct folio *folio)
 	page_ref_inc(&folio->page);
 }
 
-static inline void page_ref_dec(struct page *page)
+static inline int page_ref_dec_return(struct page *page)
 {
 	int old_val = atomic_fetch_dec(&page->_refcount);
 	int new_val = old_val - 1;
 
 	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
-	if (page_ref_tracepoint_active(page_ref_mod))
-		__page_ref_mod(page, -1);
+	if (page_ref_tracepoint_active(page_ref_mod_and_return))
+		__page_ref_mod_and_return(page, -1, new_val);
+	return new_val;
+}
+
+static inline void page_ref_dec(struct page *page)
+{
+	page_ref_dec_return(page);
 }
 
 static inline void folio_ref_dec(struct folio *folio)
@@ -203,14 +195,7 @@ static inline void folio_ref_dec(struct folio *folio)
 
 static inline int page_ref_sub_and_test(struct page *page, int nr)
 {
-	int old_val = atomic_fetch_sub(nr, &page->_refcount);
-	int new_val = old_val - nr;
-	int ret = new_val == 0;
-
-	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
-	if (page_ref_tracepoint_active(page_ref_mod_and_test))
-		__page_ref_mod_and_test(page, -nr, ret);
-	return ret;
+	return page_ref_sub_return(page, nr) == 0;
 }
 
 static inline int folio_ref_sub_and_test(struct folio *folio, int nr)
@@ -218,17 +203,6 @@ static inline int folio_ref_sub_and_test(struct folio *folio, int nr)
 	return page_ref_sub_and_test(&folio->page, nr);
 }
 
-static inline int page_ref_inc_return(struct page *page)
-{
-	int old_val = atomic_fetch_inc(&page->_refcount);
-	int new_val = old_val + 1;
-
-	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
-	if (page_ref_tracepoint_active(page_ref_mod_and_return))
-		__page_ref_mod_and_return(page, 1, new_val);
-	return new_val;
-}
-
 static inline int folio_ref_inc_return(struct folio *folio)
 {
 	return page_ref_inc_return(&folio->page);
@@ -236,14 +210,7 @@ static inline int folio_ref_inc_return(struct folio *folio)
 
 static inline int page_ref_dec_and_test(struct page *page)
 {
-	int old_val = atomic_fetch_dec(&page->_refcount);
-	int new_val = old_val - 1;
-	int ret = new_val == 0;
-
-	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
-	if (page_ref_tracepoint_active(page_ref_mod_and_test))
-		__page_ref_mod_and_test(page, -1, ret);
-	return ret;
+	return page_ref_dec_return(page) == 0;
 }
 
 static inline int folio_ref_dec_and_test(struct folio *folio)
@@ -251,17 +218,6 @@ static inline int folio_ref_dec_and_test(struct folio *folio)
 	return page_ref_dec_and_test(&folio->page);
 }
 
-static inline int page_ref_dec_return(struct page *page)
-{
-	int old_val = atomic_fetch_dec(&page->_refcount);
-	int new_val = old_val - 1;
-
-	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
-	if (page_ref_tracepoint_active(page_ref_mod_and_return))
-		__page_ref_mod_and_return(page, -1, new_val);
-	return new_val;
-}
-
 static inline int folio_ref_dec_return(struct folio *folio)
 {
 	return page_ref_dec_return(&folio->page);
diff --git a/include/trace/events/page_ref.h b/include/trace/events/page_ref.h
index 87551bb1df9e..35cd795aa7c6 100644
--- a/include/trace/events/page_ref.h
+++ b/include/trace/events/page_ref.h
@@ -49,7 +49,7 @@ DEFINE_EVENT(page_ref_init_template, page_ref_init,
 	TP_ARGS(page)
 );
 
-DECLARE_EVENT_CLASS(page_ref_mod_template,
+DECLARE_EVENT_CLASS(page_ref_unfreeze_template,
 
 	TP_PROTO(struct page *page, int v),
 
@@ -83,13 +83,6 @@ DECLARE_EVENT_CLASS(page_ref_mod_template,
 		__entry->val)
 );
 
-DEFINE_EVENT(page_ref_mod_template, page_ref_mod,
-
-	TP_PROTO(struct page *page, int v),
-
-	TP_ARGS(page, v)
-);
-
 DECLARE_EVENT_CLASS(page_ref_mod_and_test_template,
 
 	TP_PROTO(struct page *page, int v, int ret),
@@ -126,13 +119,6 @@ DECLARE_EVENT_CLASS(page_ref_mod_and_test_template,
 		__entry->val, __entry->ret)
 );
 
-DEFINE_EVENT(page_ref_mod_and_test_template, page_ref_mod_and_test,
-
-	TP_PROTO(struct page *page, int v, int ret),
-
-	TP_ARGS(page, v, ret)
-);
-
 DEFINE_EVENT(page_ref_mod_and_test_template, page_ref_mod_and_return,
 
 	TP_PROTO(struct page *page, int v, int ret),
@@ -154,7 +140,7 @@ DEFINE_EVENT(page_ref_mod_and_test_template, page_ref_freeze,
 	TP_ARGS(page, v, ret)
 );
 
-DEFINE_EVENT(page_ref_mod_template, page_ref_unfreeze,
+DEFINE_EVENT(page_ref_unfreeze_template, page_ref_unfreeze,
 
 	TP_PROTO(struct page *page, int v),
 
diff --git a/mm/debug_page_ref.c b/mm/debug_page_ref.c
index e32149734122..1de9d93cca25 100644
--- a/mm/debug_page_ref.c
+++ b/mm/debug_page_ref.c
@@ -12,20 +12,6 @@ void __page_ref_init(struct page *page)
 EXPORT_SYMBOL(__page_ref_init);
 EXPORT_TRACEPOINT_SYMBOL(page_ref_init);
 
-void __page_ref_mod(struct page *page, int v)
-{
-	trace_page_ref_mod(page, v);
-}
-EXPORT_SYMBOL(__page_ref_mod);
-EXPORT_TRACEPOINT_SYMBOL(page_ref_mod);
-
-void __page_ref_mod_and_test(struct page *page, int v, int ret)
-{
-	trace_page_ref_mod_and_test(page, v, ret);
-}
-EXPORT_SYMBOL(__page_ref_mod_and_test);
-EXPORT_TRACEPOINT_SYMBOL(page_ref_mod_and_test);
-
 void __page_ref_mod_and_return(struct page *page, int v, int ret)
 {
 	trace_page_ref_mod_and_return(page, v, ret);
-- 
2.35.0.rc0.227.g00780c9af4-goog



^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH v3 8/9] mm: do not use atomic_set_release in page_ref_unfreeze()
  2022-01-26 18:34 [PATCH v3 0/9] Hardening page _refcount Pasha Tatashin
                   ` (6 preceding siblings ...)
  2022-01-26 18:34 ` [PATCH v3 7/9] mm: simplify page_ref_* functions Pasha Tatashin
@ 2022-01-26 18:34 ` Pasha Tatashin
  2022-01-26 18:34 ` [PATCH v3 9/9] mm: use atomic_cmpxchg_acquire in page_ref_freeze() Pasha Tatashin
  8 siblings, 0 replies; 18+ messages in thread
From: Pasha Tatashin @ 2022-01-26 18:34 UTC (permalink / raw)
  To: pasha.tatashin, linux-kernel, linux-mm, linux-m68k,
	anshuman.khandual, willy, akpm, william.kucharski, mike.kravetz,
	vbabka, geert, schmitzmic, rostedt, mingo, hannes, guro,
	songmuchun, weixugc, gthelen, rientjes, pjt, hughd

Currently, in page_ref_unfreeze() we set the _refcount value only
after verifying that the old value was indeed 0:

VM_BUG_ON_PAGE(page_count(page) != 0, page);
< the _refcount may change here>
atomic_set_release(&page->_refcount, count);

To avoid the small gap where _refcount may change between the check
and the set, verify the value of _refcount at the time of the set
operation itself.

Use atomic_xchg_release(), and verify at set time that the old value
was 0.

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 include/linux/page_ref.h | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 243fc60ae6c8..9efabeff4e06 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -322,10 +322,9 @@ static inline int folio_ref_freeze(struct folio *folio, int count)
 
 static inline void page_ref_unfreeze(struct page *page, int count)
 {
-	VM_BUG_ON_PAGE(page_count(page) != 0, page);
-	VM_BUG_ON(count == 0);
+	int old_val = atomic_xchg_release(&page->_refcount, count);
 
-	atomic_set_release(&page->_refcount, count);
+	VM_BUG_ON_PAGE(count == 0 || old_val != 0, page);
 	if (page_ref_tracepoint_active(page_ref_unfreeze))
 		__page_ref_unfreeze(page, count);
 }
-- 
2.35.0.rc0.227.g00780c9af4-goog



^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH v3 9/9] mm: use atomic_cmpxchg_acquire in page_ref_freeze().
  2022-01-26 18:34 [PATCH v3 0/9] Hardening page _refcount Pasha Tatashin
                   ` (7 preceding siblings ...)
  2022-01-26 18:34 ` [PATCH v3 8/9] mm: do not use atomic_set_release in page_ref_unfreeze() Pasha Tatashin
@ 2022-01-26 18:34 ` Pasha Tatashin
  8 siblings, 0 replies; 18+ messages in thread
From: Pasha Tatashin @ 2022-01-26 18:34 UTC (permalink / raw)
  To: pasha.tatashin, linux-kernel, linux-mm, linux-m68k,
	anshuman.khandual, willy, akpm, william.kucharski, mike.kravetz,
	vbabka, geert, schmitzmic, rostedt, mingo, hannes, guro,
	songmuchun, weixugc, gthelen, rientjes, pjt, hughd

page_ref_freeze and page_ref_unfreeze are designed to be used as a pair.
They protect critical sections where struct page can be modified.

page_ref_unfreeze() uses a _release() atomic operation, but
page_ref_freeze() does not use a matching _acquire(), as it is assumed
that cmpxchg provides a full barrier.

Instead, use atomic_cmpxchg_acquire() to ensure that the memory model
is explicitly followed.

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 include/linux/page_ref.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 9efabeff4e06..45be731d8919 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -308,7 +308,8 @@ static inline bool folio_try_get_rcu(struct folio *folio)
 
 static inline int page_ref_freeze(struct page *page, int count)
 {
-	int ret = likely(atomic_cmpxchg(&page->_refcount, count, 0) == count);
+	int old_val = atomic_cmpxchg_acquire(&page->_refcount, count, 0);
+	int ret = likely(old_val == count);
 
 	if (page_ref_tracepoint_active(page_ref_freeze))
 		__page_ref_freeze(page, count, ret);
-- 
2.35.0.rc0.227.g00780c9af4-goog



^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [PATCH v3 1/9] mm: add overflow and underflow checks for page->_refcount
  2022-01-26 18:34 ` [PATCH v3 1/9] mm: add overflow and underflow checks for page->_refcount Pasha Tatashin
@ 2022-01-26 18:59   ` Matthew Wilcox
  2022-01-26 19:22     ` Pasha Tatashin
  2022-01-27 18:30   ` Vlastimil Babka
  1 sibling, 1 reply; 18+ messages in thread
From: Matthew Wilcox @ 2022-01-26 18:59 UTC (permalink / raw)
  To: Pasha Tatashin
  Cc: linux-kernel, linux-mm, linux-m68k, anshuman.khandual, akpm,
	william.kucharski, mike.kravetz, vbabka, geert, schmitzmic,
	rostedt, mingo, hannes, guro, songmuchun, weixugc, gthelen,
	rientjes, pjt, hughd

On Wed, Jan 26, 2022 at 06:34:21PM +0000, Pasha Tatashin wrote:
> The problems with page->_refcount are hard to debug, because usually
> when they are detected, the damage has occurred a long time ago. Yet,
> the problems with invalid page refcount may be catastrophic and lead to
> memory corruptions.
> 
> Reduce the scope of when the _refcount problems manifest themselves by
> adding checks for underflows and overflows into functions that modify
> _refcount.

If you're chasing a bug like this, presumably you turn on page
tracepoints.  So could we reduce the cost of this by putting the
VM_BUG_ON_PAGE parts into __page_ref_mod() et al?  Yes, we'd need to
change the arguments to those functions to pass in old & new, but that
should be a cheap change compared to embedding the VM_BUG_ON_PAGE.
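
Something like the following, perhaps (an untested sketch only; the
exact signature of __page_ref_mod() is up for debate):

static inline void page_ref_add(struct page *page, int nr)
{
	int old_val = atomic_fetch_add(nr, &page->_refcount);

	if (page_ref_tracepoint_active(page_ref_mod))
		__page_ref_mod(page, nr, old_val, old_val + nr);
}

void __page_ref_mod(struct page *page, int v, int old_val, int new_val)
{
	/* wrap check moved out of the fast path */
	VM_BUG_ON_PAGE(v >= 0 ?
		       (unsigned int)new_val < (unsigned int)old_val :
		       (unsigned int)new_val > (unsigned int)old_val, page);
	trace_page_ref_mod(page, v);
}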

>  static inline void page_ref_add(struct page *page, int nr)
>  {
> -	atomic_add(nr, &page->_refcount);
> +	int old_val = atomic_fetch_add(nr, &page->_refcount);
> +	int new_val = old_val + nr;
> +
> +	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
>  	if (page_ref_tracepoint_active(page_ref_mod))
>  		__page_ref_mod(page, nr);
>  }


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v3 1/9] mm: add overflow and underflow checks for page->_refcount
  2022-01-26 18:59   ` Matthew Wilcox
@ 2022-01-26 19:22     ` Pasha Tatashin
  2022-01-26 19:45       ` Matthew Wilcox
  2022-01-27 18:27       ` Vlastimil Babka
  0 siblings, 2 replies; 18+ messages in thread
From: Pasha Tatashin @ 2022-01-26 19:22 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: LKML, linux-mm, linux-m68k, Anshuman Khandual, Andrew Morton,
	william.kucharski, Mike Kravetz, Vlastimil Babka,
	Geert Uytterhoeven, schmitzmic, Steven Rostedt, Ingo Molnar,
	Johannes Weiner, Roman Gushchin, Muchun Song, Wei Xu,
	Greg Thelen, David Rientjes, Paul Turner, Hugh Dickins

On Wed, Jan 26, 2022 at 1:59 PM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Wed, Jan 26, 2022 at 06:34:21PM +0000, Pasha Tatashin wrote:
> > The problems with page->_refcount are hard to debug, because usually
> > when they are detected, the damage has occurred a long time ago. Yet,
> > the problems with invalid page refcount may be catastrophic and lead to
> > memory corruptions.
> >
> > Reduce the scope of when the _refcount problems manifest themselves by
> > adding checks for underflows and overflows into functions that modify
> > _refcount.
>
> If you're chasing a bug like this, presumably you turn on page
> tracepoints.  So could we reduce the cost of this by putting the
> VM_BUG_ON_PAGE parts into __page_ref_mod() et al?  Yes, we'd need to
> change the arguments to those functions to pass in old & new, but that
> should be a cheap change compared to embedding the VM_BUG_ON_PAGE.

This is not only about chasing a bug. This is also about preventing
the memory corruption and information leaks that are caused by
ref_count bugs.
Several months ago a memory corruption bug was discovered by accident:
an engineer was studying a process core from a production system and
noticed that some memory did not look like it belonged to the original
process. We tried to manually reproduce that bug but failed. However,
later analysis by our team explained that the problem occurred due to a
ref_count bug in Linux, and the bug itself was root caused and fixed
(mentioned in the cover letter).  This work would have prevented
similar ref_count bugs from escalating into memory corruption.

Pasha


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v3 1/9] mm: add overflow and underflow checks for page->_refcount
  2022-01-26 19:22     ` Pasha Tatashin
@ 2022-01-26 19:45       ` Matthew Wilcox
  2022-01-26 22:40         ` Pasha Tatashin
  2022-01-27 18:27       ` Vlastimil Babka
  1 sibling, 1 reply; 18+ messages in thread
From: Matthew Wilcox @ 2022-01-26 19:45 UTC (permalink / raw)
  To: Pasha Tatashin
  Cc: LKML, linux-mm, linux-m68k, Anshuman Khandual, Andrew Morton,
	william.kucharski, Mike Kravetz, Vlastimil Babka,
	Geert Uytterhoeven, schmitzmic, Steven Rostedt, Ingo Molnar,
	Johannes Weiner, Roman Gushchin, Muchun Song, Wei Xu,
	Greg Thelen, David Rientjes, Paul Turner, Hugh Dickins

On Wed, Jan 26, 2022 at 02:22:26PM -0500, Pasha Tatashin wrote:
> On Wed, Jan 26, 2022 at 1:59 PM Matthew Wilcox <willy@infradead.org> wrote:
> >
> > On Wed, Jan 26, 2022 at 06:34:21PM +0000, Pasha Tatashin wrote:
> > > The problems with page->_refcount are hard to debug, because usually
> > > when they are detected, the damage has occurred a long time ago. Yet,
> > > the problems with invalid page refcount may be catastrophic and lead to
> > > memory corruptions.
> > >
> > > Reduce the scope of when the _refcount problems manifest themselves by
> > > adding checks for underflows and overflows into functions that modify
> > > _refcount.
> >
> > If you're chasing a bug like this, presumably you turn on page
> > tracepoints.  So could we reduce the cost of this by putting the
> > VM_BUG_ON_PAGE parts into __page_ref_mod() et al?  Yes, we'd need to
> > change the arguments to those functions to pass in old & new, but that
> > should be a cheap change compared to embedding the VM_BUG_ON_PAGE.
> 
> This is not only about chasing a bug. This is also about preventing
> the memory corruption and information leaks that are caused by
> ref_count bugs.
> Several months ago a memory corruption bug was discovered by accident:
> an engineer was studying a process core from a production system and
> noticed that some memory did not look like it belonged to the original
> process. We tried to manually reproduce that bug but failed. However,
> later analysis by our team explained that the problem occurred due to a
> ref_count bug in Linux, and the bug itself was root caused and fixed
> (mentioned in the cover letter).  This work would have prevented
> similar ref_count bugs from escalating into memory corruption.

But the VM_BUG_ON_PAGE tells us next to nothing useful.  To take
your first example [1] as the kind of thing you say this is going to
help fix:

1. Page p is allocated by thread a (refcount 1)
2. Thread b gets mistaken pointer to p
3. Thread b calls put_page(), __put_page(), page goes to memory
   allocator.
4. Thread c calls alloc_page(), also gets page p (refcount 1 again).
5. Thread a calls put_page(), __put_page()
6. Thread c calls put_page() and gets a VM_BUG_ON_PAGE.

How do we find thread b's involvement?  I don't think we can even see
thread a's involvement in all of this!  All we know is a backtrace
pointing to thread c, who is a completely innocent bystander.  I think
you have to enable page tracepoints to have any shot at finding thread
b's involvement.

[1] https://lore.kernel.org/stable/20211122171825.1582436-1-gthelen@google.com/


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v3 1/9] mm: add overflow and underflow checks for page->_refcount
  2022-01-26 19:45       ` Matthew Wilcox
@ 2022-01-26 22:40         ` Pasha Tatashin
  0 siblings, 0 replies; 18+ messages in thread
From: Pasha Tatashin @ 2022-01-26 22:40 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: LKML, linux-mm, linux-m68k, Anshuman Khandual, Andrew Morton,
	william.kucharski, Mike Kravetz, Vlastimil Babka,
	Geert Uytterhoeven, schmitzmic, Steven Rostedt, Ingo Molnar,
	Johannes Weiner, Roman Gushchin, Muchun Song, Wei Xu,
	Greg Thelen, David Rientjes, Paul Turner, Hugh Dickins

On Wed, Jan 26, 2022 at 2:45 PM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Wed, Jan 26, 2022 at 02:22:26PM -0500, Pasha Tatashin wrote:
> > On Wed, Jan 26, 2022 at 1:59 PM Matthew Wilcox <willy@infradead.org> wrote:
> > >
> > > On Wed, Jan 26, 2022 at 06:34:21PM +0000, Pasha Tatashin wrote:
> > > > The problems with page->_refcount are hard to debug, because usually
> > > > when they are detected, the damage has occurred a long time ago. Yet,
> > > > the problems with invalid page refcount may be catastrophic and lead to
> > > > memory corruptions.
> > > >
> > > > Reduce the scope of when the _refcount problems manifest themselves by
> > > > adding checks for underflows and overflows into functions that modify
> > > > _refcount.
> > >
> > > If you're chasing a bug like this, presumably you turn on page
> > > tracepoints.  So could we reduce the cost of this by putting the
> > > VM_BUG_ON_PAGE parts into __page_ref_mod() et al?  Yes, we'd need to
> > > change the arguments to those functions to pass in old & new, but that
> > > should be a cheap change compared to embedding the VM_BUG_ON_PAGE.
> >
> > This is not only about chasing a bug. This also about preventing
> > memory corruption and information leaking that are caused by ref_count
> > bugs from happening.
> > Several months ago a memory corruption bug was discovered by accident:
> > an engineer was studying a process core from a production system and
> > noticed that some memory does not look like it belongs to the original
> > process. We tried to manually reproduce that bug but failed. However,
> > later analysis by our team, explained that the problem occured due to
> > ref_count bug in Linux, and the bug itself was root caused and fixed
> > (mentioned in the cover letter).  This work would have prevented
> > similar ref_count bugs from yielding to the memory corruption
> > situation.
>
> But the VM_BUG_ON_PAGE tells us next to nothing useful.  To take
> your first example [1] as the kind of thing you say this is going to
> help fix:
>
> 1. Page p is allocated by thread a (refcount 1)
> 2. Thread b gets mistaken pointer to p

Thread b gets a mistaken pointer to p because of a bug in the kernel.
Different types of bugs can lead to such scenarios, and it is
probably not feasible to prevent all of them. However, one such
scenario is that we lose control of ref_count, and the page is then
incorrectly remapped or even copied (perhaps migrated) into another
address space.

While studying the logs of the machine on which the double mapping
occurred, we noticed that ref_count had underflowed. This was the
smoking gun for the problem, and that is why we concentrated our
search for the root cause of the memory leak on places where ref_count
can be incorrectly modified.

This patch series ensures that once we get to a situation where
ref_count for some reason becomes negative we panic immediately, as
there is a possibility that a leak can occur.

The second benefit of this series is that it makes the ref_count
changes contiguous: we never reset the value to 0; instead, we only
operate using offsets and add/sub operations. This helps with tracing
the history of ref_count via tracepoints.

> 3. Thread b calls put_page(), __put_page(), page goes to memory
>    allocator.
> 4. Thread c calls alloc_page(), also gets page p (refcount 1 again).
> 5. Thread a calls put_page(), __put_page()
> 6. Thread c calls put_page() and gets a VM_BUG_ON_PAGE.
>
> How do we find thread b's involvement?  I don't think we can even see
> thread a's involvement in all of this!  All we know is a backtrace
> pointing to thread c, who is a completely innocent bystander.  I think
> you have to enable page tracepoints to have any shot at finding thread
> b's involvement.

You are right, we cannot see thread b's involvement; we only get
a panic closer to the damage and hopefully before the leak occurs.
Again, this is just one of the mitigation techniques. Another one is
the page table check [2].

[2] https://lore.kernel.org/all/20211221154650.1047963-1-pasha.tatashin@soleen.com
>
> [1] https://lore.kernel.org/stable/20211122171825.1582436-1-gthelen@google.com/


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v3 1/9] mm: add overflow and underflow checks for page->_refcount
  2022-01-26 19:22     ` Pasha Tatashin
  2022-01-26 19:45       ` Matthew Wilcox
@ 2022-01-27 18:27       ` Vlastimil Babka
  2022-01-27 19:38         ` Pasha Tatashin
  1 sibling, 1 reply; 18+ messages in thread
From: Vlastimil Babka @ 2022-01-27 18:27 UTC (permalink / raw)
  To: Pasha Tatashin, Matthew Wilcox
  Cc: LKML, linux-mm, linux-m68k, Anshuman Khandual, Andrew Morton,
	william.kucharski, Mike Kravetz, Geert Uytterhoeven, schmitzmic,
	Steven Rostedt, Ingo Molnar, Johannes Weiner, Roman Gushchin,
	Muchun Song, Wei Xu, Greg Thelen, David Rientjes, Paul Turner,
	Hugh Dickins

On 1/26/22 20:22, Pasha Tatashin wrote:
> On Wed, Jan 26, 2022 at 1:59 PM Matthew Wilcox <willy@infradead.org> wrote:
>>
>> On Wed, Jan 26, 2022 at 06:34:21PM +0000, Pasha Tatashin wrote:
>> > The problems with page->_refcount are hard to debug, because usually
>> > when they are detected, the damage has occurred a long time ago. Yet,
>> > the problems with invalid page refcount may be catastrophic and lead to
>> > memory corruptions.
>> >
>> > Reduce the scope of when the _refcount problems manifest themselves by
>> > adding checks for underflows and overflows into functions that modify
>> > _refcount.
>>
>> If you're chasing a bug like this, presumably you turn on page
>> tracepoints.  So could we reduce the cost of this by putting the
>> VM_BUG_ON_PAGE parts into __page_ref_mod() et al?  Yes, we'd need to
>> change the arguments to those functions to pass in old & new, but that
>> should be a cheap change compared to embedding the VM_BUG_ON_PAGE.
> 
> This is not only about chasing a bug. This is also about preventing
> the memory corruption and information leaks that are caused by
> ref_count bugs.

So you mean it as a security hardening feature, not just debugging? To me
it's dubious to put security hardening under CONFIG_DEBUG_VM. I think it's
just Fedora that uses DEBUG_VM in general production kernels?

> Several months ago a memory corruption bug was discovered by accident:
> an engineer was studying a process core from a production system and
> noticed that some memory did not look like it belonged to the original
> process. We tried to manually reproduce that bug but failed. However,
> later analysis by our team explained that the problem occurred due to
> a ref_count bug in Linux, and the bug itself was root caused and fixed
> (mentioned in the cover letter). This work would have prevented
> similar ref_count bugs from leading to the same memory corruption
> situation.
> 
> Pasha
> 



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v3 1/9] mm: add overflow and underflow checks for page->_refcount
  2022-01-26 18:34 ` [PATCH v3 1/9] mm: add overflow and underflow checks for page->_refcount Pasha Tatashin
  2022-01-26 18:59   ` Matthew Wilcox
@ 2022-01-27 18:30   ` Vlastimil Babka
  2022-01-27 19:42     ` Pasha Tatashin
  1 sibling, 1 reply; 18+ messages in thread
From: Vlastimil Babka @ 2022-01-27 18:30 UTC (permalink / raw)
  To: Pasha Tatashin, linux-kernel, linux-mm, linux-m68k,
	anshuman.khandual, willy, akpm, william.kucharski, mike.kravetz,
	geert, schmitzmic, rostedt, mingo, hannes, guro, songmuchun,
	weixugc, gthelen, rientjes, pjt, hughd

On 1/26/22 19:34, Pasha Tatashin wrote:
> The problems with page->_refcount are hard to debug, because usually
> when they are detected, the damage has occurred a long time ago. Yet,
> the problems with invalid page refcount may be catastrophic and lead to
> memory corruptions.
> 
> Reduce the scope of when the _refcount problems manifest themselves by
> adding checks for underflows and overflows into functions that modify
> _refcount.
> 
> Use atomic_fetch_* functions to get the old values of the _refcount,
> and use it to check for overflow/underflow.
> 
> Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
> ---
>  include/linux/page_ref.h | 59 +++++++++++++++++++++++++++++-----------
>  1 file changed, 43 insertions(+), 16 deletions(-)
> 
> diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
> index 2e677e6ad09f..fe4864f7f69c 100644
> --- a/include/linux/page_ref.h
> +++ b/include/linux/page_ref.h
> @@ -117,7 +117,10 @@ static inline void init_page_count(struct page *page)
>  
>  static inline void page_ref_add(struct page *page, int nr)
>  {
> -	atomic_add(nr, &page->_refcount);
> +	int old_val = atomic_fetch_add(nr, &page->_refcount);
> +	int new_val = old_val + nr;
> +
> +	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);

This seems somewhat weird, as it will trigger not just on overflow, but also
if nr is negative, which I think is valid usage, even though the function
has 'add' in its name, because 'nr' is signed?
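
For example (a made-up caller, I did not find one in the tree), with
the check quoted above:

	page_ref_add(page, -2);		/* refcount 3 -> 1, nothing wraps */

old_val is 3 and new_val is 1, so (unsigned int)1 < (unsigned int)3 and
VM_BUG_ON_PAGE() fires even though no overflow happened.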



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v3 1/9] mm: add overflow and underflow checks for page->_refcount
  2022-01-27 18:27       ` Vlastimil Babka
@ 2022-01-27 19:38         ` Pasha Tatashin
  0 siblings, 0 replies; 18+ messages in thread
From: Pasha Tatashin @ 2022-01-27 19:38 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Matthew Wilcox, LKML, linux-mm, linux-m68k, Anshuman Khandual,
	Andrew Morton, william.kucharski, Mike Kravetz,
	Geert Uytterhoeven, schmitzmic, Steven Rostedt, Ingo Molnar,
	Johannes Weiner, Roman Gushchin, Muchun Song, Wei Xu,
	Greg Thelen, David Rientjes, Paul Turner, Hugh Dickins

> > This is not only about chasing a bug. This is also about preventing
> > memory corruption and information leaks that are caused by ref_count
> > bugs.
>
> So you mean it like a security hardening feature, not just debugging? To me
> it's dubious to put security hardening under CONFIG_DEBUG_VM. I think it's
> just Fedora that uses DEBUG_VM in general production kernels?

In our (Google) internal kernel, I added another macro,
PAGE_REF_BUG(cond, page), to replace VM_BUG_ON_PAGE() in page_ref.h.
The new macro keeps the asserts always enabled. I was thinking of
adding something like this to the upstream kernel as well; however, I
am worried about the performance implications of having extra
conditions in these routines, so I think we would need yet another
config option which decouples DEBUG_VM from the security-crucial VM
asserts. To avoid a controversial discussion, I decided not to do
this as part of this series and perhaps do it as follow-up work.
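
Roughly along these lines (a sketch of the idea only, not the exact
internal definition):

	/* Always-enabled assert for page_ref.h, independent of DEBUG_VM. */
	#define PAGE_REF_BUG(cond, page)				\
	do {								\
		if (unlikely(cond)) {					\
			dump_page(page, "PAGE_REF_BUG(" #cond ")");	\
			BUG();						\
		}							\
	} while (0)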

Pasha


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v3 1/9] mm: add overflow and underflow checks for page->_refcount
  2022-01-27 18:30   ` Vlastimil Babka
@ 2022-01-27 19:42     ` Pasha Tatashin
  0 siblings, 0 replies; 18+ messages in thread
From: Pasha Tatashin @ 2022-01-27 19:42 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: LKML, linux-mm, linux-m68k, Anshuman Khandual, Matthew Wilcox,
	Andrew Morton, william.kucharski, Mike Kravetz,
	Geert Uytterhoeven, schmitzmic, Steven Rostedt, Ingo Molnar,
	Johannes Weiner, Roman Gushchin, Muchun Song, Wei Xu,
	Greg Thelen, David Rientjes, Paul Turner, Hugh Dickins

> > diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
> > index 2e677e6ad09f..fe4864f7f69c 100644
> > --- a/include/linux/page_ref.h
> > +++ b/include/linux/page_ref.h
> > @@ -117,7 +117,10 @@ static inline void init_page_count(struct page *page)
> >
> >  static inline void page_ref_add(struct page *page, int nr)
> >  {
> > -     atomic_add(nr, &page->_refcount);
> > +     int old_val = atomic_fetch_add(nr, &page->_refcount);
> > +     int new_val = old_val + nr;
> > +
> > +     VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
>
> This seems somewhat weird, as it will trigger not just on overflow, but also
> if nr is negative, which I think is valid usage, even though the function
> has 'add' in its name, because 'nr' is signed?

I have not found any places in the mainline kernel where nr is
negative in page_ref_add(). By adding this assert we ensure that when
'add' shows up in a backtrace the refcount has actually increased, and
when page_ref_sub() shows up it has decreased. It is strange to have
both functions and yet allow them to do the opposite. We can also
change the type of 'nr' to unsigned.
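
In other words (illustrative only), a caller that wants to drop two
references would be expected to spell that as:

	page_ref_sub(page, 2);		/* assert: the count went down */

rather than:

	page_ref_add(page, -2);		/* would now trip the 'add' assert */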

Pasha


^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2022-01-27 19:42 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-01-26 18:34 [PATCH v3 0/9] Hardening page _refcount Pasha Tatashin
2022-01-26 18:34 ` [PATCH v3 1/9] mm: add overflow and underflow checks for page->_refcount Pasha Tatashin
2022-01-26 18:59   ` Matthew Wilcox
2022-01-26 19:22     ` Pasha Tatashin
2022-01-26 19:45       ` Matthew Wilcox
2022-01-26 22:40         ` Pasha Tatashin
2022-01-27 18:27       ` Vlastimil Babka
2022-01-27 19:38         ` Pasha Tatashin
2022-01-27 18:30   ` Vlastimil Babka
2022-01-27 19:42     ` Pasha Tatashin
2022-01-26 18:34 ` [PATCH v3 2/9] mm: Avoid using set_page_count() in set_page_recounted() Pasha Tatashin
2022-01-26 18:34 ` [PATCH v3 3/9] mm: remove set_page_count() from page_frag_alloc_align Pasha Tatashin
2022-01-26 18:34 ` [PATCH v3 4/9] mm: avoid using set_page_count() when pages are freed into allocator Pasha Tatashin
2022-01-26 18:34 ` [PATCH v3 5/9] mm: rename init_page_count() -> page_ref_init() Pasha Tatashin
2022-01-26 18:34 ` [PATCH v3 6/9] mm: remove set_page_count() Pasha Tatashin
2022-01-26 18:34 ` [PATCH v3 7/9] mm: simplify page_ref_* functions Pasha Tatashin
2022-01-26 18:34 ` [PATCH v3 8/9] mm: do not use atomic_set_release in page_ref_unfreeze() Pasha Tatashin
2022-01-26 18:34 ` [PATCH v3 9/9] mm: use atomic_cmpxchg_acquire in page_ref_freeze() Pasha Tatashin
