* [PATCH v2 0/9] Hardening page _refcount
From: Pasha Tatashin @ 2021-12-21 15:01 UTC (permalink / raw)
  To: pasha.tatashin, linux-kernel, linux-mm, linux-m68k,
	anshuman.khandual, willy, akpm, william.kucharski, mike.kravetz,
	vbabka, geert, schmitzmic, rostedt, mingo, hannes, guro,
	songmuchun, weixugc, gthelen, rientjes, pjt

From: Pasha Tatashin <tatashin@google.com>

Changelog:
v2:
- As suggested by Matthew Wilcox, removed the "mm: page_ref_add_unless()
  does not trace 'u' argument" patch, as page_ref_add_unless() is going
  away.
v1:
- sync with the latest linux-next
RFCv2:
- use the "fetch" variant instead of "return" of atomic instructions
- allow negative values, as we are using all 32 bits of _refcount.


It is hard to root-cause _refcount problems, because they usually
manifest only after the damage has occurred. Yet, they can lead to
catastrophic failures such as memory corruptions. A number of
refcount-related issues were discovered recently [1], [2], [3].

Improve debuggability by adding more checks that ensure that
page->_refcount never turns negative (i.e. no double free, no free
after freeze, etc.):

- Check for overflow and underflow right from the functions that
  modify _refcount
- Remove set_page_count(), so we do not unconditionally overwrite
  _refcount with an unrestrained value
- Trace return values in all functions that modify _refcount
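
For example, patch 1 changes page_ref_add() to fetch the old value and
verify the result of the modification:

	int old_val = atomic_fetch_add(nr, &page->_refcount);
	int new_val = old_val + nr;

	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);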

Applies against next-20211221.

Previous versions:
v1: https://lore.kernel.org/all/20211208203544.2297121-1-pasha.tatashin@soleen.com
RFCv2: https://lore.kernel.org/all/20211117012059.141450-1-pasha.tatashin@soleen.com
RFCv1: https://lore.kernel.org/all/20211026173822.502506-1-pasha.tatashin@soleen.com

[1] https://lore.kernel.org/all/xr9335nxwc5y.fsf@gthelen2.svl.corp.google.com
[2] https://lore.kernel.org/all/1582661774-30925-2-git-send-email-akaher@vmware.com
[3] https://lore.kernel.org/all/20210622021423.154662-3-mike.kravetz@oracle.com

Pasha Tatashin (9):
  mm: add overflow and underflow checks for page->_refcount
  mm: Avoid using set_page_count() in set_page_refcounted()
  mm: remove set_page_count() from page_frag_alloc_align
  mm: avoid using set_page_count() when pages are freed into allocator
  mm: rename init_page_count() -> page_ref_init()
  mm: remove set_page_count()
  mm: simplify page_ref_* functions
  mm: do not use atomic_set_release in page_ref_unfreeze()
  mm: use atomic_cmpxchg_acquire in page_ref_freeze().

 arch/m68k/mm/motorola.c         |   2 +-
 include/linux/mm.h              |   2 +-
 include/linux/page_ref.h        | 149 +++++++++++++++-----------------
 include/trace/events/page_ref.h |  58 ++++++++-----
 mm/debug_page_ref.c             |  22 +----
 mm/internal.h                   |   6 +-
 mm/page_alloc.c                 |  19 ++--
 7 files changed, 132 insertions(+), 126 deletions(-)

-- 
2.34.1.307.g9b7440fafd-goog



* [PATCH v2 1/9] mm: add overflow and underflow checks for page->_refcount
From: Pasha Tatashin @ 2021-12-21 15:01 UTC (permalink / raw)
  To: pasha.tatashin, linux-kernel, linux-mm, linux-m68k,
	anshuman.khandual, willy, akpm, william.kucharski, mike.kravetz,
	vbabka, geert, schmitzmic, rostedt, mingo, hannes, guro,
	songmuchun, weixugc, gthelen, rientjes, pjt

The problems with page->_refcount are hard to debug, because usually
by the time they are detected, the damage occurred long ago. Yet,
invalid page refcounts can be catastrophic and lead to memory
corruptions.

Narrow the window in which _refcount problems manifest themselves by
adding checks for underflows and overflows to the functions that
modify _refcount.

Use the atomic_fetch_* functions to get the old value of _refcount,
and use it to check for overflow/underflow.
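
As an illustration (not part of the patch), the check still permits
negative refcount values and fires only when the 32-bit counter
actually wraps:

	int old_val = 0;            /* double put on a freed page...   */
	int new_val = old_val - 1;  /* ...wraps to 0xffffffff unsigned */

	/* (unsigned int)new_val > (unsigned int)old_val: check fires */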

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 include/linux/page_ref.h | 59 +++++++++++++++++++++++++++++-----------
 1 file changed, 43 insertions(+), 16 deletions(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 2e677e6ad09f..fe4864f7f69c 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -117,7 +117,10 @@ static inline void init_page_count(struct page *page)
 
 static inline void page_ref_add(struct page *page, int nr)
 {
-	atomic_add(nr, &page->_refcount);
+	int old_val = atomic_fetch_add(nr, &page->_refcount);
+	int new_val = old_val + nr;
+
+	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod))
 		__page_ref_mod(page, nr);
 }
@@ -129,7 +132,10 @@ static inline void folio_ref_add(struct folio *folio, int nr)
 
 static inline void page_ref_sub(struct page *page, int nr)
 {
-	atomic_sub(nr, &page->_refcount);
+	int old_val = atomic_fetch_sub(nr, &page->_refcount);
+	int new_val = old_val - nr;
+
+	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod))
 		__page_ref_mod(page, -nr);
 }
@@ -141,11 +147,13 @@ static inline void folio_ref_sub(struct folio *folio, int nr)
 
 static inline int page_ref_sub_return(struct page *page, int nr)
 {
-	int ret = atomic_sub_return(nr, &page->_refcount);
+	int old_val = atomic_fetch_sub(nr, &page->_refcount);
+	int new_val = old_val - nr;
 
+	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod_and_return))
-		__page_ref_mod_and_return(page, -nr, ret);
-	return ret;
+		__page_ref_mod_and_return(page, -nr, new_val);
+	return new_val;
 }
 
 static inline int folio_ref_sub_return(struct folio *folio, int nr)
@@ -155,7 +163,10 @@ static inline int folio_ref_sub_return(struct folio *folio, int nr)
 
 static inline void page_ref_inc(struct page *page)
 {
-	atomic_inc(&page->_refcount);
+	int old_val = atomic_fetch_inc(&page->_refcount);
+	int new_val = old_val + 1;
+
+	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod))
 		__page_ref_mod(page, 1);
 }
@@ -167,7 +178,10 @@ static inline void folio_ref_inc(struct folio *folio)
 
 static inline void page_ref_dec(struct page *page)
 {
-	atomic_dec(&page->_refcount);
+	int old_val = atomic_fetch_dec(&page->_refcount);
+	int new_val = old_val - 1;
+
+	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod))
 		__page_ref_mod(page, -1);
 }
@@ -179,8 +193,11 @@ static inline void folio_ref_dec(struct folio *folio)
 
 static inline int page_ref_sub_and_test(struct page *page, int nr)
 {
-	int ret = atomic_sub_and_test(nr, &page->_refcount);
+	int old_val = atomic_fetch_sub(nr, &page->_refcount);
+	int new_val = old_val - nr;
+	int ret = new_val == 0;
 
+	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod_and_test))
 		__page_ref_mod_and_test(page, -nr, ret);
 	return ret;
@@ -193,11 +210,13 @@ static inline int folio_ref_sub_and_test(struct folio *folio, int nr)
 
 static inline int page_ref_inc_return(struct page *page)
 {
-	int ret = atomic_inc_return(&page->_refcount);
+	int old_val = atomic_fetch_inc(&page->_refcount);
+	int new_val = old_val + 1;
 
+	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod_and_return))
-		__page_ref_mod_and_return(page, 1, ret);
-	return ret;
+		__page_ref_mod_and_return(page, 1, new_val);
+	return new_val;
 }
 
 static inline int folio_ref_inc_return(struct folio *folio)
@@ -207,8 +226,11 @@ static inline int folio_ref_inc_return(struct folio *folio)
 
 static inline int page_ref_dec_and_test(struct page *page)
 {
-	int ret = atomic_dec_and_test(&page->_refcount);
+	int old_val = atomic_fetch_dec(&page->_refcount);
+	int new_val = old_val - 1;
+	int ret = new_val == 0;
 
+	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod_and_test))
 		__page_ref_mod_and_test(page, -1, ret);
 	return ret;
@@ -221,11 +243,13 @@ static inline int folio_ref_dec_and_test(struct folio *folio)
 
 static inline int page_ref_dec_return(struct page *page)
 {
-	int ret = atomic_dec_return(&page->_refcount);
+	int old_val = atomic_fetch_dec(&page->_refcount);
+	int new_val = old_val - 1;
 
+	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod_and_return))
-		__page_ref_mod_and_return(page, -1, ret);
-	return ret;
+		__page_ref_mod_and_return(page, -1, new_val);
+	return new_val;
 }
 
 static inline int folio_ref_dec_return(struct folio *folio)
@@ -235,8 +259,11 @@ static inline int folio_ref_dec_return(struct folio *folio)
 
 static inline bool page_ref_add_unless(struct page *page, int nr, int u)
 {
-	bool ret = atomic_add_unless(&page->_refcount, nr, u);
+	int old_val = atomic_fetch_add_unless(&page->_refcount, nr, u);
+	int new_val = old_val + nr;
+	int ret = old_val != u;
 
+	VM_BUG_ON_PAGE(ret && (unsigned int)new_val < (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod_unless))
 		__page_ref_mod_unless(page, nr, ret);
 	return ret;
-- 
2.34.1.307.g9b7440fafd-goog



* [PATCH v2 2/9] mm: Avoid using set_page_count() in set_page_refcounted()
From: Pasha Tatashin @ 2021-12-21 15:01 UTC (permalink / raw)
  To: pasha.tatashin, linux-kernel, linux-mm, linux-m68k,
	anshuman.khandual, willy, akpm, william.kucharski, mike.kravetz,
	vbabka, geert, schmitzmic, rostedt, mingo, hannes, guro,
	songmuchun, weixugc, gthelen, rientjes, pjt

set_page_refcounted() converts a non-refcounted page that has
(page->_refcount == 0) into a refcounted page by setting _refcount to
1.

The current approach uses the following logic:

VM_BUG_ON_PAGE(page_ref_count(page), page);
set_page_count(page, 1);

However, if _refcount changes from 0 to 1 between the VM_BUG_ON_PAGE()
and set_page_count() we can break _refcount, which can cause other
problems such as memory corruptions.

Instead, use a safer method: increment _refcount and verify that the
value at increment time was indeed 1.

refcnt = page_ref_inc_return(page);
VM_BUG_ON_PAGE(refcnt != 1, page);

Use page_ref_inc_return() to avoid unconditionally overwriting
the _refcount value with set_page_count(), and check the return value.

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 mm/internal.h | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index deb9bda18e59..4d45ef2ffea6 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -162,9 +162,11 @@ static inline bool page_evictable(struct page *page)
  */
 static inline void set_page_refcounted(struct page *page)
 {
+	int refcnt;
+
 	VM_BUG_ON_PAGE(PageTail(page), page);
-	VM_BUG_ON_PAGE(page_ref_count(page), page);
-	set_page_count(page, 1);
+	refcnt = page_ref_inc_return(page);
+	VM_BUG_ON_PAGE(refcnt != 1, page);
 }
 
 extern unsigned long highest_memmap_pfn;
-- 
2.34.1.307.g9b7440fafd-goog



* [PATCH v2 3/9] mm: remove set_page_count() from page_frag_alloc_align
From: Pasha Tatashin @ 2021-12-21 15:01 UTC (permalink / raw)
  To: pasha.tatashin, linux-kernel, linux-mm, linux-m68k,
	anshuman.khandual, willy, akpm, william.kucharski, mike.kravetz,
	vbabka, geert, schmitzmic, rostedt, mingo, hannes, guro,
	songmuchun, weixugc, gthelen, rientjes, pjt

set_page_count() unconditionally resets the value of _refcount, which
is dangerous because it is not programmatically verified. Instead we
rely on comments like: "OK, page count is 0, we can safely set it".

Add a new refcount function, page_ref_add_return(), that returns the
new refcount value after the addition. Use the return value to verify
that _refcount was indeed the expected one.
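
The intended pattern, as applied in page_frag_alloc_align() below, is:

	refcnt = page_ref_add_return(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
	VM_BUG_ON_PAGE(refcnt != PAGE_FRAG_CACHE_MAX_SIZE + 1, page);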

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 include/linux/page_ref.h | 11 +++++++++++
 mm/page_alloc.c          |  6 ++++--
 2 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index fe4864f7f69c..03e21ce2f1bd 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -115,6 +115,17 @@ static inline void init_page_count(struct page *page)
 	set_page_count(page, 1);
 }
 
+static inline int page_ref_add_return(struct page *page, int nr)
+{
+	int old_val = atomic_fetch_add(nr, &page->_refcount);
+	int new_val = old_val + nr;
+
+	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
+	if (page_ref_tracepoint_active(page_ref_mod_and_return))
+		__page_ref_mod_and_return(page, nr, new_val);
+	return new_val;
+}
+
 static inline void page_ref_add(struct page *page, int nr)
 {
 	int old_val = atomic_fetch_add(nr, &page->_refcount);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index edfd6c81af82..b5554767b9de 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5523,6 +5523,7 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 	unsigned int size = PAGE_SIZE;
 	struct page *page;
 	int offset;
+	int refcnt;
 
 	if (unlikely(!nc->va)) {
 refill:
@@ -5561,8 +5562,9 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 		/* if size can vary use size else just use PAGE_SIZE */
 		size = nc->size;
 #endif
-		/* OK, page count is 0, we can safely set it */
-		set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
+		/* page count is 0, set it to PAGE_FRAG_CACHE_MAX_SIZE + 1 */
+		refcnt = page_ref_add_return(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
+		VM_BUG_ON_PAGE(refcnt != PAGE_FRAG_CACHE_MAX_SIZE + 1, page);
 
 		/* reset page count bias and offset to start of new frag */
 		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-- 
2.34.1.307.g9b7440fafd-goog



* [PATCH v2 4/9] mm: avoid using set_page_count() when pages are freed into allocator
From: Pasha Tatashin @ 2021-12-21 15:01 UTC (permalink / raw)
  To: pasha.tatashin, linux-kernel, linux-mm, linux-m68k,
	anshuman.khandual, willy, akpm, william.kucharski, mike.kravetz,
	vbabka, geert, schmitzmic, rostedt, mingo, hannes, guro,
	songmuchun, weixugc, gthelen, rientjes, pjt

When struct pages are first initialized, the page->_refcount field is
set to 1. However, later, when pages are freed into the allocator, we
set _refcount to 0 via set_page_count(). Unconditionally resetting
_refcount is dangerous.

Instead, use page_ref_dec_return(), and verify that the _refcount is
what is expected.
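
Each page is expected to go from the refcount of 1, set by
__init_single_page(), down to 0 as it is released to the allocator:

	refcnt = page_ref_dec_return(p);
	VM_BUG_ON_PAGE(refcnt, p);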

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 mm/page_alloc.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b5554767b9de..13d989d62012 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1667,6 +1667,7 @@ void __free_pages_core(struct page *page, unsigned int order)
 	unsigned int nr_pages = 1 << order;
 	struct page *p = page;
 	unsigned int loop;
+	int refcnt;
 
 	/*
 	 * When initializing the memmap, __init_single_page() sets the refcount
@@ -1677,10 +1678,12 @@ void __free_pages_core(struct page *page, unsigned int order)
 	for (loop = 0; loop < (nr_pages - 1); loop++, p++) {
 		prefetchw(p + 1);
 		__ClearPageReserved(p);
-		set_page_count(p, 0);
+		refcnt = page_ref_dec_return(p);
+		VM_BUG_ON_PAGE(refcnt, p);
 	}
 	__ClearPageReserved(p);
-	set_page_count(p, 0);
+	refcnt = page_ref_dec_return(p);
+	VM_BUG_ON_PAGE(refcnt, p);
 
 	atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
 
@@ -2252,10 +2255,12 @@ void __init init_cma_reserved_pageblock(struct page *page)
 {
 	unsigned i = pageblock_nr_pages;
 	struct page *p = page;
+	int refcnt;
 
 	do {
 		__ClearPageReserved(p);
-		set_page_count(p, 0);
+		refcnt = page_ref_dec_return(p);
+		VM_BUG_ON_PAGE(refcnt, p);
 	} while (++p, --i);
 
 	set_pageblock_migratetype(page, MIGRATE_CMA);
-- 
2.34.1.307.g9b7440fafd-goog



* [PATCH v2 5/9] mm: rename init_page_count() -> page_ref_init()
From: Pasha Tatashin @ 2021-12-21 15:01 UTC (permalink / raw)
  To: pasha.tatashin, linux-kernel, linux-mm, linux-m68k,
	anshuman.khandual, willy, akpm, william.kucharski, mike.kravetz,
	vbabka, geert, schmitzmic, rostedt, mingo, hannes, guro,
	songmuchun, weixugc, gthelen, rientjes, pjt

Now that set_page_count() is no longer called from outside and is
about to be removed, init_page_count() is the only function that is
going to be used to unconditionally set _refcount; however, it is
restricted to setting it only to 1.

Align init_page_count() with the other page_ref_* functions by
renaming it to page_ref_init().

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
---
 arch/m68k/mm/motorola.c  |  2 +-
 include/linux/mm.h       |  2 +-
 include/linux/page_ref.h | 10 +++++++---
 mm/page_alloc.c          |  2 +-
 4 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c
index ecbe948f4c1a..dd3b77d03d5c 100644
--- a/arch/m68k/mm/motorola.c
+++ b/arch/m68k/mm/motorola.c
@@ -133,7 +133,7 @@ void __init init_pointer_table(void *table, int type)
 
 	/* unreserve the page so it's possible to free that page */
 	__ClearPageReserved(PD_PAGE(dp));
-	init_page_count(PD_PAGE(dp));
+	page_ref_init(PD_PAGE(dp));
 
 	return;
 }
diff --git a/include/linux/mm.h b/include/linux/mm.h
index d211a06784d5..fae3b6ef66a5 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2451,7 +2451,7 @@ extern void reserve_bootmem_region(phys_addr_t start, phys_addr_t end);
 static inline void free_reserved_page(struct page *page)
 {
 	ClearPageReserved(page);
-	init_page_count(page);
+	page_ref_init(page);
 	__free_page(page);
 	adjust_managed_page_count(page, 1);
 }
diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 03e21ce2f1bd..1af12a0d7ba1 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -107,10 +107,14 @@ static inline void folio_set_count(struct folio *folio, int v)
 }
 
 /*
- * Setup the page count before being freed into the page allocator for
- * the first time (boot or memory hotplug)
+ * Setup the page refcount to one before being freed into the page allocator.
+ * The memory might not be initialized and therefore there cannot be any
+ * assumptions about the current value of page->_refcount. This call should be
+ * done during boot when memory is being initialized, during memory hotplug
+ * when new memory is added, or when previously reserved memory is
+ * unreserved, i.e. the first time the kernel takes control of it.
  */
-static inline void init_page_count(struct page *page)
+static inline void page_ref_init(struct page *page)
 {
 	set_page_count(page, 1);
 }
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 13d989d62012..000c057a2d24 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1569,7 +1569,7 @@ static void __meminit __init_single_page(struct page *page, unsigned long pfn,
 {
 	mm_zero_struct_page(page);
 	set_page_links(page, zone, nid, pfn);
-	init_page_count(page);
+	page_ref_init(page);
 	page_mapcount_reset(page);
 	page_cpupid_reset_last(page);
 	page_kasan_tag_reset(page);
-- 
2.34.1.307.g9b7440fafd-goog



* [PATCH v2 6/9] mm: remove set_page_count()
From: Pasha Tatashin @ 2021-12-21 15:01 UTC (permalink / raw)
  To: pasha.tatashin, linux-kernel, linux-mm, linux-m68k,
	anshuman.khandual, willy, akpm, william.kucharski, mike.kravetz,
	vbabka, geert, schmitzmic, rostedt, mingo, hannes, guro,
	songmuchun, weixugc, gthelen, rientjes, pjt

set_page_count() is dangerous because it resets _refcount to an
arbitrary value. Instead, we now initialize _refcount to 1 only once,
and the rest of the time we use add/dec/cmpxchg, keeping a continuous
track of the counter.

Remove set_page_count() and add new tracing hooks to page_ref_init().
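
An illustrative sketch (not code from this patch) of the resulting
_refcount lifecycle across this series:

	page_ref_init(page);        /* boot or hotplug: _refcount = 1   */
	page_ref_dec_return(page);  /* freed into the allocator: 1 -> 0 */
	set_page_refcounted(page);  /* allocated from the buddy: 0 -> 1 */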

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 include/linux/page_ref.h        | 27 ++++++++-----------
 include/trace/events/page_ref.h | 46 ++++++++++++++++++++++++++++-----
 mm/debug_page_ref.c             |  8 +++---
 3 files changed, 54 insertions(+), 27 deletions(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 1af12a0d7ba1..d7316881626c 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -7,7 +7,7 @@
 #include <linux/page-flags.h>
 #include <linux/tracepoint-defs.h>
 
-DECLARE_TRACEPOINT(page_ref_set);
+DECLARE_TRACEPOINT(page_ref_init);
 DECLARE_TRACEPOINT(page_ref_mod);
 DECLARE_TRACEPOINT(page_ref_mod_and_test);
 DECLARE_TRACEPOINT(page_ref_mod_and_return);
@@ -26,7 +26,7 @@ DECLARE_TRACEPOINT(page_ref_unfreeze);
  */
 #define page_ref_tracepoint_active(t) tracepoint_enabled(t)
 
-extern void __page_ref_set(struct page *page, int v);
+extern void __page_ref_init(struct page *page);
 extern void __page_ref_mod(struct page *page, int v);
 extern void __page_ref_mod_and_test(struct page *page, int v, int ret);
 extern void __page_ref_mod_and_return(struct page *page, int v, int ret);
@@ -38,7 +38,7 @@ extern void __page_ref_unfreeze(struct page *page, int v);
 
 #define page_ref_tracepoint_active(t) false
 
-static inline void __page_ref_set(struct page *page, int v)
+static inline void __page_ref_init(struct page *page)
 {
 }
 static inline void __page_ref_mod(struct page *page, int v)
@@ -94,18 +94,6 @@ static inline int page_count(const struct page *page)
 	return folio_ref_count(page_folio(page));
 }
 
-static inline void set_page_count(struct page *page, int v)
-{
-	atomic_set(&page->_refcount, v);
-	if (page_ref_tracepoint_active(page_ref_set))
-		__page_ref_set(page, v);
-}
-
-static inline void folio_set_count(struct folio *folio, int v)
-{
-	set_page_count(&folio->page, v);
-}
-
 /*
  * Setup the page refcount to one before being freed into the page allocator.
  * The memory might not be initialized and therefore there cannot be any
@@ -116,7 +104,14 @@ static inline void folio_set_count(struct folio *folio, int v)
  */
 static inline void page_ref_init(struct page *page)
 {
-	set_page_count(page, 1);
+	atomic_set(&page->_refcount, 1);
+	if (page_ref_tracepoint_active(page_ref_init))
+		__page_ref_init(page);
+}
+
+static inline void folio_ref_init(struct folio *folio)
+{
+	page_ref_init(&folio->page);
 }
 
 static inline int page_ref_add_return(struct page *page, int nr)
diff --git a/include/trace/events/page_ref.h b/include/trace/events/page_ref.h
index 8a99c1cd417b..87551bb1df9e 100644
--- a/include/trace/events/page_ref.h
+++ b/include/trace/events/page_ref.h
@@ -10,6 +10,45 @@
 #include <linux/tracepoint.h>
 #include <trace/events/mmflags.h>
 
+DECLARE_EVENT_CLASS(page_ref_init_template,
+
+	TP_PROTO(struct page *page),
+
+	TP_ARGS(page),
+
+	TP_STRUCT__entry(
+		__field(unsigned long, pfn)
+		__field(unsigned long, flags)
+		__field(int, count)
+		__field(int, mapcount)
+		__field(void *, mapping)
+		__field(int, mt)
+		__field(int, val)
+	),
+
+	TP_fast_assign(
+		__entry->pfn = page_to_pfn(page);
+		__entry->flags = page->flags;
+		__entry->count = page_ref_count(page);
+		__entry->mapcount = page_mapcount(page);
+		__entry->mapping = page->mapping;
+		__entry->mt = get_pageblock_migratetype(page);
+	),
+
+	TP_printk("pfn=0x%lx flags=%s count=%d mapcount=%d mapping=%p mt=%d",
+		__entry->pfn,
+		show_page_flags(__entry->flags & PAGEFLAGS_MASK),
+		__entry->count,
+		__entry->mapcount, __entry->mapping, __entry->mt)
+);
+
+DEFINE_EVENT(page_ref_init_template, page_ref_init,
+
+	TP_PROTO(struct page *page),
+
+	TP_ARGS(page)
+);
+
 DECLARE_EVENT_CLASS(page_ref_mod_template,
 
 	TP_PROTO(struct page *page, int v),
@@ -44,13 +83,6 @@ DECLARE_EVENT_CLASS(page_ref_mod_template,
 		__entry->val)
 );
 
-DEFINE_EVENT(page_ref_mod_template, page_ref_set,
-
-	TP_PROTO(struct page *page, int v),
-
-	TP_ARGS(page, v)
-);
-
 DEFINE_EVENT(page_ref_mod_template, page_ref_mod,
 
 	TP_PROTO(struct page *page, int v),
diff --git a/mm/debug_page_ref.c b/mm/debug_page_ref.c
index f3b2c9d3ece2..e32149734122 100644
--- a/mm/debug_page_ref.c
+++ b/mm/debug_page_ref.c
@@ -5,12 +5,12 @@
 #define CREATE_TRACE_POINTS
 #include <trace/events/page_ref.h>
 
-void __page_ref_set(struct page *page, int v)
+void __page_ref_init(struct page *page)
 {
-	trace_page_ref_set(page, v);
+	trace_page_ref_init(page);
 }
-EXPORT_SYMBOL(__page_ref_set);
-EXPORT_TRACEPOINT_SYMBOL(page_ref_set);
+EXPORT_SYMBOL(__page_ref_init);
+EXPORT_TRACEPOINT_SYMBOL(page_ref_init);
 
 void __page_ref_mod(struct page *page, int v)
 {
-- 
2.34.1.307.g9b7440fafd-goog



* [PATCH v2 7/9] mm: simplify page_ref_* functions
From: Pasha Tatashin @ 2021-12-21 15:01 UTC (permalink / raw)
  To: pasha.tatashin, linux-kernel, linux-mm, linux-m68k,
	anshuman.khandual, willy, akpm, william.kucharski, mike.kravetz,
	vbabka, geert, schmitzmic, rostedt, mingo, hannes, guro,
	songmuchun, weixugc, gthelen, rientjes, pjt

Now that we are using the atomic_fetch* variants to add/sub/inc/dec
page _refcount, it makes sense to combine the page_ref_* return and
non-return functions.

Also remove some extra tracepoints for the non-return variants. This
improves traceability by always recording the new _refcount value
after the modification has occurred.
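
The non-return variants now simply delegate to their *_return
counterparts, for example:

	static inline void page_ref_add(struct page *page, int nr)
	{
		page_ref_add_return(page, nr);
	}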

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 include/linux/page_ref.h        | 102 +++++++++-----------------------
 include/trace/events/page_ref.h |  18 +-----
 mm/debug_page_ref.c             |  14 -----
 3 files changed, 31 insertions(+), 103 deletions(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index d7316881626c..243fc60ae6c8 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -8,8 +8,6 @@
 #include <linux/tracepoint-defs.h>
 
 DECLARE_TRACEPOINT(page_ref_init);
-DECLARE_TRACEPOINT(page_ref_mod);
-DECLARE_TRACEPOINT(page_ref_mod_and_test);
 DECLARE_TRACEPOINT(page_ref_mod_and_return);
 DECLARE_TRACEPOINT(page_ref_mod_unless);
 DECLARE_TRACEPOINT(page_ref_freeze);
@@ -27,8 +25,6 @@ DECLARE_TRACEPOINT(page_ref_unfreeze);
 #define page_ref_tracepoint_active(t) tracepoint_enabled(t)
 
 extern void __page_ref_init(struct page *page);
-extern void __page_ref_mod(struct page *page, int v);
-extern void __page_ref_mod_and_test(struct page *page, int v, int ret);
 extern void __page_ref_mod_and_return(struct page *page, int v, int ret);
 extern void __page_ref_mod_unless(struct page *page, int v, int u);
 extern void __page_ref_freeze(struct page *page, int v, int ret);
@@ -41,12 +37,6 @@ extern void __page_ref_unfreeze(struct page *page, int v);
 static inline void __page_ref_init(struct page *page)
 {
 }
-static inline void __page_ref_mod(struct page *page, int v)
-{
-}
-static inline void __page_ref_mod_and_test(struct page *page, int v, int ret)
-{
-}
 static inline void __page_ref_mod_and_return(struct page *page, int v, int ret)
 {
 }
@@ -127,12 +117,7 @@ static inline int page_ref_add_return(struct page *page, int nr)
 
 static inline void page_ref_add(struct page *page, int nr)
 {
-	int old_val = atomic_fetch_add(nr, &page->_refcount);
-	int new_val = old_val + nr;
-
-	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
-	if (page_ref_tracepoint_active(page_ref_mod))
-		__page_ref_mod(page, nr);
+	page_ref_add_return(page, nr);
 }
 
 static inline void folio_ref_add(struct folio *folio, int nr)
@@ -140,30 +125,25 @@ static inline void folio_ref_add(struct folio *folio, int nr)
 	page_ref_add(&folio->page, nr);
 }
 
-static inline void page_ref_sub(struct page *page, int nr)
+static inline int page_ref_sub_return(struct page *page, int nr)
 {
 	int old_val = atomic_fetch_sub(nr, &page->_refcount);
 	int new_val = old_val - nr;
 
 	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
-	if (page_ref_tracepoint_active(page_ref_mod))
-		__page_ref_mod(page, -nr);
+	if (page_ref_tracepoint_active(page_ref_mod_and_return))
+		__page_ref_mod_and_return(page, -nr, new_val);
+	return new_val;
 }
 
-static inline void folio_ref_sub(struct folio *folio, int nr)
+static inline void page_ref_sub(struct page *page, int nr)
 {
-	page_ref_sub(&folio->page, nr);
+	page_ref_sub_return(page, nr);
 }
 
-static inline int page_ref_sub_return(struct page *page, int nr)
+static inline void folio_ref_sub(struct folio *folio, int nr)
 {
-	int old_val = atomic_fetch_sub(nr, &page->_refcount);
-	int new_val = old_val - nr;
-
-	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
-	if (page_ref_tracepoint_active(page_ref_mod_and_return))
-		__page_ref_mod_and_return(page, -nr, new_val);
-	return new_val;
+	page_ref_sub(&folio->page, nr);
 }
 
 static inline int folio_ref_sub_return(struct folio *folio, int nr)
@@ -171,14 +151,20 @@ static inline int folio_ref_sub_return(struct folio *folio, int nr)
 	return page_ref_sub_return(&folio->page, nr);
 }
 
-static inline void page_ref_inc(struct page *page)
+static inline int page_ref_inc_return(struct page *page)
 {
 	int old_val = atomic_fetch_inc(&page->_refcount);
 	int new_val = old_val + 1;
 
 	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
-	if (page_ref_tracepoint_active(page_ref_mod))
-		__page_ref_mod(page, 1);
+	if (page_ref_tracepoint_active(page_ref_mod_and_return))
+		__page_ref_mod_and_return(page, 1, new_val);
+	return new_val;
+}
+
+static inline void page_ref_inc(struct page *page)
+{
+	page_ref_inc_return(page);
 }
 
 static inline void folio_ref_inc(struct folio *folio)
@@ -186,14 +172,20 @@ static inline void folio_ref_inc(struct folio *folio)
 	page_ref_inc(&folio->page);
 }
 
-static inline void page_ref_dec(struct page *page)
+static inline int page_ref_dec_return(struct page *page)
 {
 	int old_val = atomic_fetch_dec(&page->_refcount);
 	int new_val = old_val - 1;
 
 	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
-	if (page_ref_tracepoint_active(page_ref_mod))
-		__page_ref_mod(page, -1);
+	if (page_ref_tracepoint_active(page_ref_mod_and_return))
+		__page_ref_mod_and_return(page, -1, new_val);
+	return new_val;
+}
+
+static inline void page_ref_dec(struct page *page)
+{
+	page_ref_dec_return(page);
 }
 
 static inline void folio_ref_dec(struct folio *folio)
@@ -203,14 +195,7 @@ static inline void folio_ref_dec(struct folio *folio)
 
 static inline int page_ref_sub_and_test(struct page *page, int nr)
 {
-	int old_val = atomic_fetch_sub(nr, &page->_refcount);
-	int new_val = old_val - nr;
-	int ret = new_val == 0;
-
-	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
-	if (page_ref_tracepoint_active(page_ref_mod_and_test))
-		__page_ref_mod_and_test(page, -nr, ret);
-	return ret;
+	return page_ref_sub_return(page, nr) == 0;
 }
 
 static inline int folio_ref_sub_and_test(struct folio *folio, int nr)
@@ -218,17 +203,6 @@ static inline int folio_ref_sub_and_test(struct folio *folio, int nr)
 	return page_ref_sub_and_test(&folio->page, nr);
 }
 
-static inline int page_ref_inc_return(struct page *page)
-{
-	int old_val = atomic_fetch_inc(&page->_refcount);
-	int new_val = old_val + 1;
-
-	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
-	if (page_ref_tracepoint_active(page_ref_mod_and_return))
-		__page_ref_mod_and_return(page, 1, new_val);
-	return new_val;
-}
-
 static inline int folio_ref_inc_return(struct folio *folio)
 {
 	return page_ref_inc_return(&folio->page);
@@ -236,14 +210,7 @@ static inline int folio_ref_inc_return(struct folio *folio)
 
 static inline int page_ref_dec_and_test(struct page *page)
 {
-	int old_val = atomic_fetch_dec(&page->_refcount);
-	int new_val = old_val - 1;
-	int ret = new_val == 0;
-
-	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
-	if (page_ref_tracepoint_active(page_ref_mod_and_test))
-		__page_ref_mod_and_test(page, -1, ret);
-	return ret;
+	return page_ref_dec_return(page) == 0;
 }
 
 static inline int folio_ref_dec_and_test(struct folio *folio)
@@ -251,17 +218,6 @@ static inline int folio_ref_dec_and_test(struct folio *folio)
 	return page_ref_dec_and_test(&folio->page);
 }
 
-static inline int page_ref_dec_return(struct page *page)
-{
-	int old_val = atomic_fetch_dec(&page->_refcount);
-	int new_val = old_val - 1;
-
-	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
-	if (page_ref_tracepoint_active(page_ref_mod_and_return))
-		__page_ref_mod_and_return(page, -1, new_val);
-	return new_val;
-}
-
 static inline int folio_ref_dec_return(struct folio *folio)
 {
 	return page_ref_dec_return(&folio->page);
diff --git a/include/trace/events/page_ref.h b/include/trace/events/page_ref.h
index 87551bb1df9e..35cd795aa7c6 100644
--- a/include/trace/events/page_ref.h
+++ b/include/trace/events/page_ref.h
@@ -49,7 +49,7 @@ DEFINE_EVENT(page_ref_init_template, page_ref_init,
 	TP_ARGS(page)
 );
 
-DECLARE_EVENT_CLASS(page_ref_mod_template,
+DECLARE_EVENT_CLASS(page_ref_unfreeze_template,
 
 	TP_PROTO(struct page *page, int v),
 
@@ -83,13 +83,6 @@ DECLARE_EVENT_CLASS(page_ref_mod_template,
 		__entry->val)
 );
 
-DEFINE_EVENT(page_ref_mod_template, page_ref_mod,
-
-	TP_PROTO(struct page *page, int v),
-
-	TP_ARGS(page, v)
-);
-
 DECLARE_EVENT_CLASS(page_ref_mod_and_test_template,
 
 	TP_PROTO(struct page *page, int v, int ret),
@@ -126,13 +119,6 @@ DECLARE_EVENT_CLASS(page_ref_mod_and_test_template,
 		__entry->val, __entry->ret)
 );
 
-DEFINE_EVENT(page_ref_mod_and_test_template, page_ref_mod_and_test,
-
-	TP_PROTO(struct page *page, int v, int ret),
-
-	TP_ARGS(page, v, ret)
-);
-
 DEFINE_EVENT(page_ref_mod_and_test_template, page_ref_mod_and_return,
 
 	TP_PROTO(struct page *page, int v, int ret),
@@ -154,7 +140,7 @@ DEFINE_EVENT(page_ref_mod_and_test_template, page_ref_freeze,
 	TP_ARGS(page, v, ret)
 );
 
-DEFINE_EVENT(page_ref_mod_template, page_ref_unfreeze,
+DEFINE_EVENT(page_ref_unfreeze_template, page_ref_unfreeze,
 
 	TP_PROTO(struct page *page, int v),
 
diff --git a/mm/debug_page_ref.c b/mm/debug_page_ref.c
index e32149734122..1de9d93cca25 100644
--- a/mm/debug_page_ref.c
+++ b/mm/debug_page_ref.c
@@ -12,20 +12,6 @@ void __page_ref_init(struct page *page)
 EXPORT_SYMBOL(__page_ref_init);
 EXPORT_TRACEPOINT_SYMBOL(page_ref_init);
 
-void __page_ref_mod(struct page *page, int v)
-{
-	trace_page_ref_mod(page, v);
-}
-EXPORT_SYMBOL(__page_ref_mod);
-EXPORT_TRACEPOINT_SYMBOL(page_ref_mod);
-
-void __page_ref_mod_and_test(struct page *page, int v, int ret)
-{
-	trace_page_ref_mod_and_test(page, v, ret);
-}
-EXPORT_SYMBOL(__page_ref_mod_and_test);
-EXPORT_TRACEPOINT_SYMBOL(page_ref_mod_and_test);
-
 void __page_ref_mod_and_return(struct page *page, int v, int ret)
 {
 	trace_page_ref_mod_and_return(page, v, ret);
-- 
2.34.1.307.g9b7440fafd-goog



* [PATCH v2 8/9] mm: do not use atomic_set_release in page_ref_unfreeze()
From: Pasha Tatashin @ 2021-12-21 15:01 UTC (permalink / raw)
  To: pasha.tatashin, linux-kernel, linux-mm, linux-m68k,
	anshuman.khandual, willy, akpm, william.kucharski, mike.kravetz,
	vbabka, geert, schmitzmic, rostedt, mingo, hannes, guro,
	songmuchun, weixugc, gthelen, rientjes, pjt

In page_ref_unfreeze() we set _refcount to the new value after
verifying that the old value was indeed 0.

VM_BUG_ON_PAGE(page_count(page) != 0, page);
< the _refcount may change here >
atomic_set_release(&page->_refcount, count);

To avoid the small gap where _refcount may change, let's verify the
value of _refcount at the time of the set operation.

Use atomic_xchg_release(), and verify at set time that the value
was 0.
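
The resulting pattern, as in the diff below:

	int old_val = atomic_xchg_release(&page->_refcount, count);

	VM_BUG_ON_PAGE(count == 0 || old_val != 0, page);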

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 include/linux/page_ref.h | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 243fc60ae6c8..9efabeff4e06 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -322,10 +322,9 @@ static inline int folio_ref_freeze(struct folio *folio, int count)
 
 static inline void page_ref_unfreeze(struct page *page, int count)
 {
-	VM_BUG_ON_PAGE(page_count(page) != 0, page);
-	VM_BUG_ON(count == 0);
+	int old_val = atomic_xchg_release(&page->_refcount, count);
 
-	atomic_set_release(&page->_refcount, count);
+	VM_BUG_ON_PAGE(count == 0 || old_val != 0, page);
 	if (page_ref_tracepoint_active(page_ref_unfreeze))
 		__page_ref_unfreeze(page, count);
 }
-- 
2.34.1.307.g9b7440fafd-goog



* [PATCH v2 9/9] mm: use atomic_cmpxchg_acquire in page_ref_freeze().
From: Pasha Tatashin @ 2021-12-21 15:01 UTC (permalink / raw)
  To: pasha.tatashin, linux-kernel, linux-mm, linux-m68k,
	anshuman.khandual, willy, akpm, william.kucharski, mike.kravetz,
	vbabka, geert, schmitzmic, rostedt, mingo, hannes, guro,
	songmuchun, weixugc, gthelen, rientjes, pjt

page_ref_freeze() and page_ref_unfreeze() are designed to be used as a
pair. They protect critical sections where struct page can be modified.
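
An illustrative usage sketch (not taken from this patch):

	if (page_ref_freeze(page, 1)) {     /* acquire: enter the section */
		/* struct page may be modified exclusively here */
		page_ref_unfreeze(page, 1); /* release: leave the section */
	}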

page_ref_unfreeze() uses a _release() atomic operation, but
page_ref_freeze() does not, as it is assumed that cmpxchg provides a
full barrier.

Instead, use the matching atomic_cmpxchg_acquire() to ensure that the
memory model is explicitly followed.

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 include/linux/page_ref.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 9efabeff4e06..45be731d8919 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -308,7 +308,8 @@ static inline bool folio_try_get_rcu(struct folio *folio)
 
 static inline int page_ref_freeze(struct page *page, int count)
 {
-	int ret = likely(atomic_cmpxchg(&page->_refcount, count, 0) == count);
+	int old_val = atomic_cmpxchg_acquire(&page->_refcount, count, 0);
+	int ret = likely(old_val == count);
 
 	if (page_ref_tracepoint_active(page_ref_freeze))
 		__page_ref_freeze(page, count, ret);
-- 
2.34.1.307.g9b7440fafd-goog



* Re: [PATCH v2 3/9] mm: remove set_page_count() from page_frag_alloc_align
From: kernel test robot @ 2022-01-02 12:18 UTC (permalink / raw)
  To: kbuild


CC: llvm(a)lists.linux.dev
CC: kbuild-all(a)lists.01.org
In-Reply-To: <20211221150140.988298-4-pasha.tatashin@soleen.com>
References: <20211221150140.988298-4-pasha.tatashin@soleen.com>
TO: Pasha Tatashin <pasha.tatashin@soleen.com>
TO: pasha.tatashin(a)soleen.com
TO: linux-kernel(a)vger.kernel.org
TO: linux-mm(a)kvack.org
TO: linux-m68k(a)lists.linux-m68k.org
TO: anshuman.khandual(a)arm.com
TO: willy(a)infradead.org
TO: akpm(a)linux-foundation.org
TO: william.kucharski(a)oracle.com
TO: mike.kravetz(a)oracle.com
TO: vbabka(a)suse.cz

Hi Pasha,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on hnaz-mm/master]
[also build test WARNING on rostedt-trace/for-next geert-m68k/for-next linux/master linus/master v5.16-rc7 next-20211224]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Pasha-Tatashin/Hardening-page-_refcount/20211221-230439
base:   https://github.com/hnaz/linux-mm master
:::::: branch date: 12 days ago
:::::: commit date: 12 days ago
config: x86_64-randconfig-c007-20211231 (https://download.01.org/0day-ci/archive/20220102/202201022054.ufnNk7Fu-lkp(a)intel.com/config)
compiler: clang version 14.0.0 (https://github.com/llvm/llvm-project 7cd109b92c72855937273a6c8ab19016fbe27d33)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/0day-ci/linux/commit/2add304c6e5eb6206507d871ccfd11349cc32586
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Pasha-Tatashin/Hardening-page-_refcount/20211221-230439
        git checkout 2add304c6e5eb6206507d871ccfd11349cc32586
        # save the config file to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=x86_64 clang-analyzer 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>


clang-analyzer warnings: (new ones prefixed by >>)
   mm/page_alloc.c:5287:3: note: Taking false branch
                   if (unlikely(!page)) {
                   ^
   mm/page_alloc.c:5296:7: note: Assuming 'page_list' is null
                   if (page_list)
                       ^~~~~~~~~
   mm/page_alloc.c:5296:3: note: Taking false branch
                   if (page_list)
                   ^
   mm/page_alloc.c:5299:29: note: Array access (from variable 'page_array') results in a null pointer dereference
                           page_array[nr_populated] = page;
                           ~~~~~~~~~~               ^
   mm/page_alloc.c:5320:29: warning: Array access (from variable 'page_array') results in a null pointer dereference [clang-analyzer-core.NullDereference]
                           page_array[nr_populated] = page;
                           ~~~~~~~~~~               ^
   mm/page_alloc.c:5205:9: note: Assuming 'page_array' is null
           while (page_array && nr_populated < nr_pages && page_array[nr_populated])
                  ^~~~~~~~~~
   mm/page_alloc.c:5205:20: note: Left side of '&&' is false
           while (page_array && nr_populated < nr_pages && page_array[nr_populated])
                             ^
   mm/page_alloc.c:5209:15: note: Assuming 'nr_pages' is > 0
           if (unlikely(nr_pages <= 0))
                        ^
   include/linux/compiler.h:78:42: note: expanded from macro 'unlikely'
   # define unlikely(x)    __builtin_expect(!!(x), 0)
                                               ^
   mm/page_alloc.c:5209:2: note: Taking false branch
           if (unlikely(nr_pages <= 0))
           ^
   mm/page_alloc.c:5213:15: note: 'page_array' is null
           if (unlikely(page_array && nr_pages - nr_populated == 0))
                        ^
   include/linux/compiler.h:78:42: note: expanded from macro 'unlikely'
   # define unlikely(x)    __builtin_expect(!!(x), 0)
                                               ^
   mm/page_alloc.c:5213:26: note: Left side of '&&' is false
           if (unlikely(page_array && nr_pages - nr_populated == 0))
                                   ^
   mm/page_alloc.c:5213:2: note: Taking false branch
           if (unlikely(page_array && nr_pages - nr_populated == 0))
           ^
   mm/page_alloc.c:5217:6: note: Calling 'memcg_kmem_enabled'
           if (memcg_kmem_enabled() && (gfp & __GFP_ACCOUNT))
               ^~~~~~~~~~~~~~~~~~~~
   include/linux/memcontrol.h:1714:9: note: Left side of '&&' is false
           return static_branch_likely(&memcg_kmem_enabled_key);
                  ^
   include/linux/jump_label.h:507:49: note: expanded from macro 'static_branch_likely'
   #define static_branch_likely(x)         likely_notrace(static_key_enabled(&(x)->key))
                                                          ^
   include/linux/jump_label.h:416:67: note: expanded from macro 'static_key_enabled'
           if (!__builtin_types_compatible_p(typeof(*x), struct static_key) &&     \
                                                                            ^
   include/linux/memcontrol.h:1714:9: note: Assuming the condition is true
           return static_branch_likely(&memcg_kmem_enabled_key);
                  ^
   include/linux/jump_label.h:507:49: note: expanded from macro 'static_branch_likely'
   #define static_branch_likely(x)         likely_notrace(static_key_enabled(&(x)->key))
                                           ~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   include/linux/jump_label.h:420:2: note: expanded from macro 'static_key_enabled'
           static_key_count((struct static_key *)x) > 0;                           \
           ^
   include/linux/compiler.h:79:35: note: expanded from macro 'likely_notrace'
   # define likely_notrace(x)      likely(x)
                                   ~~~~~~~^~
   include/linux/compiler.h:77:40: note: expanded from macro 'likely'
   # define likely(x)      __builtin_expect(!!(x), 1)
                                               ^
   include/linux/memcontrol.h:1714:2: note: Returning the value 1, which participates in a condition later
           return static_branch_likely(&memcg_kmem_enabled_key);
           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   mm/page_alloc.c:5217:6: note: Returning from 'memcg_kmem_enabled'
           if (memcg_kmem_enabled() && (gfp & __GFP_ACCOUNT))
               ^~~~~~~~~~~~~~~~~~~~
   mm/page_alloc.c:5217:6: note: Left side of '&&' is true
   mm/page_alloc.c:5217:31: note: Assuming the condition is true
           if (memcg_kmem_enabled() && (gfp & __GFP_ACCOUNT))
                                        ^~~~~~~~~~~~~~~~~~~
   mm/page_alloc.c:5217:2: note: Taking true branch
           if (memcg_kmem_enabled() && (gfp & __GFP_ACCOUNT))
           ^
   mm/page_alloc.c:5218:3: note: Control jumps to line 5315
                   goto failed;
                   ^
   mm/page_alloc.c:5316:6: note: Assuming 'page' is non-null
           if (page) {
               ^~~~
   mm/page_alloc.c:5316:2: note: Taking true branch
           if (page) {
           ^
   mm/page_alloc.c:5317:7: note: Assuming 'page_list' is null
                   if (page_list)
                       ^~~~~~~~~
   mm/page_alloc.c:5317:3: note: Taking false branch
                   if (page_list)
                   ^
   mm/page_alloc.c:5320:29: note: Array access (from variable 'page_array') results in a null pointer dereference
                           page_array[nr_populated] = page;
                           ~~~~~~~~~~               ^
>> mm/page_alloc.c:5559:3: warning: Value stored to 'refcnt' is never read [clang-analyzer-deadcode.DeadStores]
                   refcnt = page_ref_add_return(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
                   ^        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   mm/page_alloc.c:5559:3: note: Value stored to 'refcnt' is never read
                   refcnt = page_ref_add_return(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
                   ^        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   mm/page_alloc.c:8380:3: warning: Division by zero [clang-analyzer-core.DivideZero]
                   do_div(tmp, lowmem_pages);
                   ^
   include/asm-generic/div64.h:48:26: note: expanded from macro 'do_div'
           __rem = ((uint64_t)(n)) % __base;                       \
                                   ^
   mm/page_alloc.c:8533:6: note: Assuming 'rc' is 0
           if (rc)
               ^~
   mm/page_alloc.c:8533:2: note: Taking false branch
           if (rc)
           ^
   mm/page_alloc.c:8536:6: note: Assuming 'write' is not equal to 0
           if (write)
               ^~~~~
   mm/page_alloc.c:8536:2: note: Taking true branch
           if (write)
           ^
   mm/page_alloc.c:8537:3: note: Calling 'setup_per_zone_wmarks'
                   setup_per_zone_wmarks();
                   ^~~~~~~~~~~~~~~~~~~~~~~
   mm/page_alloc.c:8437:2: note: Calling '__setup_per_zone_wmarks'
           __setup_per_zone_wmarks();
           ^~~~~~~~~~~~~~~~~~~~~~~~~
   mm/page_alloc.c:8365:2: note: 'lowmem_pages' initialized to 0
           unsigned long lowmem_pages = 0;
           ^~~~~~~~~~~~~~~~~~~~~~~~~~
   mm/page_alloc.c:8370:2: note: Loop condition is false. Execution continues on line 8375
           for_each_zone(zone) {
           ^
   include/linux/mmzone.h:1122:2: note: expanded from macro 'for_each_zone'
           for (zone = (first_online_pgdat())->node_zones; \
           ^
   mm/page_alloc.c:8375:2: note: Loop condition is true.  Entering loop body
           for_each_zone(zone) {
           ^
   include/linux/mmzone.h:1122:2: note: expanded from macro 'for_each_zone'
           for (zone = (first_online_pgdat())->node_zones; \
           ^
   mm/page_alloc.c:8378:3: note: Loop condition is false.  Exiting loop
                   spin_lock_irqsave(&zone->lock, flags);
                   ^
   include/linux/spinlock.h:397:2: note: expanded from macro 'spin_lock_irqsave'
           raw_spin_lock_irqsave(spinlock_check(lock), flags);     \
           ^
   include/linux/spinlock.h:253:2: note: expanded from macro 'raw_spin_lock_irqsave'
           do {                                            \
           ^
   mm/page_alloc.c:8378:3: note: Loop condition is false.  Exiting loop
                   spin_lock_irqsave(&zone->lock, flags);
                   ^
   include/linux/spinlock.h:395:43: note: expanded from macro 'spin_lock_irqsave'
   #define spin_lock_irqsave(lock, flags)                          \
                                                                   ^
   mm/page_alloc.c:8380:3: note: '__base' initialized to 0
                   do_div(tmp, lowmem_pages);
                   ^
   include/asm-generic/div64.h:46:2: note: expanded from macro 'do_div'
           uint32_t __base = (base);                               \
           ^~~~~~~~~~~~~~~
   mm/page_alloc.c:8380:3: note: Division by zero
                   do_div(tmp, lowmem_pages);
                   ^
   include/asm-generic/div64.h:48:26: note: expanded from macro 'do_div'
           __rem = ((uint64_t)(n)) % __base;                       \
                   ~~~~~~~~~~~~~~~~^~~~~~~~
   Suppressed 12 warnings (11 in non-user code, 1 with check filters).
   Use -header-filter=.* to display errors from all non-system headers. Use -system-headers to display errors from system headers as well.
   15 warnings generated.
   drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c:1489:2: warning: Value stored to 'r' is never read [clang-analyzer-deadcode.DeadStores]
           r = amdgpu_atomfirmware_get_vram_info(adev,
           ^   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c:1489:2: note: Value stored to 'r' is never read
           r = amdgpu_atomfirmware_get_vram_info(adev,
           ^   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   Suppressed 14 warnings (14 in non-user code).
   Use -header-filter=.* to display errors from all non-system headers. Use -system-headers to display errors from system headers as well.
   14 warnings generated.
   Suppressed 14 warnings (14 in non-user code).
   Use -header-filter=.* to display errors from all non-system headers. Use -system-headers to display errors from system headers as well.
   14 warnings generated.
   Suppressed 14 warnings (14 in non-user code).
   Use -header-filter=.* to display errors from all non-system headers. Use -system-headers to display errors from system headers as well.
   14 warnings generated.
   Suppressed 14 warnings (14 in non-user code).
   Use -header-filter=.* to display errors from all non-system headers. Use -system-headers to display errors from system headers as well.
   14 warnings generated.
   Suppressed 14 warnings (14 in non-user code).
   Use -header-filter=.* to display errors from all non-system headers. Use -system-headers to display errors from system headers as well.
   15 warnings generated.
   drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c:852:3: warning: Value stored to 'r' is never read [clang-analyzer-deadcode.DeadStores]
                   r = amdgpu_atomfirmware_get_vram_info(adev,
                   ^   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c:852:3: note: Value stored to 'r' is never read
                   r = amdgpu_atomfirmware_get_vram_info(adev,

vim +/refcnt +5559 mm/page_alloc.c

44fdffd70504c1 Alexander Duyck 2016-12-14  5511  
b358e2122b9d7a Kevin Hao       2021-02-04  5512  void *page_frag_alloc_align(struct page_frag_cache *nc,
b358e2122b9d7a Kevin Hao       2021-02-04  5513  		      unsigned int fragsz, gfp_t gfp_mask,
b358e2122b9d7a Kevin Hao       2021-02-04  5514  		      unsigned int align_mask)
b63ae8ca096dfd Alexander Duyck 2015-05-06  5515  {
b63ae8ca096dfd Alexander Duyck 2015-05-06  5516  	unsigned int size = PAGE_SIZE;
b63ae8ca096dfd Alexander Duyck 2015-05-06  5517  	struct page *page;
b63ae8ca096dfd Alexander Duyck 2015-05-06  5518  	int offset;
2add304c6e5eb6 Pasha Tatashin  2021-12-21  5519  	int refcnt;
b63ae8ca096dfd Alexander Duyck 2015-05-06  5520  
b63ae8ca096dfd Alexander Duyck 2015-05-06  5521  	if (unlikely(!nc->va)) {
b63ae8ca096dfd Alexander Duyck 2015-05-06  5522  refill:
2976db8018532b Alexander Duyck 2017-01-10  5523  		page = __page_frag_cache_refill(nc, gfp_mask);
b63ae8ca096dfd Alexander Duyck 2015-05-06  5524  		if (!page)
b63ae8ca096dfd Alexander Duyck 2015-05-06  5525  			return NULL;
b63ae8ca096dfd Alexander Duyck 2015-05-06  5526  
b63ae8ca096dfd Alexander Duyck 2015-05-06  5527  #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
b63ae8ca096dfd Alexander Duyck 2015-05-06  5528  		/* if size can vary use size else just use PAGE_SIZE */
b63ae8ca096dfd Alexander Duyck 2015-05-06  5529  		size = nc->size;
b63ae8ca096dfd Alexander Duyck 2015-05-06  5530  #endif
b63ae8ca096dfd Alexander Duyck 2015-05-06  5531  		/* Even if we own the page, we do not use atomic_set().
b63ae8ca096dfd Alexander Duyck 2015-05-06  5532  		 * This would break get_page_unless_zero() users.
b63ae8ca096dfd Alexander Duyck 2015-05-06  5533  		 */
8644772637deb1 Alexander Duyck 2019-02-15  5534  		page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
b63ae8ca096dfd Alexander Duyck 2015-05-06  5535  
b63ae8ca096dfd Alexander Duyck 2015-05-06  5536  		/* reset page count bias and offset to start of new frag */
2f064f3485cd29 Michal Hocko    2015-08-21  5537  		nc->pfmemalloc = page_is_pfmemalloc(page);
8644772637deb1 Alexander Duyck 2019-02-15  5538  		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
b63ae8ca096dfd Alexander Duyck 2015-05-06  5539  		nc->offset = size;
b63ae8ca096dfd Alexander Duyck 2015-05-06  5540  	}
b63ae8ca096dfd Alexander Duyck 2015-05-06  5541  
b63ae8ca096dfd Alexander Duyck 2015-05-06  5542  	offset = nc->offset - fragsz;
b63ae8ca096dfd Alexander Duyck 2015-05-06  5543  	if (unlikely(offset < 0)) {
b63ae8ca096dfd Alexander Duyck 2015-05-06  5544  		page = virt_to_page(nc->va);
b63ae8ca096dfd Alexander Duyck 2015-05-06  5545  
fe896d1878949e Joonsoo Kim     2016-03-17  5546  		if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
b63ae8ca096dfd Alexander Duyck 2015-05-06  5547  			goto refill;
b63ae8ca096dfd Alexander Duyck 2015-05-06  5548  
d8c19014bba8f5 Dongli Zhang    2020-11-15  5549  		if (unlikely(nc->pfmemalloc)) {
d8c19014bba8f5 Dongli Zhang    2020-11-15  5550  			free_the_page(page, compound_order(page));
d8c19014bba8f5 Dongli Zhang    2020-11-15  5551  			goto refill;
d8c19014bba8f5 Dongli Zhang    2020-11-15  5552  		}
d8c19014bba8f5 Dongli Zhang    2020-11-15  5553  
b63ae8ca096dfd Alexander Duyck 2015-05-06  5554  #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
b63ae8ca096dfd Alexander Duyck 2015-05-06  5555  		/* if size can vary use size else just use PAGE_SIZE */
b63ae8ca096dfd Alexander Duyck 2015-05-06  5556  		size = nc->size;
b63ae8ca096dfd Alexander Duyck 2015-05-06  5557  #endif
2add304c6e5eb6 Pasha Tatashin  2021-12-21  5558  		/* page count is 0, set it to PAGE_FRAG_CACHE_MAX_SIZE + 1 */
2add304c6e5eb6 Pasha Tatashin  2021-12-21 @5559  		refcnt = page_ref_add_return(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
2add304c6e5eb6 Pasha Tatashin  2021-12-21  5560  		VM_BUG_ON_PAGE(refcnt != PAGE_FRAG_CACHE_MAX_SIZE + 1, page);
b63ae8ca096dfd Alexander Duyck 2015-05-06  5561  
b63ae8ca096dfd Alexander Duyck 2015-05-06  5562  		/* reset page count bias and offset to start of new frag */
8644772637deb1 Alexander Duyck 2019-02-15  5563  		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
b63ae8ca096dfd Alexander Duyck 2015-05-06  5564  		offset = size - fragsz;
b63ae8ca096dfd Alexander Duyck 2015-05-06  5565  	}
b63ae8ca096dfd Alexander Duyck 2015-05-06  5566  
b63ae8ca096dfd Alexander Duyck 2015-05-06  5567  	nc->pagecnt_bias--;
b358e2122b9d7a Kevin Hao       2021-02-04  5568  	offset &= align_mask;
b63ae8ca096dfd Alexander Duyck 2015-05-06  5569  	nc->offset = offset;
b63ae8ca096dfd Alexander Duyck 2015-05-06  5570  
b63ae8ca096dfd Alexander Duyck 2015-05-06  5571  	return nc->va + offset;
b63ae8ca096dfd Alexander Duyck 2015-05-06  5572  }
b358e2122b9d7a Kevin Hao       2021-02-04  5573  EXPORT_SYMBOL(page_frag_alloc_align);
b63ae8ca096dfd Alexander Duyck 2015-05-06  5574  

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all(a)lists.01.org


Thread overview: 11+ messages
2021-12-21 15:01 [PATCH v2 0/9] Hardening page _refcount Pasha Tatashin
2021-12-21 15:01 ` [PATCH v2 1/9] mm: add overflow and underflow checks for page->_refcount Pasha Tatashin
2021-12-21 15:01 ` [PATCH v2 2/9] mm: Avoid using set_page_count() in set_page_refcounted() Pasha Tatashin
2021-12-21 15:01 ` [PATCH v2 3/9] mm: remove set_page_count() from page_frag_alloc_align Pasha Tatashin
2021-12-21 15:01 ` [PATCH v2 4/9] mm: avoid using set_page_count() when pages are freed into allocator Pasha Tatashin
2021-12-21 15:01 ` [PATCH v2 5/9] mm: rename init_page_count() -> page_ref_init() Pasha Tatashin
2021-12-21 15:01 ` [PATCH v2 6/9] mm: remove set_page_count() Pasha Tatashin
2021-12-21 15:01 ` [PATCH v2 7/9] mm: simplify page_ref_* functions Pasha Tatashin
2021-12-21 15:01 ` [PATCH v2 8/9] mm: do not use atomic_set_release in page_ref_unfreeze() Pasha Tatashin
2021-12-21 15:01 ` [PATCH v2 9/9] mm: use atomic_cmpxchg_acquire in page_ref_freeze() Pasha Tatashin
2022-01-02 12:18 [PATCH v2 3/9] mm: remove set_page_count() from page_frag_alloc_align kernel test robot
