* [PATCH v2 0/9] Hardening page _refcount
@ 2021-12-21 15:01 Pasha Tatashin
2021-12-21 15:01 ` [PATCH v2 1/9] mm: add overflow and underflow checks for page->_refcount Pasha Tatashin
` (8 more replies)
0 siblings, 9 replies; 10+ messages in thread
From: Pasha Tatashin @ 2021-12-21 15:01 UTC (permalink / raw)
To: pasha.tatashin, linux-kernel, linux-mm, linux-m68k,
anshuman.khandual, willy, akpm, william.kucharski, mike.kravetz,
vbabka, geert, schmitzmic, rostedt, mingo, hannes, guro,
songmuchun, weixugc, gthelen, rientjes, pjt
From: Pasha Tatashin <tatashin@google.com>
Changelog:
v2:
- As suggested by Matthew Wilcox, removed the "mm: page_ref_add_unless()
does not trace 'u' argument" patch, as page_ref_add_unless() is going
away.
v1:
- sync with the latest linux-next
RFCv2:
- use the "fetch" variant instead of the "return" variant of atomic
instructions
- allow negative values, as we are using all 32 bits of _refcount.
It is hard to root-cause _refcount problems, because they usually
manifest only after the damage has occurred. Yet they can lead to
catastrophic failures such as memory corruption. A number of
refcount-related issues were discovered recently [1], [2], [3].
Improve debuggability by adding more checks that ensure that
page->_refcount never turns negative (i.e. no double free, no free
after freeze, etc).
- Check for overflow and underflow right from the functions that
modify _refcount
- Remove set_page_count(), so we do not unconditionally overwrite
_refcount with an unrestrained value
- Trace return values in all functions that modify _refcount
Applies against next-20211221.
Previous versions:
v1: https://lore.kernel.org/all/20211208203544.2297121-1-pasha.tatashin@soleen.com
RFCv2: https://lore.kernel.org/all/20211117012059.141450-1-pasha.tatashin@soleen.com
RFCv1: https://lore.kernel.org/all/20211026173822.502506-1-pasha.tatashin@soleen.com
[1] https://lore.kernel.org/all/xr9335nxwc5y.fsf@gthelen2.svl.corp.google.com
[2] https://lore.kernel.org/all/1582661774-30925-2-git-send-email-akaher@vmware.com
[3] https://lore.kernel.org/all/20210622021423.154662-3-mike.kravetz@oracle.com
Pasha Tatashin (9):
mm: add overflow and underflow checks for page->_refcount
mm: Avoid using set_page_count() in set_page_refcounted()
mm: remove set_page_count() from page_frag_alloc_align
mm: avoid using set_page_count() when pages are freed into allocator
mm: rename init_page_count() -> page_ref_init()
mm: remove set_page_count()
mm: simplify page_ref_* functions
mm: do not use atomic_set_release in page_ref_unfreeze()
mm: use atomic_cmpxchg_acquire in page_ref_freeze().
arch/m68k/mm/motorola.c | 2 +-
include/linux/mm.h | 2 +-
include/linux/page_ref.h | 149 +++++++++++++++-----------------
include/trace/events/page_ref.h | 58 ++++++++-----
mm/debug_page_ref.c | 22 +----
mm/internal.h | 6 +-
mm/page_alloc.c | 19 ++--
7 files changed, 132 insertions(+), 126 deletions(-)
--
2.34.1.307.g9b7440fafd-goog
^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH v2 1/9] mm: add overflow and underflow checks for page->_refcount
2021-12-21 15:01 [PATCH v2 0/9] Hardening page _refcount Pasha Tatashin
@ 2021-12-21 15:01 ` Pasha Tatashin
2021-12-21 15:01 ` [PATCH v2 2/9] mm: Avoid using set_page_count() in set_page_refcounted() Pasha Tatashin
` (7 subsequent siblings)
8 siblings, 0 replies; 10+ messages in thread
From: Pasha Tatashin @ 2021-12-21 15:01 UTC (permalink / raw)
To: pasha.tatashin, linux-kernel, linux-mm, linux-m68k,
anshuman.khandual, willy, akpm, william.kucharski, mike.kravetz,
vbabka, geert, schmitzmic, rostedt, mingo, hannes, guro,
songmuchun, weixugc, gthelen, rientjes, pjt
Problems with page->_refcount are hard to debug because, by the time
they are detected, the damage usually occurred long before. Yet an
invalid page refcount can be catastrophic and lead to memory
corruption.
Reduce the scope of when the _refcount problems manifest themselves by
adding checks for underflows and overflows into functions that modify
_refcount.
Use the atomic_fetch_* functions to get the old value of _refcount, and
use it to check for overflow/underflow.
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
include/linux/page_ref.h | 59 +++++++++++++++++++++++++++++-----------
1 file changed, 43 insertions(+), 16 deletions(-)
diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 2e677e6ad09f..fe4864f7f69c 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -117,7 +117,10 @@ static inline void init_page_count(struct page *page)
static inline void page_ref_add(struct page *page, int nr)
{
- atomic_add(nr, &page->_refcount);
+ int old_val = atomic_fetch_add(nr, &page->_refcount);
+ int new_val = old_val + nr;
+
+ VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
if (page_ref_tracepoint_active(page_ref_mod))
__page_ref_mod(page, nr);
}
@@ -129,7 +132,10 @@ static inline void folio_ref_add(struct folio *folio, int nr)
static inline void page_ref_sub(struct page *page, int nr)
{
- atomic_sub(nr, &page->_refcount);
+ int old_val = atomic_fetch_sub(nr, &page->_refcount);
+ int new_val = old_val - nr;
+
+ VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
if (page_ref_tracepoint_active(page_ref_mod))
__page_ref_mod(page, -nr);
}
@@ -141,11 +147,13 @@ static inline void folio_ref_sub(struct folio *folio, int nr)
static inline int page_ref_sub_return(struct page *page, int nr)
{
- int ret = atomic_sub_return(nr, &page->_refcount);
+ int old_val = atomic_fetch_sub(nr, &page->_refcount);
+ int new_val = old_val - nr;
+ VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
if (page_ref_tracepoint_active(page_ref_mod_and_return))
- __page_ref_mod_and_return(page, -nr, ret);
- return ret;
+ __page_ref_mod_and_return(page, -nr, new_val);
+ return new_val;
}
static inline int folio_ref_sub_return(struct folio *folio, int nr)
@@ -155,7 +163,10 @@ static inline int folio_ref_sub_return(struct folio *folio, int nr)
static inline void page_ref_inc(struct page *page)
{
- atomic_inc(&page->_refcount);
+ int old_val = atomic_fetch_inc(&page->_refcount);
+ int new_val = old_val + 1;
+
+ VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
if (page_ref_tracepoint_active(page_ref_mod))
__page_ref_mod(page, 1);
}
@@ -167,7 +178,10 @@ static inline void folio_ref_inc(struct folio *folio)
static inline void page_ref_dec(struct page *page)
{
- atomic_dec(&page->_refcount);
+ int old_val = atomic_fetch_dec(&page->_refcount);
+ int new_val = old_val - 1;
+
+ VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
if (page_ref_tracepoint_active(page_ref_mod))
__page_ref_mod(page, -1);
}
@@ -179,8 +193,11 @@ static inline void folio_ref_dec(struct folio *folio)
static inline int page_ref_sub_and_test(struct page *page, int nr)
{
- int ret = atomic_sub_and_test(nr, &page->_refcount);
+ int old_val = atomic_fetch_sub(nr, &page->_refcount);
+ int new_val = old_val - nr;
+ int ret = new_val == 0;
+ VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
if (page_ref_tracepoint_active(page_ref_mod_and_test))
__page_ref_mod_and_test(page, -nr, ret);
return ret;
@@ -193,11 +210,13 @@ static inline int folio_ref_sub_and_test(struct folio *folio, int nr)
static inline int page_ref_inc_return(struct page *page)
{
- int ret = atomic_inc_return(&page->_refcount);
+ int old_val = atomic_fetch_inc(&page->_refcount);
+ int new_val = old_val + 1;
+ VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
if (page_ref_tracepoint_active(page_ref_mod_and_return))
- __page_ref_mod_and_return(page, 1, ret);
- return ret;
+ __page_ref_mod_and_return(page, 1, new_val);
+ return new_val;
}
static inline int folio_ref_inc_return(struct folio *folio)
@@ -207,8 +226,11 @@ static inline int folio_ref_inc_return(struct folio *folio)
static inline int page_ref_dec_and_test(struct page *page)
{
- int ret = atomic_dec_and_test(&page->_refcount);
+ int old_val = atomic_fetch_dec(&page->_refcount);
+ int new_val = old_val - 1;
+ int ret = new_val == 0;
+ VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
if (page_ref_tracepoint_active(page_ref_mod_and_test))
__page_ref_mod_and_test(page, -1, ret);
return ret;
@@ -221,11 +243,13 @@ static inline int folio_ref_dec_and_test(struct folio *folio)
static inline int page_ref_dec_return(struct page *page)
{
- int ret = atomic_dec_return(&page->_refcount);
+ int old_val = atomic_fetch_dec(&page->_refcount);
+ int new_val = old_val - 1;
+ VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
if (page_ref_tracepoint_active(page_ref_mod_and_return))
- __page_ref_mod_and_return(page, -1, ret);
- return ret;
+ __page_ref_mod_and_return(page, -1, new_val);
+ return new_val;
}
static inline int folio_ref_dec_return(struct folio *folio)
@@ -235,8 +259,11 @@ static inline int folio_ref_dec_return(struct folio *folio)
static inline bool page_ref_add_unless(struct page *page, int nr, int u)
{
- bool ret = atomic_add_unless(&page->_refcount, nr, u);
+ int old_val = atomic_fetch_add_unless(&page->_refcount, nr, u);
+ int new_val = old_val + nr;
+ int ret = old_val != u;
+ VM_BUG_ON_PAGE(ret && (unsigned int)new_val < (unsigned int)old_val, page);
if (page_ref_tracepoint_active(page_ref_mod_unless))
__page_ref_mod_unless(page, nr, ret);
return ret;
--
2.34.1.307.g9b7440fafd-goog
* [PATCH v2 2/9] mm: Avoid using set_page_count() in set_page_refcounted()
2021-12-21 15:01 [PATCH v2 0/9] Hardening page _refcount Pasha Tatashin
2021-12-21 15:01 ` [PATCH v2 1/9] mm: add overflow and underflow checks for page->_refcount Pasha Tatashin
@ 2021-12-21 15:01 ` Pasha Tatashin
2021-12-21 15:01 ` [PATCH v2 3/9] mm: remove set_page_count() from page_frag_alloc_align Pasha Tatashin
` (6 subsequent siblings)
8 siblings, 0 replies; 10+ messages in thread
From: Pasha Tatashin @ 2021-12-21 15:01 UTC (permalink / raw)
To: pasha.tatashin, linux-kernel, linux-mm, linux-m68k,
anshuman.khandual, willy, akpm, william.kucharski, mike.kravetz,
vbabka, geert, schmitzmic, rostedt, mingo, hannes, guro,
songmuchun, weixugc, gthelen, rientjes, pjt
set_page_refcounted() converts a non-refcounted page that has
(page->_refcount == 0) into a refcounted page by setting _refcount to
1.
The current approach uses the following logic:
VM_BUG_ON_PAGE(page_ref_count(page), page);
set_page_count(page, 1);
However, if _refcount changes from 0 to 1 between the VM_BUG_ON_PAGE()
and set_page_count(), we can corrupt _refcount, which can cause other
problems such as memory corruption.
Instead, use a safer method: increment _refcount first and verify
that at increment time it was indeed 1.
refcnt = page_ref_inc_return(page);
VM_BUG_ON_PAGE(refcnt != 1, page);
Use page_ref_inc_return() to avoid unconditionally overwriting
the _refcount value with set_page_count(), and check the return value.
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
mm/internal.h | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/mm/internal.h b/mm/internal.h
index deb9bda18e59..4d45ef2ffea6 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -162,9 +162,11 @@ static inline bool page_evictable(struct page *page)
*/
static inline void set_page_refcounted(struct page *page)
{
+ int refcnt;
+
VM_BUG_ON_PAGE(PageTail(page), page);
- VM_BUG_ON_PAGE(page_ref_count(page), page);
- set_page_count(page, 1);
+ refcnt = page_ref_inc_return(page);
+ VM_BUG_ON_PAGE(refcnt != 1, page);
}
extern unsigned long highest_memmap_pfn;
--
2.34.1.307.g9b7440fafd-goog
* [PATCH v2 3/9] mm: remove set_page_count() from page_frag_alloc_align
2021-12-21 15:01 [PATCH v2 0/9] Hardening page _refcount Pasha Tatashin
2021-12-21 15:01 ` [PATCH v2 1/9] mm: add overflow and underflow checks for page->_refcount Pasha Tatashin
2021-12-21 15:01 ` [PATCH v2 2/9] mm: Avoid using set_page_count() in set_page_refcounted() Pasha Tatashin
@ 2021-12-21 15:01 ` Pasha Tatashin
2021-12-21 15:01 ` [PATCH v2 4/9] mm: avoid using set_page_count() when pages are freed into allocator Pasha Tatashin
` (5 subsequent siblings)
8 siblings, 0 replies; 10+ messages in thread
From: Pasha Tatashin @ 2021-12-21 15:01 UTC (permalink / raw)
To: pasha.tatashin, linux-kernel, linux-mm, linux-m68k,
anshuman.khandual, willy, akpm, william.kucharski, mike.kravetz,
vbabka, geert, schmitzmic, rostedt, mingo, hannes, guro,
songmuchun, weixugc, gthelen, rientjes, pjt
set_page_count() unconditionally resets the value of _refcount, which is
dangerous because it is not programmatically verified. Instead we rely on
comments like: "OK, page count is 0, we can safely set it".
Add a new refcount function, page_ref_add_return(), that returns the new
refcount value after adding to it. Use the return value to verify that
_refcount was indeed the expected one.
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
include/linux/page_ref.h | 11 +++++++++++
mm/page_alloc.c | 6 ++++--
2 files changed, 15 insertions(+), 2 deletions(-)
diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index fe4864f7f69c..03e21ce2f1bd 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -115,6 +115,17 @@ static inline void init_page_count(struct page *page)
set_page_count(page, 1);
}
+static inline int page_ref_add_return(struct page *page, int nr)
+{
+ int old_val = atomic_fetch_add(nr, &page->_refcount);
+ int new_val = old_val + nr;
+
+ VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
+ if (page_ref_tracepoint_active(page_ref_mod_and_return))
+ __page_ref_mod_and_return(page, nr, new_val);
+ return new_val;
+}
+
static inline void page_ref_add(struct page *page, int nr)
{
int old_val = atomic_fetch_add(nr, &page->_refcount);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index edfd6c81af82..b5554767b9de 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5523,6 +5523,7 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
unsigned int size = PAGE_SIZE;
struct page *page;
int offset;
+ int refcnt;
if (unlikely(!nc->va)) {
refill:
@@ -5561,8 +5562,9 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
/* if size can vary use size else just use PAGE_SIZE */
size = nc->size;
#endif
- /* OK, page count is 0, we can safely set it */
- set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
+ /* page count is 0, set it to PAGE_FRAG_CACHE_MAX_SIZE + 1 */
+ refcnt = page_ref_add_return(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
+ VM_BUG_ON_PAGE(refcnt != PAGE_FRAG_CACHE_MAX_SIZE + 1, page);
/* reset page count bias and offset to start of new frag */
nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
--
2.34.1.307.g9b7440fafd-goog
* [PATCH v2 4/9] mm: avoid using set_page_count() when pages are freed into allocator
2021-12-21 15:01 [PATCH v2 0/9] Hardening page _refcount Pasha Tatashin
` (2 preceding siblings ...)
2021-12-21 15:01 ` [PATCH v2 3/9] mm: remove set_page_count() from page_frag_alloc_align Pasha Tatashin
@ 2021-12-21 15:01 ` Pasha Tatashin
2021-12-21 15:01 ` [PATCH v2 5/9] mm: rename init_page_count() -> page_ref_init() Pasha Tatashin
` (4 subsequent siblings)
8 siblings, 0 replies; 10+ messages in thread
From: Pasha Tatashin @ 2021-12-21 15:01 UTC (permalink / raw)
To: pasha.tatashin, linux-kernel, linux-mm, linux-m68k,
anshuman.khandual, willy, akpm, william.kucharski, mike.kravetz,
vbabka, geert, schmitzmic, rostedt, mingo, hannes, guro,
songmuchun, weixugc, gthelen, rientjes, pjt
When struct pages are first initialized, the page->_refcount field is
set to 1. However, later, when pages are freed into the allocator, we
set _refcount to 0 via set_page_count(). Unconditionally resetting
_refcount is dangerous.
Instead use page_ref_dec_return(), and verify that the _refcount is
what is expected.
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
mm/page_alloc.c | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b5554767b9de..13d989d62012 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1667,6 +1667,7 @@ void __free_pages_core(struct page *page, unsigned int order)
unsigned int nr_pages = 1 << order;
struct page *p = page;
unsigned int loop;
+ int refcnt;
/*
* When initializing the memmap, __init_single_page() sets the refcount
@@ -1677,10 +1678,12 @@ void __free_pages_core(struct page *page, unsigned int order)
for (loop = 0; loop < (nr_pages - 1); loop++, p++) {
prefetchw(p + 1);
__ClearPageReserved(p);
- set_page_count(p, 0);
+ refcnt = page_ref_dec_return(p);
+ VM_BUG_ON_PAGE(refcnt, p);
}
__ClearPageReserved(p);
- set_page_count(p, 0);
+ refcnt = page_ref_dec_return(p);
+ VM_BUG_ON_PAGE(refcnt, p);
atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
@@ -2252,10 +2255,12 @@ void __init init_cma_reserved_pageblock(struct page *page)
{
unsigned i = pageblock_nr_pages;
struct page *p = page;
+ int refcnt;
do {
__ClearPageReserved(p);
- set_page_count(p, 0);
+ refcnt = page_ref_dec_return(p);
+ VM_BUG_ON_PAGE(refcnt, p);
} while (++p, --i);
set_pageblock_migratetype(page, MIGRATE_CMA);
--
2.34.1.307.g9b7440fafd-goog
* [PATCH v2 5/9] mm: rename init_page_count() -> page_ref_init()
2021-12-21 15:01 [PATCH v2 0/9] Hardening page _refcount Pasha Tatashin
` (3 preceding siblings ...)
2021-12-21 15:01 ` [PATCH v2 4/9] mm: avoid using set_page_count() when pages are freed into allocator Pasha Tatashin
@ 2021-12-21 15:01 ` Pasha Tatashin
2021-12-21 15:01 ` [PATCH v2 6/9] mm: remove set_page_count() Pasha Tatashin
` (3 subsequent siblings)
8 siblings, 0 replies; 10+ messages in thread
From: Pasha Tatashin @ 2021-12-21 15:01 UTC (permalink / raw)
To: pasha.tatashin, linux-kernel, linux-mm, linux-m68k,
anshuman.khandual, willy, akpm, william.kucharski, mike.kravetz,
vbabka, geert, schmitzmic, rostedt, mingo, hannes, guro,
songmuchun, weixugc, gthelen, rientjes, pjt
Now that set_page_count() is no longer called from outside and is about
to be removed, init_page_count() is the only function left that
unconditionally sets _refcount, and it is restricted to setting it
only to 1.
Align init_page_count() with the other page_ref_* functions by
renaming it.
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
---
arch/m68k/mm/motorola.c | 2 +-
include/linux/mm.h | 2 +-
include/linux/page_ref.h | 10 +++++++---
mm/page_alloc.c | 2 +-
4 files changed, 10 insertions(+), 6 deletions(-)
diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c
index ecbe948f4c1a..dd3b77d03d5c 100644
--- a/arch/m68k/mm/motorola.c
+++ b/arch/m68k/mm/motorola.c
@@ -133,7 +133,7 @@ void __init init_pointer_table(void *table, int type)
/* unreserve the page so it's possible to free that page */
__ClearPageReserved(PD_PAGE(dp));
- init_page_count(PD_PAGE(dp));
+ page_ref_init(PD_PAGE(dp));
return;
}
diff --git a/include/linux/mm.h b/include/linux/mm.h
index d211a06784d5..fae3b6ef66a5 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2451,7 +2451,7 @@ extern void reserve_bootmem_region(phys_addr_t start, phys_addr_t end);
static inline void free_reserved_page(struct page *page)
{
ClearPageReserved(page);
- init_page_count(page);
+ page_ref_init(page);
__free_page(page);
adjust_managed_page_count(page, 1);
}
diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 03e21ce2f1bd..1af12a0d7ba1 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -107,10 +107,14 @@ static inline void folio_set_count(struct folio *folio, int v)
}
/*
- * Setup the page count before being freed into the page allocator for
- * the first time (boot or memory hotplug)
+ * Setup the page refcount to one before being freed into the page allocator.
+ * The memory might not be initialized and therefore there cannot be any
+ * assumptions about the current value of page->_refcount. This call should be
+ * done during boot when memory is being initialized, during memory hotplug
+ * when new memory is added, or when previously reserved memory is unreserved,
+ * since this is the first time the kernel takes control of the given memory.
*/
-static inline void init_page_count(struct page *page)
+static inline void page_ref_init(struct page *page)
{
set_page_count(page, 1);
}
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 13d989d62012..000c057a2d24 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1569,7 +1569,7 @@ static void __meminit __init_single_page(struct page *page, unsigned long pfn,
{
mm_zero_struct_page(page);
set_page_links(page, zone, nid, pfn);
- init_page_count(page);
+ page_ref_init(page);
page_mapcount_reset(page);
page_cpupid_reset_last(page);
page_kasan_tag_reset(page);
--
2.34.1.307.g9b7440fafd-goog
* [PATCH v2 6/9] mm: remove set_page_count()
2021-12-21 15:01 [PATCH v2 0/9] Hardening page _refcount Pasha Tatashin
` (4 preceding siblings ...)
2021-12-21 15:01 ` [PATCH v2 5/9] mm: rename init_page_count() -> page_ref_init() Pasha Tatashin
@ 2021-12-21 15:01 ` Pasha Tatashin
2021-12-21 15:01 ` [PATCH v2 7/9] mm: simplify page_ref_* functions Pasha Tatashin
` (2 subsequent siblings)
8 siblings, 0 replies; 10+ messages in thread
From: Pasha Tatashin @ 2021-12-21 15:01 UTC (permalink / raw)
To: pasha.tatashin, linux-kernel, linux-mm, linux-m68k,
anshuman.khandual, willy, akpm, william.kucharski, mike.kravetz,
vbabka, geert, schmitzmic, rostedt, mingo, hannes, guro,
songmuchun, weixugc, gthelen, rientjes, pjt
set_page_count() is dangerous because it resets _refcount to an
arbitrary value. Instead, we now initialize _refcount to 1 only once,
and the rest of the time we use add/dec/cmpxchg, giving a continuous
record of the counter.
Remove set_page_count() and add new tracing hooks to page_ref_init().
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
include/linux/page_ref.h | 27 ++++++++-----------
include/trace/events/page_ref.h | 46 ++++++++++++++++++++++++++++-----
mm/debug_page_ref.c | 8 +++---
3 files changed, 54 insertions(+), 27 deletions(-)
diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 1af12a0d7ba1..d7316881626c 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -7,7 +7,7 @@
#include <linux/page-flags.h>
#include <linux/tracepoint-defs.h>
-DECLARE_TRACEPOINT(page_ref_set);
+DECLARE_TRACEPOINT(page_ref_init);
DECLARE_TRACEPOINT(page_ref_mod);
DECLARE_TRACEPOINT(page_ref_mod_and_test);
DECLARE_TRACEPOINT(page_ref_mod_and_return);
@@ -26,7 +26,7 @@ DECLARE_TRACEPOINT(page_ref_unfreeze);
*/
#define page_ref_tracepoint_active(t) tracepoint_enabled(t)
-extern void __page_ref_set(struct page *page, int v);
+extern void __page_ref_init(struct page *page);
extern void __page_ref_mod(struct page *page, int v);
extern void __page_ref_mod_and_test(struct page *page, int v, int ret);
extern void __page_ref_mod_and_return(struct page *page, int v, int ret);
@@ -38,7 +38,7 @@ extern void __page_ref_unfreeze(struct page *page, int v);
#define page_ref_tracepoint_active(t) false
-static inline void __page_ref_set(struct page *page, int v)
+static inline void __page_ref_init(struct page *page)
{
}
static inline void __page_ref_mod(struct page *page, int v)
@@ -94,18 +94,6 @@ static inline int page_count(const struct page *page)
return folio_ref_count(page_folio(page));
}
-static inline void set_page_count(struct page *page, int v)
-{
- atomic_set(&page->_refcount, v);
- if (page_ref_tracepoint_active(page_ref_set))
- __page_ref_set(page, v);
-}
-
-static inline void folio_set_count(struct folio *folio, int v)
-{
- set_page_count(&folio->page, v);
-}
-
/*
* Setup the page refcount to one before being freed into the page allocator.
* The memory might not be initialized and therefore there cannot be any
@@ -116,7 +104,14 @@ static inline void folio_set_count(struct folio *folio, int v)
*/
static inline void page_ref_init(struct page *page)
{
- set_page_count(page, 1);
+ atomic_set(&page->_refcount, 1);
+ if (page_ref_tracepoint_active(page_ref_init))
+ __page_ref_init(page);
+}
+
+static inline void folio_ref_init(struct folio *folio)
+{
+ page_ref_init(&folio->page);
}
static inline int page_ref_add_return(struct page *page, int nr)
diff --git a/include/trace/events/page_ref.h b/include/trace/events/page_ref.h
index 8a99c1cd417b..87551bb1df9e 100644
--- a/include/trace/events/page_ref.h
+++ b/include/trace/events/page_ref.h
@@ -10,6 +10,45 @@
#include <linux/tracepoint.h>
#include <trace/events/mmflags.h>
+DECLARE_EVENT_CLASS(page_ref_init_template,
+
+ TP_PROTO(struct page *page),
+
+ TP_ARGS(page),
+
+ TP_STRUCT__entry(
+ __field(unsigned long, pfn)
+ __field(unsigned long, flags)
+ __field(int, count)
+ __field(int, mapcount)
+ __field(void *, mapping)
+ __field(int, mt)
+ __field(int, val)
+ ),
+
+ TP_fast_assign(
+ __entry->pfn = page_to_pfn(page);
+ __entry->flags = page->flags;
+ __entry->count = page_ref_count(page);
+ __entry->mapcount = page_mapcount(page);
+ __entry->mapping = page->mapping;
+ __entry->mt = get_pageblock_migratetype(page);
+ ),
+
+ TP_printk("pfn=0x%lx flags=%s count=%d mapcount=%d mapping=%p mt=%d",
+ __entry->pfn,
+ show_page_flags(__entry->flags & PAGEFLAGS_MASK),
+ __entry->count,
+ __entry->mapcount, __entry->mapping, __entry->mt)
+);
+
+DEFINE_EVENT(page_ref_init_template, page_ref_init,
+
+ TP_PROTO(struct page *page),
+
+ TP_ARGS(page)
+);
+
DECLARE_EVENT_CLASS(page_ref_mod_template,
TP_PROTO(struct page *page, int v),
@@ -44,13 +83,6 @@ DECLARE_EVENT_CLASS(page_ref_mod_template,
__entry->val)
);
-DEFINE_EVENT(page_ref_mod_template, page_ref_set,
-
- TP_PROTO(struct page *page, int v),
-
- TP_ARGS(page, v)
-);
-
DEFINE_EVENT(page_ref_mod_template, page_ref_mod,
TP_PROTO(struct page *page, int v),
diff --git a/mm/debug_page_ref.c b/mm/debug_page_ref.c
index f3b2c9d3ece2..e32149734122 100644
--- a/mm/debug_page_ref.c
+++ b/mm/debug_page_ref.c
@@ -5,12 +5,12 @@
#define CREATE_TRACE_POINTS
#include <trace/events/page_ref.h>
-void __page_ref_set(struct page *page, int v)
+void __page_ref_init(struct page *page)
{
- trace_page_ref_set(page, v);
+ trace_page_ref_init(page);
}
-EXPORT_SYMBOL(__page_ref_set);
-EXPORT_TRACEPOINT_SYMBOL(page_ref_set);
+EXPORT_SYMBOL(__page_ref_init);
+EXPORT_TRACEPOINT_SYMBOL(page_ref_init);
void __page_ref_mod(struct page *page, int v)
{
--
2.34.1.307.g9b7440fafd-goog
* [PATCH v2 7/9] mm: simplify page_ref_* functions
2021-12-21 15:01 [PATCH v2 0/9] Hardening page _refcount Pasha Tatashin
` (5 preceding siblings ...)
2021-12-21 15:01 ` [PATCH v2 6/9] mm: remove set_page_count() Pasha Tatashin
@ 2021-12-21 15:01 ` Pasha Tatashin
2021-12-21 15:01 ` [PATCH v2 8/9] mm: do not use atomic_set_release in page_ref_unfreeze() Pasha Tatashin
2021-12-21 15:01 ` [PATCH v2 9/9] mm: use atomic_cmpxchg_acquire in page_ref_freeze() Pasha Tatashin
8 siblings, 0 replies; 10+ messages in thread
From: Pasha Tatashin @ 2021-12-21 15:01 UTC (permalink / raw)
To: pasha.tatashin, linux-kernel, linux-mm, linux-m68k,
anshuman.khandual, willy, akpm, william.kucharski, mike.kravetz,
vbabka, geert, schmitzmic, rostedt, mingo, hannes, guro,
songmuchun, weixugc, gthelen, rientjes, pjt
Now that we are using the atomic_fetch_* variants to add/sub/inc/dec
the page _refcount, it makes sense to combine the page_ref_* return and
non-return functions.
Also remove some extra tracepoints for the non-return variants. This
improves traceability by always recording the new _refcount value after
the modification has occurred.
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
include/linux/page_ref.h | 102 +++++++++-----------------------
include/trace/events/page_ref.h | 18 +-----
mm/debug_page_ref.c | 14 -----
3 files changed, 31 insertions(+), 103 deletions(-)
diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index d7316881626c..243fc60ae6c8 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -8,8 +8,6 @@
#include <linux/tracepoint-defs.h>
DECLARE_TRACEPOINT(page_ref_init);
-DECLARE_TRACEPOINT(page_ref_mod);
-DECLARE_TRACEPOINT(page_ref_mod_and_test);
DECLARE_TRACEPOINT(page_ref_mod_and_return);
DECLARE_TRACEPOINT(page_ref_mod_unless);
DECLARE_TRACEPOINT(page_ref_freeze);
@@ -27,8 +25,6 @@ DECLARE_TRACEPOINT(page_ref_unfreeze);
#define page_ref_tracepoint_active(t) tracepoint_enabled(t)
extern void __page_ref_init(struct page *page);
-extern void __page_ref_mod(struct page *page, int v);
-extern void __page_ref_mod_and_test(struct page *page, int v, int ret);
extern void __page_ref_mod_and_return(struct page *page, int v, int ret);
extern void __page_ref_mod_unless(struct page *page, int v, int u);
extern void __page_ref_freeze(struct page *page, int v, int ret);
@@ -41,12 +37,6 @@ extern void __page_ref_unfreeze(struct page *page, int v);
static inline void __page_ref_init(struct page *page)
{
}
-static inline void __page_ref_mod(struct page *page, int v)
-{
-}
-static inline void __page_ref_mod_and_test(struct page *page, int v, int ret)
-{
-}
static inline void __page_ref_mod_and_return(struct page *page, int v, int ret)
{
}
@@ -127,12 +117,7 @@ static inline int page_ref_add_return(struct page *page, int nr)
static inline void page_ref_add(struct page *page, int nr)
{
- int old_val = atomic_fetch_add(nr, &page->_refcount);
- int new_val = old_val + nr;
-
- VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
- if (page_ref_tracepoint_active(page_ref_mod))
- __page_ref_mod(page, nr);
+ page_ref_add_return(page, nr);
}
static inline void folio_ref_add(struct folio *folio, int nr)
@@ -140,30 +125,25 @@ static inline void folio_ref_add(struct folio *folio, int nr)
page_ref_add(&folio->page, nr);
}
-static inline void page_ref_sub(struct page *page, int nr)
+static inline int page_ref_sub_return(struct page *page, int nr)
{
int old_val = atomic_fetch_sub(nr, &page->_refcount);
int new_val = old_val - nr;
VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
- if (page_ref_tracepoint_active(page_ref_mod))
- __page_ref_mod(page, -nr);
+ if (page_ref_tracepoint_active(page_ref_mod_and_return))
+ __page_ref_mod_and_return(page, -nr, new_val);
+ return new_val;
}
-static inline void folio_ref_sub(struct folio *folio, int nr)
+static inline void page_ref_sub(struct page *page, int nr)
{
- page_ref_sub(&folio->page, nr);
+ page_ref_sub_return(page, nr);
}
-static inline int page_ref_sub_return(struct page *page, int nr)
+static inline void folio_ref_sub(struct folio *folio, int nr)
{
- int old_val = atomic_fetch_sub(nr, &page->_refcount);
- int new_val = old_val - nr;
-
- VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
- if (page_ref_tracepoint_active(page_ref_mod_and_return))
- __page_ref_mod_and_return(page, -nr, new_val);
- return new_val;
+ page_ref_sub(&folio->page, nr);
}
static inline int folio_ref_sub_return(struct folio *folio, int nr)
@@ -171,14 +151,20 @@ static inline int folio_ref_sub_return(struct folio *folio, int nr)
return page_ref_sub_return(&folio->page, nr);
}
-static inline void page_ref_inc(struct page *page)
+static inline int page_ref_inc_return(struct page *page)
{
int old_val = atomic_fetch_inc(&page->_refcount);
int new_val = old_val + 1;
VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
- if (page_ref_tracepoint_active(page_ref_mod))
- __page_ref_mod(page, 1);
+ if (page_ref_tracepoint_active(page_ref_mod_and_return))
+ __page_ref_mod_and_return(page, 1, new_val);
+ return new_val;
+}
+
+static inline void page_ref_inc(struct page *page)
+{
+ page_ref_inc_return(page);
}
static inline void folio_ref_inc(struct folio *folio)
@@ -186,14 +172,20 @@ static inline void folio_ref_inc(struct folio *folio)
page_ref_inc(&folio->page);
}
-static inline void page_ref_dec(struct page *page)
+static inline int page_ref_dec_return(struct page *page)
{
int old_val = atomic_fetch_dec(&page->_refcount);
int new_val = old_val - 1;
VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
- if (page_ref_tracepoint_active(page_ref_mod))
- __page_ref_mod(page, -1);
+ if (page_ref_tracepoint_active(page_ref_mod_and_return))
+ __page_ref_mod_and_return(page, -1, new_val);
+ return new_val;
+}
+
+static inline void page_ref_dec(struct page *page)
+{
+ page_ref_dec_return(page);
}
static inline void folio_ref_dec(struct folio *folio)
@@ -203,14 +195,7 @@ static inline void folio_ref_dec(struct folio *folio)
static inline int page_ref_sub_and_test(struct page *page, int nr)
{
- int old_val = atomic_fetch_sub(nr, &page->_refcount);
- int new_val = old_val - nr;
- int ret = new_val == 0;
-
- VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
- if (page_ref_tracepoint_active(page_ref_mod_and_test))
- __page_ref_mod_and_test(page, -nr, ret);
- return ret;
+ return page_ref_sub_return(page, nr) == 0;
}
static inline int folio_ref_sub_and_test(struct folio *folio, int nr)
@@ -218,17 +203,6 @@ static inline int folio_ref_sub_and_test(struct folio *folio, int nr)
return page_ref_sub_and_test(&folio->page, nr);
}
-static inline int page_ref_inc_return(struct page *page)
-{
- int old_val = atomic_fetch_inc(&page->_refcount);
- int new_val = old_val + 1;
-
- VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
- if (page_ref_tracepoint_active(page_ref_mod_and_return))
- __page_ref_mod_and_return(page, 1, new_val);
- return new_val;
-}
-
static inline int folio_ref_inc_return(struct folio *folio)
{
return page_ref_inc_return(&folio->page);
@@ -236,14 +210,7 @@ static inline int folio_ref_inc_return(struct folio *folio)
static inline int page_ref_dec_and_test(struct page *page)
{
- int old_val = atomic_fetch_dec(&page->_refcount);
- int new_val = old_val - 1;
- int ret = new_val == 0;
-
- VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
- if (page_ref_tracepoint_active(page_ref_mod_and_test))
- __page_ref_mod_and_test(page, -1, ret);
- return ret;
+ return page_ref_dec_return(page) == 0;
}
static inline int folio_ref_dec_and_test(struct folio *folio)
@@ -251,17 +218,6 @@ static inline int folio_ref_dec_and_test(struct folio *folio)
return page_ref_dec_and_test(&folio->page);
}
-static inline int page_ref_dec_return(struct page *page)
-{
- int old_val = atomic_fetch_dec(&page->_refcount);
- int new_val = old_val - 1;
-
- VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
- if (page_ref_tracepoint_active(page_ref_mod_and_return))
- __page_ref_mod_and_return(page, -1, new_val);
- return new_val;
-}
-
static inline int folio_ref_dec_return(struct folio *folio)
{
return page_ref_dec_return(&folio->page);
diff --git a/include/trace/events/page_ref.h b/include/trace/events/page_ref.h
index 87551bb1df9e..35cd795aa7c6 100644
--- a/include/trace/events/page_ref.h
+++ b/include/trace/events/page_ref.h
@@ -49,7 +49,7 @@ DEFINE_EVENT(page_ref_init_template, page_ref_init,
TP_ARGS(page)
);
-DECLARE_EVENT_CLASS(page_ref_mod_template,
+DECLARE_EVENT_CLASS(page_ref_unfreeze_template,
TP_PROTO(struct page *page, int v),
@@ -83,13 +83,6 @@ DECLARE_EVENT_CLASS(page_ref_mod_template,
__entry->val)
);
-DEFINE_EVENT(page_ref_mod_template, page_ref_mod,
-
- TP_PROTO(struct page *page, int v),
-
- TP_ARGS(page, v)
-);
-
DECLARE_EVENT_CLASS(page_ref_mod_and_test_template,
TP_PROTO(struct page *page, int v, int ret),
@@ -126,13 +119,6 @@ DECLARE_EVENT_CLASS(page_ref_mod_and_test_template,
__entry->val, __entry->ret)
);
-DEFINE_EVENT(page_ref_mod_and_test_template, page_ref_mod_and_test,
-
- TP_PROTO(struct page *page, int v, int ret),
-
- TP_ARGS(page, v, ret)
-);
-
DEFINE_EVENT(page_ref_mod_and_test_template, page_ref_mod_and_return,
TP_PROTO(struct page *page, int v, int ret),
@@ -154,7 +140,7 @@ DEFINE_EVENT(page_ref_mod_and_test_template, page_ref_freeze,
TP_ARGS(page, v, ret)
);
-DEFINE_EVENT(page_ref_mod_template, page_ref_unfreeze,
+DEFINE_EVENT(page_ref_unfreeze_template, page_ref_unfreeze,
TP_PROTO(struct page *page, int v),
diff --git a/mm/debug_page_ref.c b/mm/debug_page_ref.c
index e32149734122..1de9d93cca25 100644
--- a/mm/debug_page_ref.c
+++ b/mm/debug_page_ref.c
@@ -12,20 +12,6 @@ void __page_ref_init(struct page *page)
EXPORT_SYMBOL(__page_ref_init);
EXPORT_TRACEPOINT_SYMBOL(page_ref_init);
-void __page_ref_mod(struct page *page, int v)
-{
- trace_page_ref_mod(page, v);
-}
-EXPORT_SYMBOL(__page_ref_mod);
-EXPORT_TRACEPOINT_SYMBOL(page_ref_mod);
-
-void __page_ref_mod_and_test(struct page *page, int v, int ret)
-{
- trace_page_ref_mod_and_test(page, v, ret);
-}
-EXPORT_SYMBOL(__page_ref_mod_and_test);
-EXPORT_TRACEPOINT_SYMBOL(page_ref_mod_and_test);
-
void __page_ref_mod_and_return(struct page *page, int v, int ret)
{
trace_page_ref_mod_and_return(page, v, ret);
--
2.34.1.307.g9b7440fafd-goog

* [PATCH v2 8/9] mm: do not use atomic_set_release in page_ref_unfreeze()
2021-12-21 15:01 [PATCH v2 0/9] Hardening page _refcount Pasha Tatashin
` (6 preceding siblings ...)
2021-12-21 15:01 ` [PATCH v2 7/9] mm: simplify page_ref_* functions Pasha Tatashin
@ 2021-12-21 15:01 ` Pasha Tatashin
2021-12-21 15:01 ` [PATCH v2 9/9] mm: use atomic_cmpxchg_acquire in page_ref_freeze() Pasha Tatashin
8 siblings, 0 replies; 10+ messages in thread
From: Pasha Tatashin @ 2021-12-21 15:01 UTC (permalink / raw)
To: pasha.tatashin, linux-kernel, linux-mm, linux-m68k,
anshuman.khandual, willy, akpm, william.kucharski, mike.kravetz,
vbabka, geert, schmitzmic, rostedt, mingo, hannes, guro,
songmuchun, weixugc, gthelen, rientjes, pjt
In page_ref_unfreeze() we set the new _refcount value after verifying that
the old value was indeed 0.
VM_BUG_ON_PAGE(page_count(page) != 0, page);
< the _refcount may change here>
atomic_set_release(&page->_refcount, count);
To avoid the small gap where _refcount may change, verify the value of
_refcount at the time of the set operation: use atomic_xchg_release() and
check that the replaced value was 0.
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
include/linux/page_ref.h | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 243fc60ae6c8..9efabeff4e06 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -322,10 +322,9 @@ static inline int folio_ref_freeze(struct folio *folio, int count)
static inline void page_ref_unfreeze(struct page *page, int count)
{
- VM_BUG_ON_PAGE(page_count(page) != 0, page);
- VM_BUG_ON(count == 0);
+ int old_val = atomic_xchg_release(&page->_refcount, count);
- atomic_set_release(&page->_refcount, count);
+ VM_BUG_ON_PAGE(count == 0 || old_val != 0, page);
if (page_ref_tracepoint_active(page_ref_unfreeze))
__page_ref_unfreeze(page, count);
}
--
2.34.1.307.g9b7440fafd-goog
* [PATCH v2 9/9] mm: use atomic_cmpxchg_acquire in page_ref_freeze().
2021-12-21 15:01 [PATCH v2 0/9] Hardening page _refcount Pasha Tatashin
` (7 preceding siblings ...)
2021-12-21 15:01 ` [PATCH v2 8/9] mm: do not use atomic_set_release in page_ref_unfreeze() Pasha Tatashin
@ 2021-12-21 15:01 ` Pasha Tatashin
8 siblings, 0 replies; 10+ messages in thread
From: Pasha Tatashin @ 2021-12-21 15:01 UTC (permalink / raw)
To: pasha.tatashin, linux-kernel, linux-mm, linux-m68k,
anshuman.khandual, willy, akpm, william.kucharski, mike.kravetz,
vbabka, geert, schmitzmic, rostedt, mingo, hannes, guro,
songmuchun, weixugc, gthelen, rientjes, pjt
page_ref_freeze and page_ref_unfreeze are designed to be used as a pair.
They protect critical sections where struct page can be modified.
page_ref_unfreeze() uses a _release() atomic operation, but
page_ref_freeze() does not, as it is assumed that cmpxchg provides a full
barrier.
Instead, use the matching atomic_cmpxchg_acquire() so that the memory
ordering is explicitly stated.
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
include/linux/page_ref.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 9efabeff4e06..45be731d8919 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -308,7 +308,8 @@ static inline bool folio_try_get_rcu(struct folio *folio)
static inline int page_ref_freeze(struct page *page, int count)
{
- int ret = likely(atomic_cmpxchg(&page->_refcount, count, 0) == count);
+ int old_val = atomic_cmpxchg_acquire(&page->_refcount, count, 0);
+ int ret = likely(old_val == count);
if (page_ref_tracepoint_active(page_ref_freeze))
__page_ref_freeze(page, count, ret);
--
2.34.1.307.g9b7440fafd-goog
end of thread, other threads:[~2021-12-21 15:02 UTC | newest]
Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-12-21 15:01 [PATCH v2 0/9] Hardening page _refcount Pasha Tatashin
2021-12-21 15:01 ` [PATCH v2 1/9] mm: add overflow and underflow checks for page->_refcount Pasha Tatashin
2021-12-21 15:01 ` [PATCH v2 2/9] mm: Avoid using set_page_count() in set_page_recounted() Pasha Tatashin
2021-12-21 15:01 ` [PATCH v2 3/9] mm: remove set_page_count() from page_frag_alloc_align Pasha Tatashin
2021-12-21 15:01 ` [PATCH v2 4/9] mm: avoid using set_page_count() when pages are freed into allocator Pasha Tatashin
2021-12-21 15:01 ` [PATCH v2 5/9] mm: rename init_page_count() -> page_ref_init() Pasha Tatashin
2021-12-21 15:01 ` [PATCH v2 6/9] mm: remove set_page_count() Pasha Tatashin
2021-12-21 15:01 ` [PATCH v2 7/9] mm: simplify page_ref_* functions Pasha Tatashin
2021-12-21 15:01 ` [PATCH v2 8/9] mm: do not use atomic_set_release in page_ref_unfreeze() Pasha Tatashin
2021-12-21 15:01 ` [PATCH v2 9/9] mm: use atomic_cmpxchg_acquire in page_ref_freeze() Pasha Tatashin