* [PATCH v2 0/3] rearrange struct slab fields to allow larger rcu_head
@ 2022-11-07 17:05 Vlastimil Babka
  2022-11-07 17:05 ` [PATCH v2 1/3] mm/slub: perform free consistency checks before call_rcu Vlastimil Babka
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Vlastimil Babka @ 2022-11-07 17:05 UTC (permalink / raw)
  To: Christoph Lameter, David Rientjes, Joonsoo Kim, Pekka Enberg,
	Joel Fernandes
  Cc: Hyeonggon Yoo, Roman Gushchin, Matthew Wilcox, paulmck, rcu,
	linux-mm, linux-kernel, patches, Vlastimil Babka

Hi,

The previous version (RFC, no cover letter) is here:
https://lore.kernel.org/all/20220826090912.11292-1-vbabka@suse.cz/

Git branch is here:
https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab.git/log/?h=slab/for-6.2/fit_rcu_head
(also in linux-next since late last week)

The rationale for doing all this is in patch 3 - I hope there are still
plans for the rcu_head debugging, Joel?

The previous version was in linux-next, where it caused crash reports due
to false-positive __PageMovable() tests. There were several attempts to
deal with that, as explained in Patch 2, which is an updated version of
one of those attempts. It hasn't been formally posted and reviewed yet,
hence this posting.

Thanks,
Vlastimil

Vlastimil Babka (3):
  mm/slub: perform free consistency checks before call_rcu
  mm/migrate: make isolate_movable_page() skip slab pages
  mm/sl[au]b: rearrange struct slab fields to allow larger rcu_head

 mm/migrate.c | 15 ++++++++++++---
 mm/slab.c    |  6 +++++-
 mm/slab.h    | 54 +++++++++++++++++++++++++++++++---------------------
 mm/slub.c    | 26 ++++++++++++++-----------
 4 files changed, 64 insertions(+), 37 deletions(-)

-- 
2.38.0



* [PATCH v2 1/3] mm/slub: perform free consistency checks before call_rcu
  2022-11-07 17:05 [PATCH v2 0/3] rearrange struct slab fields to allow larger rcu_head Vlastimil Babka
@ 2022-11-07 17:05 ` Vlastimil Babka
  2022-11-07 17:05 ` [PATCH v2 2/3] mm/migrate: make isolate_movable_page() skip slab pages Vlastimil Babka
  2022-11-07 17:05 ` [PATCH v2 3/3] mm/sl[au]b: rearrange struct slab fields to allow larger rcu_head Vlastimil Babka
  2 siblings, 0 replies; 5+ messages in thread
From: Vlastimil Babka @ 2022-11-07 17:05 UTC (permalink / raw)
  To: Christoph Lameter, David Rientjes, Joonsoo Kim, Pekka Enberg,
	Joel Fernandes
  Cc: Hyeonggon Yoo, Roman Gushchin, Matthew Wilcox, paulmck, rcu,
	linux-mm, linux-kernel, patches, Vlastimil Babka

For SLAB_TYPESAFE_BY_RCU caches we use call_rcu() to perform empty slab
freeing. The rcu callback rcu_free_slab() calls __free_slab(), which
currently includes checking the slab consistency for caches with the
SLAB_CONSISTENCY_CHECKS flag. This check needs the slab->objects field
to be intact.

Because in the next patch we want to allow rcu_head in struct slab to
become larger in debug configurations, and thus to potentially overwrite
more fields through a union than slab_list does, we want to limit the
fields used in rcu_free_slab(). Thus move the consistency checks to
free_slab(), before the call_rcu(). This is safe even for
SLAB_TYPESAFE_BY_RCU caches, where accesses to the objects can still
occur after freeing them.

As a result, only the slab->slab_cache field has to be physically
separate from rcu_head for the freeing callback to work. We also save
some cycles in the rcu callback for caches with consistency checks
enabled.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/slub.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 157527d7101b..99ba865afc4a 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1999,14 +1999,6 @@ static void __free_slab(struct kmem_cache *s, struct slab *slab)
 	int order = folio_order(folio);
 	int pages = 1 << order;
 
-	if (kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS)) {
-		void *p;
-
-		slab_pad_check(s, slab);
-		for_each_object(p, s, slab_address(slab), slab->objects)
-			check_object(s, slab, p, SLUB_RED_INACTIVE);
-	}
-
 	__slab_clear_pfmemalloc(slab);
 	__folio_clear_slab(folio);
 	folio->mapping = NULL;
@@ -2025,9 +2017,17 @@ static void rcu_free_slab(struct rcu_head *h)
 
 static void free_slab(struct kmem_cache *s, struct slab *slab)
 {
-	if (unlikely(s->flags & SLAB_TYPESAFE_BY_RCU)) {
+	if (kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS)) {
+		void *p;
+
+		slab_pad_check(s, slab);
+		for_each_object(p, s, slab_address(slab), slab->objects)
+			check_object(s, slab, p, SLUB_RED_INACTIVE);
+	}
+
+	if (unlikely(s->flags & SLAB_TYPESAFE_BY_RCU))
 		call_rcu(&slab->rcu_head, rcu_free_slab);
-	} else
+	else
 		__free_slab(s, slab);
 }
 
-- 
2.38.0



* [PATCH v2 2/3] mm/migrate: make isolate_movable_page() skip slab pages
  2022-11-07 17:05 [PATCH v2 0/3] rearrange struct slab fields to allow larger rcu_head Vlastimil Babka
  2022-11-07 17:05 ` [PATCH v2 1/3] mm/slub: perform free consistency checks before call_rcu Vlastimil Babka
@ 2022-11-07 17:05 ` Vlastimil Babka
  2022-11-10 13:48   ` Hyeonggon Yoo
  2022-11-07 17:05 ` [PATCH v2 3/3] mm/sl[au]b: rearrange struct slab fields to allow larger rcu_head Vlastimil Babka
  2 siblings, 1 reply; 5+ messages in thread
From: Vlastimil Babka @ 2022-11-07 17:05 UTC (permalink / raw)
  To: Christoph Lameter, David Rientjes, Joonsoo Kim, Pekka Enberg,
	Joel Fernandes
  Cc: Hyeonggon Yoo, Roman Gushchin, Matthew Wilcox, paulmck, rcu,
	linux-mm, linux-kernel, patches, Vlastimil Babka,
	kernel test robot

In the next commit we want to rearrange struct slab fields to allow a larger
rcu_head. Afterwards, the page->mapping field will overlap with SLUB's "struct
list_head slab_list", where the value of the prev pointer can become
LIST_POISON2, which is 0x122 + POISON_POINTER_DELTA.  Unfortunately, with bit 1
set, that value can make __PageMovable() return a false positive and cause a
GPF, as reported by lkp [1].

To fix this, make isolate_movable_page() skip pages with the PageSlab flag set.
This is a bit tricky as we need to add memory barriers to SLAB and SLUB's page
allocation and freeing, and their counterparts to isolate_movable_page().

Based on my RFC from [2]. Added a comment update from Matthew's variant in [3]
and, as done there, moved the PageSlab checks to happen before trying to take
the page lock.

[1] https://lore.kernel.org/all/208c1757-5edd-fd42-67d4-1940cc43b50f@intel.com/
[2] https://lore.kernel.org/all/aec59f53-0e53-1736-5932-25407125d4d4@suse.cz/
[3] https://lore.kernel.org/all/YzsVM8eToHUeTP75@casper.infradead.org/

Reported-by: kernel test robot <yujie.liu@intel.com>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 mm/migrate.c | 15 ++++++++++++---
 mm/slab.c    |  6 +++++-
 mm/slub.c    |  6 +++++-
 3 files changed, 22 insertions(+), 5 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 1379e1912772..959c99cff814 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -74,13 +74,22 @@ int isolate_movable_page(struct page *page, isolate_mode_t mode)
 	if (unlikely(!get_page_unless_zero(page)))
 		goto out;
 
+	if (unlikely(PageSlab(page)))
+		goto out_putpage;
+	/* Pairs with smp_wmb() in slab freeing, e.g. SLUB's __free_slab() */
+	smp_rmb();
 	/*
-	 * Check PageMovable before holding a PG_lock because page's owner
-	 * assumes anybody doesn't touch PG_lock of newly allocated page
-	 * so unconditionally grabbing the lock ruins page's owner side.
+	 * Check movable flag before taking the page lock because
+	 * we use non-atomic bitops on newly allocated page flags so
+	 * unconditionally grabbing the lock ruins page's owner side.
 	 */
 	if (unlikely(!__PageMovable(page)))
 		goto out_putpage;
+	/* Pairs with smp_wmb() in slab allocation, e.g. SLUB's alloc_slab_page() */
+	smp_rmb();
+	if (unlikely(PageSlab(page)))
+		goto out_putpage;
+
 	/*
 	 * As movable pages are not isolated from LRU lists, concurrent
 	 * compaction threads can race against page migration functions
diff --git a/mm/slab.c b/mm/slab.c
index 59c8e28f7b6a..219beb48588e 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1370,6 +1370,8 @@ static struct slab *kmem_getpages(struct kmem_cache *cachep, gfp_t flags,
 
 	account_slab(slab, cachep->gfporder, cachep, flags);
 	__folio_set_slab(folio);
+	/* Make the flag visible before any changes to folio->mapping */
+	smp_wmb();
 	/* Record if ALLOC_NO_WATERMARKS was set when allocating the slab */
 	if (sk_memalloc_socks() && page_is_pfmemalloc(folio_page(folio, 0)))
 		slab_set_pfmemalloc(slab);
@@ -1387,9 +1389,11 @@ static void kmem_freepages(struct kmem_cache *cachep, struct slab *slab)
 
 	BUG_ON(!folio_test_slab(folio));
 	__slab_clear_pfmemalloc(slab);
-	__folio_clear_slab(folio);
 	page_mapcount_reset(folio_page(folio, 0));
 	folio->mapping = NULL;
+	/* Make the mapping reset visible before clearing the flag */
+	smp_wmb();
+	__folio_clear_slab(folio);
 
 	if (current->reclaim_state)
 		current->reclaim_state->reclaimed_slab += 1 << order;
diff --git a/mm/slub.c b/mm/slub.c
index 99ba865afc4a..5e6519d5169c 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1800,6 +1800,8 @@ static inline struct slab *alloc_slab_page(gfp_t flags, int node,
 
 	slab = folio_slab(folio);
 	__folio_set_slab(folio);
+	/* Make the flag visible before any changes to folio->mapping */
+	smp_wmb();
 	if (page_is_pfmemalloc(folio_page(folio, 0)))
 		slab_set_pfmemalloc(slab);
 
@@ -2000,8 +2002,10 @@ static void __free_slab(struct kmem_cache *s, struct slab *slab)
 	int pages = 1 << order;
 
 	__slab_clear_pfmemalloc(slab);
-	__folio_clear_slab(folio);
 	folio->mapping = NULL;
+	/* Make the mapping reset visible before clearing the flag */
+	smp_wmb();
+	__folio_clear_slab(folio);
 	if (current->reclaim_state)
 		current->reclaim_state->reclaimed_slab += pages;
 	unaccount_slab(slab, order, s);
-- 
2.38.0



* [PATCH v2 3/3] mm/sl[au]b: rearrange struct slab fields to allow larger rcu_head
  2022-11-07 17:05 [PATCH v2 0/3] rearrange struct slab fields to allow larger rcu_head Vlastimil Babka
  2022-11-07 17:05 ` [PATCH v2 1/3] mm/slub: perform free consistency checks before call_rcu Vlastimil Babka
  2022-11-07 17:05 ` [PATCH v2 2/3] mm/migrate: make isolate_movable_page() skip slab pages Vlastimil Babka
@ 2022-11-07 17:05 ` Vlastimil Babka
  2 siblings, 0 replies; 5+ messages in thread
From: Vlastimil Babka @ 2022-11-07 17:05 UTC (permalink / raw)
  To: Christoph Lameter, David Rientjes, Joonsoo Kim, Pekka Enberg,
	Joel Fernandes
  Cc: Hyeonggon Yoo, Roman Gushchin, Matthew Wilcox, paulmck, rcu,
	linux-mm, linux-kernel, patches, Vlastimil Babka

Joel reports [1] that increasing the rcu_head size for debugging
purposes used to work before struct slab was split from struct page, but
now runs into the various SLAB_MATCH() sanity checks of the layout.

This is because the rcu_head in struct page is in union with large
sub-structures and has space to grow without exceeding their size, while
in struct slab (for SLAB and SLUB) it's in union only with a list_head.

On closer inspection (and after the previous patch) we can put all
fields except slab_cache into a union with rcu_head, as slab_cache is
sufficient for the rcu freeing callbacks to work and the rest can be
overwritten by rcu_head without causing issues.

This is only somewhat complicated by the need to keep SLUB's
freelist+counters aligned for cmpxchg_double. As a result the fields
need to be reordered so that slab_cache comes first (after page flags)
and the union with rcu_head follows. For consistency, do that for SLAB
as well, although it is not necessary there.

As a result, the rcu_head field in struct page and struct slab is no
longer at the same offset, but that doesn't matter as there is no
casting that would rely on that in the slab freeing callbacks, so we can
just drop the respective SLAB_MATCH() check.

Also we need to update the SLAB_MATCH() for compound_head to reflect the
new ordering.

While at it, also add a static_assert to check the alignment needed for
cmpxchg_double so mistakes are found sooner than a runtime GPF.

[1] https://lore.kernel.org/all/85afd876-d8bb-0804-b2c5-48ed3055e702@joelfernandes.org/

Reported-by: Joel Fernandes <joel@joelfernandes.org>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/slab.h | 54 ++++++++++++++++++++++++++++++++----------------------
 1 file changed, 32 insertions(+), 22 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index 0202a8c2f0d2..b373952eef70 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -11,37 +11,43 @@ struct slab {
 
 #if defined(CONFIG_SLAB)
 
+	struct kmem_cache *slab_cache;
 	union {
-		struct list_head slab_list;
+		struct {
+			struct list_head slab_list;
+			void *freelist;	/* array of free object indexes */
+			void *s_mem;	/* first object */
+		};
 		struct rcu_head rcu_head;
 	};
-	struct kmem_cache *slab_cache;
-	void *freelist;	/* array of free object indexes */
-	void *s_mem;	/* first object */
 	unsigned int active;
 
 #elif defined(CONFIG_SLUB)
 
-	union {
-		struct list_head slab_list;
-		struct rcu_head rcu_head;
-#ifdef CONFIG_SLUB_CPU_PARTIAL
-		struct {
-			struct slab *next;
-			int slabs;	/* Nr of slabs left */
-		};
-#endif
-	};
 	struct kmem_cache *slab_cache;
-	/* Double-word boundary */
-	void *freelist;		/* first free object */
 	union {
-		unsigned long counters;
 		struct {
-			unsigned inuse:16;
-			unsigned objects:15;
-			unsigned frozen:1;
+			union {
+				struct list_head slab_list;
+#ifdef CONFIG_SLUB_CPU_PARTIAL
+				struct {
+					struct slab *next;
+					int slabs;	/* Nr of slabs left */
+				};
+#endif
+			};
+			/* Double-word boundary */
+			void *freelist;		/* first free object */
+			union {
+				unsigned long counters;
+				struct {
+					unsigned inuse:16;
+					unsigned objects:15;
+					unsigned frozen:1;
+				};
+			};
 		};
+		struct rcu_head rcu_head;
 	};
 	unsigned int __unused;
 
@@ -66,9 +72,10 @@ struct slab {
 #define SLAB_MATCH(pg, sl)						\
 	static_assert(offsetof(struct page, pg) == offsetof(struct slab, sl))
 SLAB_MATCH(flags, __page_flags);
-SLAB_MATCH(compound_head, slab_list);	/* Ensure bit 0 is clear */
 #ifndef CONFIG_SLOB
-SLAB_MATCH(rcu_head, rcu_head);
+SLAB_MATCH(compound_head, slab_cache);	/* Ensure bit 0 is clear */
+#else
+SLAB_MATCH(compound_head, slab_list);	/* Ensure bit 0 is clear */
 #endif
 SLAB_MATCH(_refcount, __page_refcount);
 #ifdef CONFIG_MEMCG
@@ -76,6 +83,9 @@ SLAB_MATCH(memcg_data, memcg_data);
 #endif
 #undef SLAB_MATCH
 static_assert(sizeof(struct slab) <= sizeof(struct page));
+#if defined(CONFIG_HAVE_CMPXCHG_DOUBLE) && defined(CONFIG_SLUB)
+static_assert(IS_ALIGNED(offsetof(struct slab, freelist), 2*sizeof(void *)));
+#endif
 
 /**
  * folio_slab - Converts from folio to slab.
-- 
2.38.0



* Re: [PATCH v2 2/3] mm/migrate: make isolate_movable_page() skip slab pages
  2022-11-07 17:05 ` [PATCH v2 2/3] mm/migrate: make isolate_movable_page() skip slab pages Vlastimil Babka
@ 2022-11-10 13:48   ` Hyeonggon Yoo
  0 siblings, 0 replies; 5+ messages in thread
From: Hyeonggon Yoo @ 2022-11-10 13:48 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Christoph Lameter, David Rientjes, Joonsoo Kim, Pekka Enberg,
	Joel Fernandes, Roman Gushchin, Matthew Wilcox, paulmck, rcu,
	linux-mm, linux-kernel, patches, kernel test robot

On Mon, Nov 07, 2022 at 06:05:53PM +0100, Vlastimil Babka wrote:
> [full patch quoted]

This looks correct to me.

Acked-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

Just noting to myself to avoid confusion in the future:

- When one sees PageSlab() == false, a subsequent __PageMovable() == true
  cannot be a false positive from a slab page, because the ->mapping reset
  is made visible before PG_slab is cleared.

- When one sees __PageMovable() == true for a slab page, PageSlab() must
  also be seen as true, because setting PG_slab in slab allocation is made
  visible before the writes to the ->mapping field.

I hope it's nicely reshaped after Matthew's frozen refcount series.

-- 
Thanks,
Hyeonggon

