* [PATCH] mm/slub, kunit: Make slub_kunit pass even when SLAB_RED_ZONE flag is set
@ 2022-03-16 14:38 Hyeonggon Yoo
  2022-03-16 22:11 ` Vlastimil Babka
  0 siblings, 1 reply; 8+ messages in thread
From: Hyeonggon Yoo @ 2022-03-16 14:38 UTC (permalink / raw)
  To: linux-mm
  Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Vlastimil Babka, Roman Gushchin, 42.hyeyoo,
	Oliver Glitta

Testcase test_next_pointer in slub_kunit fails when the SLAB_RED_ZONE
flag is set globally. This is because on_freelist() cuts the corrupted
freelist chain but does not update the cut objects' redzones to
SLUB_RED_ACTIVE.

When the test then validates a slab whose freelist has been cut, it
expects the redzone of objects unreachable via the freelist to be set
to SLUB_RED_ACTIVE, and reports a "Left Redzone overwritten" error
because that expectation fails.
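
Roughly, the check that fires is the redzone check that validation applies
to every object it considers allocated. A simplified sketch (paraphrased
from check_object(), not the exact mm/slub.c code; 'val' is SLUB_RED_ACTIVE
for objects that are not found on the freelist):

	if (s->flags & SLAB_RED_ZONE) {
		/* the orphaned objects still hold SLUB_RED_INACTIVE here */
		if (!check_bytes_and_report(s, slab, p, "Left Redzone",
					    p - s->red_left_pad, val,
					    s->red_left_pad))
			return 0;	/* printed as "Left Redzone overwritten" */
	}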

This patch makes slub_kunit expect two more errors, for reporting and
fixing the overwritten redzone, when the SLAB_RED_ZONE flag is set.

The test passes with both slub_debug and slub_debug=Z after this patch.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 lib/slub_kunit.c | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/lib/slub_kunit.c b/lib/slub_kunit.c
index 8662dc6cb509..7cf1fb5a7fde 100644
--- a/lib/slub_kunit.c
+++ b/lib/slub_kunit.c
@@ -45,21 +45,36 @@ static void test_next_pointer(struct kunit *test)
 	 * Expecting three errors.
 	 * One for the corrupted freechain and the other one for the wrong
 	 * count of objects in use. The third error is fixing broken cache.
+	 *
+	 * When the SLAB_RED_ZONE flag is set, we expect two more errors for
+	 * reporting and fixing the overwritten redzone. These two errors are
+	 * detected because SLUB cuts the corrupted freelist in on_freelist(),
+	 * but does not update the cut objects' redzones to SLUB_RED_ACTIVE.
 	 */
 	validate_slab_cache(s);
-	KUNIT_EXPECT_EQ(test, 3, slab_errors);
+
+	if (s->flags & SLAB_RED_ZONE)
+		KUNIT_EXPECT_EQ(test, 5, slab_errors);
+	else
+		KUNIT_EXPECT_EQ(test, 3, slab_errors);
 
 	/*
 	 * Try to repair corrupted freepointer.
 	 * Still expecting two errors. The first for the wrong count
 	 * of objects in use.
 	 * The second error is for fixing broken cache.
+	 *
+	 * When the SLAB_RED_ZONE flag is set, we expect two more errors
+	 * for the same reason as above.
 	 */
 	*ptr_addr = tmp;
 	slab_errors = 0;
 
 	validate_slab_cache(s);
-	KUNIT_EXPECT_EQ(test, 2, slab_errors);
+	if (s->flags & SLAB_RED_ZONE)
+		KUNIT_EXPECT_EQ(test, 4, slab_errors);
+	else
+		KUNIT_EXPECT_EQ(test, 2, slab_errors);
 
 	/*
 	 * Previous validation repaired the count of objects in use.
-- 
2.33.1




* Re: [PATCH] mm/slub, kunit: Make slub_kunit pass even when SLAB_RED_ZONE flag is set
  2022-03-16 14:38 [PATCH] mm/slub, kunit: Make slub_kunit pass even when SLAB_RED_ZONE flag is set Hyeonggon Yoo
@ 2022-03-16 22:11 ` Vlastimil Babka
  2022-03-17  7:06   ` Hyeonggon Yoo
  2022-03-17  8:10   ` [PATCH] mm/slub, kunit: Make slub_kunit unaffected by global slub debugging flags Hyeonggon Yoo
  0 siblings, 2 replies; 8+ messages in thread
From: Vlastimil Babka @ 2022-03-16 22:11 UTC (permalink / raw)
  To: Hyeonggon Yoo, linux-mm
  Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Roman Gushchin, Oliver Glitta

On 3/16/22 15:38, Hyeonggon Yoo wrote:
> Testcase test_next_pointer in slub_kunit fails when the SLAB_RED_ZONE
> flag is set globally. This is because on_freelist() cuts the corrupted
> freelist chain but does not update the cut objects' redzones to
> SLUB_RED_ACTIVE.
> 
> When the test then validates a slab whose freelist has been cut, it
> expects the redzone of objects unreachable via the freelist to be set
> to SLUB_RED_ACTIVE, and reports a "Left Redzone overwritten" error
> because that expectation fails.
> 
> This patch makes slub_kunit expect two more errors, for reporting and
> fixing the overwritten redzone, when the SLAB_RED_ZONE flag is set.
> 
> The test passes with both slub_debug and slub_debug=Z after this patch.

Hmm, I think it's not an optimal strategy for unit tests to adapt like this
to external influence. It seems rather fragile. The test cases should be
designed to test a specific condition and that's it. So maybe we could e.g.
introduce a new SLAB_ flag, passed to kmem_cache_create(), that tells it to
ignore any globally specified slub_debug flags?
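
A rough sketch of what such a flag could look like (hypothetical name and
bit value, just to illustrate the idea):

	/* hypothetical flag; bit value illustrative only */
	#define SLAB_NO_SLUB_DEBUG	((slab_flags_t __force)0x10000000U)

	/* then, early in kmem_cache_flags(): */
	if (flags & SLAB_NO_SLUB_DEBUG)
		return flags;	/* ignore slub_debug entirely for this cache */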

> Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> ---
>  lib/slub_kunit.c | 19 +++++++++++++++++--
>  1 file changed, 17 insertions(+), 2 deletions(-)
> 
> diff --git a/lib/slub_kunit.c b/lib/slub_kunit.c
> index 8662dc6cb509..7cf1fb5a7fde 100644
> --- a/lib/slub_kunit.c
> +++ b/lib/slub_kunit.c
> @@ -45,21 +45,36 @@ static void test_next_pointer(struct kunit *test)
>  	 * Expecting three errors.
>  	 * One for the corrupted freechain and the other one for the wrong
>  	 * count of objects in use. The third error is fixing broken cache.
> +	 *
> +	 * When the SLAB_RED_ZONE flag is set, we expect two more errors for
> +	 * reporting and fixing the overwritten redzone. These two errors are
> +	 * detected because SLUB cuts the corrupted freelist in on_freelist(),
> +	 * but does not update the cut objects' redzones to SLUB_RED_ACTIVE.
>  	 */
>  	validate_slab_cache(s);
> -	KUNIT_EXPECT_EQ(test, 3, slab_errors);
> +
> +	if (s->flags & SLAB_RED_ZONE)
> +		KUNIT_EXPECT_EQ(test, 5, slab_errors);
> +	else
> +		KUNIT_EXPECT_EQ(test, 3, slab_errors);
>  
>  	/*
>  	 * Try to repair corrupted freepointer.
>  	 * Still expecting two errors. The first for the wrong count
>  	 * of objects in use.
>  	 * The second error is for fixing broken cache.
> +	 *
> +	 * When the SLAB_RED_ZONE flag is set, we expect two more errors
> +	 * for the same reason as above.
>  	 */
>  	*ptr_addr = tmp;
>  	slab_errors = 0;
>  
>  	validate_slab_cache(s);
> -	KUNIT_EXPECT_EQ(test, 2, slab_errors);
> +	if (s->flags & SLAB_RED_ZONE)
> +		KUNIT_EXPECT_EQ(test, 4, slab_errors);
> +	else
> +		KUNIT_EXPECT_EQ(test, 2, slab_errors);
>  
>  	/*
>  	 * Previous validation repaired the count of objects in use.




* Re: [PATCH] mm/slub, kunit: Make slub_kunit pass even when SLAB_RED_ZONE flag is set
  2022-03-16 22:11 ` Vlastimil Babka
@ 2022-03-17  7:06   ` Hyeonggon Yoo
  2022-03-17  8:10   ` [PATCH] mm/slub, kunit: Make slub_kunit unaffected by global slub debugging flags Hyeonggon Yoo
  1 sibling, 0 replies; 8+ messages in thread
From: Hyeonggon Yoo @ 2022-03-17  7:06 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: linux-mm, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, Roman Gushchin, Oliver Glitta

On Wed, Mar 16, 2022 at 11:11:08PM +0100, Vlastimil Babka wrote:
> On 3/16/22 15:38, Hyeonggon Yoo wrote:
> > Testcase test_next_pointer in slub_kunit fails when the SLAB_RED_ZONE
> > flag is set globally. This is because on_freelist() cuts the corrupted
> > freelist chain but does not update the cut objects' redzones to
> > SLUB_RED_ACTIVE.
> > 
> > When the test then validates a slab whose freelist has been cut, it
> > expects the redzone of objects unreachable via the freelist to be set
> > to SLUB_RED_ACTIVE, and reports a "Left Redzone overwritten" error
> > because that expectation fails.
> > 
> > This patch makes slub_kunit expect two more errors, for reporting and
> > fixing the overwritten redzone, when the SLAB_RED_ZONE flag is set.
> > 
> > The test passes with both slub_debug and slub_debug=Z after this patch.
> 
> Hmm, I think it's not an optimal strategy for unit tests to adapt like this
> to external influence. It seems rather fragile. The test cases should be
> designed to test a specific condition and that's it. So maybe we could e.g.
> introduce a new SLAB_ flag, passed to kmem_cache_create(), that tells it to
> ignore any globally specified slub_debug flags?

Agreed. It's too easy to break. I think your suggestion is a good fit
for unit tests. I'll send a patch.

BTW, the situation is a bit ugly: by cutting the freelist chain, SLUB
itself makes the objects invalid (their redzones no longer match what
validation expects), and then it reports errors that it caused.
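
For reference, roughly what happens in on_freelist() when the corrupted
pointer is found (a simplified sketch, not the exact upstream code):

	if (!check_valid_pointer(s, slab, fp)) {
		object_err(s, slab, object, "Freechain corrupt");
		set_freepointer(s, object, NULL);	/* cut the chain here */
		/*
		 * Nothing re-initializes the orphaned objects, so their
		 * redzones stay SLUB_RED_INACTIVE and a later redzone
		 * check reports them as overwritten.
		 */
		break;
	}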

No simple solution comes to mind, but yeah... it's not a big problem
since validate_slab_cache() is rarely called.

> > Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> > ---
> >  lib/slub_kunit.c | 19 +++++++++++++++++--
> >  1 file changed, 17 insertions(+), 2 deletions(-)
> > 
> > diff --git a/lib/slub_kunit.c b/lib/slub_kunit.c
> > index 8662dc6cb509..7cf1fb5a7fde 100644
> > --- a/lib/slub_kunit.c
> > +++ b/lib/slub_kunit.c
> > @@ -45,21 +45,36 @@ static void test_next_pointer(struct kunit *test)
> >  	 * Expecting three errors.
> >  	 * One for the corrupted freechain and the other one for the wrong
> >  	 * count of objects in use. The third error is fixing broken cache.
> > +	 *
> > +	 * When the SLAB_RED_ZONE flag is set, we expect two more errors for
> > +	 * reporting and fixing the overwritten redzone. These two errors are
> > +	 * detected because SLUB cuts the corrupted freelist in on_freelist(),
> > +	 * but does not update the cut objects' redzones to SLUB_RED_ACTIVE.
> >  	 */
> >  	validate_slab_cache(s);
> > -	KUNIT_EXPECT_EQ(test, 3, slab_errors);
> > +
> > +	if (s->flags & SLAB_RED_ZONE)
> > +		KUNIT_EXPECT_EQ(test, 5, slab_errors);
> > +	else
> > +		KUNIT_EXPECT_EQ(test, 3, slab_errors);
> >  
> >  	/*
> >  	 * Try to repair corrupted freepointer.
> >  	 * Still expecting two errors. The first for the wrong count
> >  	 * of objects in use.
> >  	 * The second error is for fixing broken cache.
> > +	 *
> > +	 * When the SLAB_RED_ZONE flag is set, we expect two more errors
> > +	 * for the same reason as above.
> >  	 */
> >  	*ptr_addr = tmp;
> >  	slab_errors = 0;
> >  
> >  	validate_slab_cache(s);
> > -	KUNIT_EXPECT_EQ(test, 2, slab_errors);
> > +	if (s->flags & SLAB_RED_ZONE)
> > +		KUNIT_EXPECT_EQ(test, 4, slab_errors);
> > +	else
> > +		KUNIT_EXPECT_EQ(test, 2, slab_errors);
> >  
> >  	/*
> >  	 * Previous validation repaired the count of objects in use.
> 

-- 
Thank you, You are awesome!
Hyeonggon :-)



* [PATCH] mm/slub, kunit: Make slub_kunit unaffected by global slub debugging flags
  2022-03-16 22:11 ` Vlastimil Babka
  2022-03-17  7:06   ` Hyeonggon Yoo
@ 2022-03-17  8:10   ` Hyeonggon Yoo
  2022-04-05 10:58     ` Vlastimil Babka
  1 sibling, 1 reply; 8+ messages in thread
From: Hyeonggon Yoo @ 2022-03-17  8:10 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: linux-mm, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, Roman Gushchin, Oliver Glitta,
	Hyeonggon Yoo

slub_kunit does not expect other debugging flags to be set when running
its tests. When the SLAB_RED_ZONE flag is set globally, the test fails
because the flag affects the number of errors reported.

To make slub_kunit unaffected by global slub debugging flags, introduce
SLAB_NO_GLOBAL_FLAGS to ignore them. Debugging flags can still be
specified per cache by naming the cache(s) in the slub_debug parameter.

Suggested-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 include/linux/slab.h |  3 +++
 lib/slub_kunit.c     | 10 +++++-----
 mm/slab.h            |  5 +++--
 mm/slub.c            |  3 +++
 4 files changed, 14 insertions(+), 7 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 0381868e5118..11fe2c28422d 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -112,6 +112,9 @@
 #define SLAB_KASAN		0
 #endif
 
+/* Ignore globally specified debugging flags */
+#define SLAB_NO_GLOBAL_FLAGS	((slab_flags_t __force)0x10000000U)
+
 /* The following flags affect the page allocator grouping pages by mobility */
 /* Objects are reclaimable */
 #define SLAB_RECLAIM_ACCOUNT	((slab_flags_t __force)0x00020000U)
diff --git a/lib/slub_kunit.c b/lib/slub_kunit.c
index 8662dc6cb509..acf061dc558d 100644
--- a/lib/slub_kunit.c
+++ b/lib/slub_kunit.c
@@ -12,7 +12,7 @@ static int slab_errors;
 static void test_clobber_zone(struct kunit *test)
 {
 	struct kmem_cache *s = kmem_cache_create("TestSlub_RZ_alloc", 64, 0,
-				SLAB_RED_ZONE, NULL);
+				SLAB_RED_ZONE|SLAB_NO_GLOBAL_FLAGS, NULL);
 	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
 
 	kasan_disable_current();
@@ -30,7 +30,7 @@ static void test_clobber_zone(struct kunit *test)
 static void test_next_pointer(struct kunit *test)
 {
 	struct kmem_cache *s = kmem_cache_create("TestSlub_next_ptr_free", 64, 0,
-				SLAB_POISON, NULL);
+				SLAB_POISON|SLAB_NO_GLOBAL_FLAGS, NULL);
 	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
 	unsigned long tmp;
 	unsigned long *ptr_addr;
@@ -75,7 +75,7 @@ static void test_next_pointer(struct kunit *test)
 static void test_first_word(struct kunit *test)
 {
 	struct kmem_cache *s = kmem_cache_create("TestSlub_1th_word_free", 64, 0,
-				SLAB_POISON, NULL);
+				SLAB_POISON|SLAB_NO_GLOBAL_FLAGS, NULL);
 	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
 
 	kmem_cache_free(s, p);
@@ -90,7 +90,7 @@ static void test_first_word(struct kunit *test)
 static void test_clobber_50th_byte(struct kunit *test)
 {
 	struct kmem_cache *s = kmem_cache_create("TestSlub_50th_word_free", 64, 0,
-				SLAB_POISON, NULL);
+				SLAB_POISON|SLAB_NO_GLOBAL_FLAGS, NULL);
 	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
 
 	kmem_cache_free(s, p);
@@ -106,7 +106,7 @@ static void test_clobber_50th_byte(struct kunit *test)
 static void test_clobber_redzone_free(struct kunit *test)
 {
 	struct kmem_cache *s = kmem_cache_create("TestSlub_RZ_free", 64, 0,
-				SLAB_RED_ZONE, NULL);
+				SLAB_RED_ZONE|SLAB_NO_GLOBAL_FLAGS, NULL);
 	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
 
 	kasan_disable_current();
diff --git a/mm/slab.h b/mm/slab.h
index c7f2abc2b154..69946131208a 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -330,7 +330,7 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
 			  SLAB_ACCOUNT)
 #elif defined(CONFIG_SLUB)
 #define SLAB_CACHE_FLAGS (SLAB_NOLEAKTRACE | SLAB_RECLAIM_ACCOUNT | \
-			  SLAB_TEMPORARY | SLAB_ACCOUNT)
+			  SLAB_TEMPORARY | SLAB_ACCOUNT | SLAB_NO_GLOBAL_FLAGS)
 #else
 #define SLAB_CACHE_FLAGS (SLAB_NOLEAKTRACE)
 #endif
@@ -349,7 +349,8 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
 			      SLAB_NOLEAKTRACE | \
 			      SLAB_RECLAIM_ACCOUNT | \
 			      SLAB_TEMPORARY | \
-			      SLAB_ACCOUNT)
+			      SLAB_ACCOUNT | \
+			      SLAB_NO_GLOBAL_FLAGS)
 
 bool __kmem_cache_empty(struct kmem_cache *);
 int __kmem_cache_shutdown(struct kmem_cache *);
diff --git a/mm/slub.c b/mm/slub.c
index 71e8663f6037..2a3cffd7b27f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1592,6 +1592,9 @@ slab_flags_t kmem_cache_flags(unsigned int object_size,
 	if (flags & SLAB_NOLEAKTRACE)
 		slub_debug_local &= ~SLAB_STORE_USER;
 
+	if (flags & SLAB_NO_GLOBAL_FLAGS)
+		slub_debug_local = 0;
+
 	len = strlen(name);
 	next_block = slub_debug_string;
 	/* Go through all blocks of debug options, see if any matches our slab's name */
-- 
2.33.1



* Re: [PATCH] mm/slub, kunit: Make slub_kunit unaffected by global slub debugging flags
  2022-03-17  8:10   ` [PATCH] mm/slub, kunit: Make slub_kunit unaffected by global slub debugging flags Hyeonggon Yoo
@ 2022-04-05 10:58     ` Vlastimil Babka
  2022-04-06  6:00       ` [PATCH v2] mm/slub, kunit: Make slub_kunit unaffected by user specified flags Hyeonggon Yoo
  2022-04-06  6:06       ` [PATCH] mm/slub, kunit: Make slub_kunit unaffected by global slub debugging flags Hyeonggon Yoo
  0 siblings, 2 replies; 8+ messages in thread
From: Vlastimil Babka @ 2022-04-05 10:58 UTC (permalink / raw)
  To: Hyeonggon Yoo
  Cc: linux-mm, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, Roman Gushchin, Oliver Glitta

On 3/17/22 09:10, Hyeonggon Yoo wrote:
> slub_kunit does not expect other debugging flags to be set when running
> its tests. When the SLAB_RED_ZONE flag is set globally, the test fails
> because the flag affects the number of errors reported.
> 
> To make slub_kunit unaffected by global slub debugging flags, introduce
> SLAB_NO_GLOBAL_FLAGS to ignore them. Debugging flags can still be
> specified per cache by naming the cache(s) in the slub_debug parameter.

Given how we support globbing, I think it would be safest to just ignore
everything that comes from the slub_debug parameter when the
SLAB_NO_GLOBAL_FLAGS flag is specified, even if it involves a (partial)
cache name match.
Maybe name it SLAB_NO_USER_FLAGS then?

> Suggested-by: Vlastimil Babka <vbabka@suse.cz>
> Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> ---
>  include/linux/slab.h |  3 +++
>  lib/slub_kunit.c     | 10 +++++-----
>  mm/slab.h            |  5 +++--
>  mm/slub.c            |  3 +++
>  4 files changed, 14 insertions(+), 7 deletions(-)
> 
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index 0381868e5118..11fe2c28422d 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -112,6 +112,9 @@
>  #define SLAB_KASAN		0
>  #endif
>  
> +/* Ignore globally specified debugging flags */

I'd add that this is intended for caches created for self-tests so they
always have flags as specified in the code.

> +#define SLAB_NO_GLOBAL_FLAGS	((slab_flags_t __force)0x10000000U)
> +
>  /* The following flags affect the page allocator grouping pages by mobility */
>  /* Objects are reclaimable */
>  #define SLAB_RECLAIM_ACCOUNT	((slab_flags_t __force)0x00020000U)
> diff --git a/lib/slub_kunit.c b/lib/slub_kunit.c
> index 8662dc6cb509..acf061dc558d 100644
> --- a/lib/slub_kunit.c
> +++ b/lib/slub_kunit.c
> @@ -12,7 +12,7 @@ static int slab_errors;
>  static void test_clobber_zone(struct kunit *test)
>  {
>  	struct kmem_cache *s = kmem_cache_create("TestSlub_RZ_alloc", 64, 0,
> -				SLAB_RED_ZONE, NULL);
> +				SLAB_RED_ZONE|SLAB_NO_GLOBAL_FLAGS, NULL);
>  	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
>  
>  	kasan_disable_current();
> @@ -30,7 +30,7 @@ static void test_clobber_zone(struct kunit *test)
>  static void test_next_pointer(struct kunit *test)
>  {
>  	struct kmem_cache *s = kmem_cache_create("TestSlub_next_ptr_free", 64, 0,
> -				SLAB_POISON, NULL);
> +				SLAB_POISON|SLAB_NO_GLOBAL_FLAGS, NULL);
>  	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
>  	unsigned long tmp;
>  	unsigned long *ptr_addr;
> @@ -75,7 +75,7 @@ static void test_next_pointer(struct kunit *test)
>  static void test_first_word(struct kunit *test)
>  {
>  	struct kmem_cache *s = kmem_cache_create("TestSlub_1th_word_free", 64, 0,
> -				SLAB_POISON, NULL);
> +				SLAB_POISON|SLAB_NO_GLOBAL_FLAGS, NULL);
>  	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
>  
>  	kmem_cache_free(s, p);
> @@ -90,7 +90,7 @@ static void test_first_word(struct kunit *test)
>  static void test_clobber_50th_byte(struct kunit *test)
>  {
>  	struct kmem_cache *s = kmem_cache_create("TestSlub_50th_word_free", 64, 0,
> -				SLAB_POISON, NULL);
> +				SLAB_POISON|SLAB_NO_GLOBAL_FLAGS, NULL);
>  	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
>  
>  	kmem_cache_free(s, p);
> @@ -106,7 +106,7 @@ static void test_clobber_50th_byte(struct kunit *test)
>  static void test_clobber_redzone_free(struct kunit *test)
>  {
>  	struct kmem_cache *s = kmem_cache_create("TestSlub_RZ_free", 64, 0,
> -				SLAB_RED_ZONE, NULL);
> +				SLAB_RED_ZONE|SLAB_NO_GLOBAL_FLAGS, NULL);
>  	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
>  
>  	kasan_disable_current();
> diff --git a/mm/slab.h b/mm/slab.h
> index c7f2abc2b154..69946131208a 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -330,7 +330,7 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
>  			  SLAB_ACCOUNT)
>  #elif defined(CONFIG_SLUB)
>  #define SLAB_CACHE_FLAGS (SLAB_NOLEAKTRACE | SLAB_RECLAIM_ACCOUNT | \
> -			  SLAB_TEMPORARY | SLAB_ACCOUNT)
> +			  SLAB_TEMPORARY | SLAB_ACCOUNT | SLAB_NO_GLOBAL_FLAGS)
>  #else
>  #define SLAB_CACHE_FLAGS (SLAB_NOLEAKTRACE)
>  #endif
> @@ -349,7 +349,8 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
>  			      SLAB_NOLEAKTRACE | \
>  			      SLAB_RECLAIM_ACCOUNT | \
>  			      SLAB_TEMPORARY | \
> -			      SLAB_ACCOUNT)
> +			      SLAB_ACCOUNT | \
> +			      SLAB_NO_GLOBAL_FLAGS)
>  
>  bool __kmem_cache_empty(struct kmem_cache *);
>  int __kmem_cache_shutdown(struct kmem_cache *);
> diff --git a/mm/slub.c b/mm/slub.c
> index 71e8663f6037..2a3cffd7b27f 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1592,6 +1592,9 @@ slab_flags_t kmem_cache_flags(unsigned int object_size,
>  	if (flags & SLAB_NOLEAKTRACE)
>  		slub_debug_local &= ~SLAB_STORE_USER;
>  
> +	if (flags & SLAB_NO_GLOBAL_FLAGS)
> +		slub_debug_local = 0;
> +
>  	len = strlen(name);
>  	next_block = slub_debug_string;
>  	/* Go through all blocks of debug options, see if any matches our slab's name */




* [PATCH v2] mm/slub, kunit: Make slub_kunit unaffected by user specified flags
  2022-04-05 10:58     ` Vlastimil Babka
@ 2022-04-06  6:00       ` Hyeonggon Yoo
  2022-04-06  8:17         ` Vlastimil Babka
  2022-04-06  6:06       ` [PATCH] mm/slub, kunit: Make slub_kunit unaffected by global slub debugging flags Hyeonggon Yoo
  1 sibling, 1 reply; 8+ messages in thread
From: Hyeonggon Yoo @ 2022-04-06  6:00 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: linux-mm, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, Roman Gushchin, Oliver Glitta

slub_kunit does not expect other debugging flags to be set when running
its tests. When the SLAB_RED_ZONE flag is set globally, the test fails
because the flag affects the number of errors reported.

To make slub_kunit unaffected by user-specified debugging flags,
introduce SLAB_NO_USER_FLAGS to ignore them. With this flag, only the
flags specified in the code are used and any others are ignored.

Suggested-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 include/linux/slab.h |  7 +++++++
 lib/slub_kunit.c     | 10 +++++-----
 mm/slab.h            |  5 +++--
 mm/slub.c            |  3 +++
 4 files changed, 18 insertions(+), 7 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 373b3ef99f4e..11ceddcae9f4 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -112,6 +112,13 @@
 #define SLAB_KASAN		0
 #endif
 
+/*
+ * Ignore user specified debugging flags.
+ * Intended for caches created for self-tests so they have only flags
+ * specified in the code and other flags are ignored.
+ */
+#define SLAB_NO_USER_FLAGS	((slab_flags_t __force)0x10000000U)
+
 /* The following flags affect the page allocator grouping pages by mobility */
 /* Objects are reclaimable */
 #define SLAB_RECLAIM_ACCOUNT	((slab_flags_t __force)0x00020000U)
diff --git a/lib/slub_kunit.c b/lib/slub_kunit.c
index 8662dc6cb509..7a0564d7cb7a 100644
--- a/lib/slub_kunit.c
+++ b/lib/slub_kunit.c
@@ -12,7 +12,7 @@ static int slab_errors;
 static void test_clobber_zone(struct kunit *test)
 {
 	struct kmem_cache *s = kmem_cache_create("TestSlub_RZ_alloc", 64, 0,
-				SLAB_RED_ZONE, NULL);
+				SLAB_RED_ZONE|SLAB_NO_USER_FLAGS, NULL);
 	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
 
 	kasan_disable_current();
@@ -30,7 +30,7 @@ static void test_clobber_zone(struct kunit *test)
 static void test_next_pointer(struct kunit *test)
 {
 	struct kmem_cache *s = kmem_cache_create("TestSlub_next_ptr_free", 64, 0,
-				SLAB_POISON, NULL);
+				SLAB_POISON|SLAB_NO_USER_FLAGS, NULL);
 	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
 	unsigned long tmp;
 	unsigned long *ptr_addr;
@@ -75,7 +75,7 @@ static void test_next_pointer(struct kunit *test)
 static void test_first_word(struct kunit *test)
 {
 	struct kmem_cache *s = kmem_cache_create("TestSlub_1th_word_free", 64, 0,
-				SLAB_POISON, NULL);
+				SLAB_POISON|SLAB_NO_USER_FLAGS, NULL);
 	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
 
 	kmem_cache_free(s, p);
@@ -90,7 +90,7 @@ static void test_first_word(struct kunit *test)
 static void test_clobber_50th_byte(struct kunit *test)
 {
 	struct kmem_cache *s = kmem_cache_create("TestSlub_50th_word_free", 64, 0,
-				SLAB_POISON, NULL);
+				SLAB_POISON|SLAB_NO_USER_FLAGS, NULL);
 	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
 
 	kmem_cache_free(s, p);
@@ -106,7 +106,7 @@ static void test_clobber_50th_byte(struct kunit *test)
 static void test_clobber_redzone_free(struct kunit *test)
 {
 	struct kmem_cache *s = kmem_cache_create("TestSlub_RZ_free", 64, 0,
-				SLAB_RED_ZONE, NULL);
+				SLAB_RED_ZONE|SLAB_NO_USER_FLAGS, NULL);
 	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
 
 	kasan_disable_current();
diff --git a/mm/slab.h b/mm/slab.h
index fd7ae2024897..f7d018100994 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -331,7 +331,7 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
 			  SLAB_ACCOUNT)
 #elif defined(CONFIG_SLUB)
 #define SLAB_CACHE_FLAGS (SLAB_NOLEAKTRACE | SLAB_RECLAIM_ACCOUNT | \
-			  SLAB_TEMPORARY | SLAB_ACCOUNT)
+			  SLAB_TEMPORARY | SLAB_ACCOUNT | SLAB_NO_USER_FLAGS)
 #else
 #define SLAB_CACHE_FLAGS (SLAB_NOLEAKTRACE)
 #endif
@@ -350,7 +350,8 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
 			      SLAB_NOLEAKTRACE | \
 			      SLAB_RECLAIM_ACCOUNT | \
 			      SLAB_TEMPORARY | \
-			      SLAB_ACCOUNT)
+			      SLAB_ACCOUNT | \
+			      SLAB_NO_USER_FLAGS)
 
 bool __kmem_cache_empty(struct kmem_cache *);
 int __kmem_cache_shutdown(struct kmem_cache *);
diff --git a/mm/slub.c b/mm/slub.c
index 74d92aa4a3a2..4c78f5919356 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1584,6 +1584,9 @@ slab_flags_t kmem_cache_flags(unsigned int object_size,
 	slab_flags_t block_flags;
 	slab_flags_t slub_debug_local = slub_debug;
 
+	if (flags & SLAB_NO_USER_FLAGS)
+		return flags;
+
 	/*
 	 * If the slab cache is for debugging (e.g. kmemleak) then
 	 * don't store user (stack trace) information by default,
-- 
2.32.0




* Re: [PATCH] mm/slub, kunit: Make slub_kunit unaffected by global slub debugging flags
  2022-04-05 10:58     ` Vlastimil Babka
  2022-04-06  6:00       ` [PATCH v2] mm/slub, kunit: Make slub_kunit unaffected by user specified flags Hyeonggon Yoo
@ 2022-04-06  6:06       ` Hyeonggon Yoo
  1 sibling, 0 replies; 8+ messages in thread
From: Hyeonggon Yoo @ 2022-04-06  6:06 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: linux-mm, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, Roman Gushchin, Oliver Glitta

On Tue, Apr 05, 2022 at 12:58:53PM +0200, Vlastimil Babka wrote:
> On 3/17/22 09:10, Hyeonggon Yoo wrote:
> > slub_kunit does not expect other debugging flags to be set when running
> > its tests. When the SLAB_RED_ZONE flag is set globally, the test fails
> > because the flag affects the number of errors reported.
> > 
> > To make slub_kunit unaffected by global slub debugging flags, introduce
> > SLAB_NO_GLOBAL_FLAGS to ignore them. Debugging flags can still be
> > specified per cache by naming the cache(s) in the slub_debug parameter.
> 
> Given how we support globbing, I think it would be safest to just ignore
> everything that comes from the slub_debug parameter when the
> SLAB_NO_GLOBAL_FLAGS flag is specified, even if it involves a (partial)
> cache name match.
> Maybe name it SLAB_NO_USER_FLAGS then?
> 

Seems reasonable. Letting users specify debugging flags for the self-test
caches is not useful in any case.

Let's make the self-test caches completely isolated from user-specified
flags.

I've sent v2 with your comments in mind.

> > Suggested-by: Vlastimil Babka <vbabka@suse.cz>
> > Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> > ---
> >  include/linux/slab.h |  3 +++
> >  lib/slub_kunit.c     | 10 +++++-----
> >  mm/slab.h            |  5 +++--
> >  mm/slub.c            |  3 +++
> >  4 files changed, 14 insertions(+), 7 deletions(-)
> > 
> > diff --git a/include/linux/slab.h b/include/linux/slab.h
> > index 0381868e5118..11fe2c28422d 100644
> > --- a/include/linux/slab.h
> > +++ b/include/linux/slab.h
> > @@ -112,6 +112,9 @@
> >  #define SLAB_KASAN		0
> >  #endif
> >  
> > +/* Ignore globally specified debugging flags */
> 
> I'd add that this is intended for caches created for self-tests so they
> always have flags as specified in the code.
>

Did it in v2.

Thanks!
Hyeonggon

> > +#define SLAB_NO_GLOBAL_FLAGS	((slab_flags_t __force)0x10000000U)
> > +
> >  /* The following flags affect the page allocator grouping pages by mobility */
> >  /* Objects are reclaimable */
> >  #define SLAB_RECLAIM_ACCOUNT	((slab_flags_t __force)0x00020000U)
> > diff --git a/lib/slub_kunit.c b/lib/slub_kunit.c
> > index 8662dc6cb509..acf061dc558d 100644
> > --- a/lib/slub_kunit.c
> > +++ b/lib/slub_kunit.c
> > @@ -12,7 +12,7 @@ static int slab_errors;
> >  static void test_clobber_zone(struct kunit *test)
> >  {
> >  	struct kmem_cache *s = kmem_cache_create("TestSlub_RZ_alloc", 64, 0,
> > -				SLAB_RED_ZONE, NULL);
> > +				SLAB_RED_ZONE|SLAB_NO_GLOBAL_FLAGS, NULL);
> >  	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
> >  
> >  	kasan_disable_current();
> > @@ -30,7 +30,7 @@ static void test_clobber_zone(struct kunit *test)
> >  static void test_next_pointer(struct kunit *test)
> >  {
> >  	struct kmem_cache *s = kmem_cache_create("TestSlub_next_ptr_free", 64, 0,
> > -				SLAB_POISON, NULL);
> > +				SLAB_POISON|SLAB_NO_GLOBAL_FLAGS, NULL);
> >  	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
> >  	unsigned long tmp;
> >  	unsigned long *ptr_addr;
> > @@ -75,7 +75,7 @@ static void test_next_pointer(struct kunit *test)
> >  static void test_first_word(struct kunit *test)
> >  {
> >  	struct kmem_cache *s = kmem_cache_create("TestSlub_1th_word_free", 64, 0,
> > -				SLAB_POISON, NULL);
> > +				SLAB_POISON|SLAB_NO_GLOBAL_FLAGS, NULL);
> >  	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
> >  
> >  	kmem_cache_free(s, p);
> > @@ -90,7 +90,7 @@ static void test_first_word(struct kunit *test)
> >  static void test_clobber_50th_byte(struct kunit *test)
> >  {
> >  	struct kmem_cache *s = kmem_cache_create("TestSlub_50th_word_free", 64, 0,
> > -				SLAB_POISON, NULL);
> > +				SLAB_POISON|SLAB_NO_GLOBAL_FLAGS, NULL);
> >  	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
> >  
> >  	kmem_cache_free(s, p);
> > @@ -106,7 +106,7 @@ static void test_clobber_50th_byte(struct kunit *test)
> >  static void test_clobber_redzone_free(struct kunit *test)
> >  {
> >  	struct kmem_cache *s = kmem_cache_create("TestSlub_RZ_free", 64, 0,
> > -				SLAB_RED_ZONE, NULL);
> > +				SLAB_RED_ZONE|SLAB_NO_GLOBAL_FLAGS, NULL);
> >  	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
> >  
> >  	kasan_disable_current();
> > diff --git a/mm/slab.h b/mm/slab.h
> > index c7f2abc2b154..69946131208a 100644
> > --- a/mm/slab.h
> > +++ b/mm/slab.h
> > @@ -330,7 +330,7 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
> >  			  SLAB_ACCOUNT)
> >  #elif defined(CONFIG_SLUB)
> >  #define SLAB_CACHE_FLAGS (SLAB_NOLEAKTRACE | SLAB_RECLAIM_ACCOUNT | \
> > -			  SLAB_TEMPORARY | SLAB_ACCOUNT)
> > +			  SLAB_TEMPORARY | SLAB_ACCOUNT | SLAB_NO_GLOBAL_FLAGS)
> >  #else
> >  #define SLAB_CACHE_FLAGS (SLAB_NOLEAKTRACE)
> >  #endif
> > @@ -349,7 +349,8 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
> >  			      SLAB_NOLEAKTRACE | \
> >  			      SLAB_RECLAIM_ACCOUNT | \
> >  			      SLAB_TEMPORARY | \
> > -			      SLAB_ACCOUNT)
> > +			      SLAB_ACCOUNT | \
> > +			      SLAB_NO_GLOBAL_FLAGS)
> >  
> >  bool __kmem_cache_empty(struct kmem_cache *);
> >  int __kmem_cache_shutdown(struct kmem_cache *);
> > diff --git a/mm/slub.c b/mm/slub.c
> > index 71e8663f6037..2a3cffd7b27f 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -1592,6 +1592,9 @@ slab_flags_t kmem_cache_flags(unsigned int object_size,
> >  	if (flags & SLAB_NOLEAKTRACE)
> >  		slub_debug_local &= ~SLAB_STORE_USER;
> >  
> > +	if (flags & SLAB_NO_GLOBAL_FLAGS)
> > +		slub_debug_local = 0;
> > +
> >  	len = strlen(name);
> >  	next_block = slub_debug_string;
> >  	/* Go through all blocks of debug options, see if any matches our slab's name */
> 



* Re: [PATCH v2] mm/slub, kunit: Make slub_kunit unaffected by user specified flags
  2022-04-06  6:00       ` [PATCH v2] mm/slub, kunit: Make slub_kunit unaffected by user specified flags Hyeonggon Yoo
@ 2022-04-06  8:17         ` Vlastimil Babka
  0 siblings, 0 replies; 8+ messages in thread
From: Vlastimil Babka @ 2022-04-06  8:17 UTC (permalink / raw)
  To: Hyeonggon Yoo
  Cc: linux-mm, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, Roman Gushchin, Oliver Glitta

On 4/6/22 08:00, Hyeonggon Yoo wrote:
> slub_kunit does not expect other debugging flags to be set when running
> its tests. When the SLAB_RED_ZONE flag is set globally, the test fails
> because the flag affects the number of errors reported.
> 
> To make slub_kunit unaffected by user-specified debugging flags,
> introduce SLAB_NO_USER_FLAGS to ignore them. With this flag, only the
> flags specified in the code are used and any others are ignored.
> 
> Suggested-by: Vlastimil Babka <vbabka@suse.cz>
> Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

Thanks, applied.

> ---
>  include/linux/slab.h |  7 +++++++
>  lib/slub_kunit.c     | 10 +++++-----
>  mm/slab.h            |  5 +++--
>  mm/slub.c            |  3 +++
>  4 files changed, 18 insertions(+), 7 deletions(-)
> 
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index 373b3ef99f4e..11ceddcae9f4 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -112,6 +112,13 @@
>  #define SLAB_KASAN		0
>  #endif
>  
> +/*
> + * Ignore user specified debugging flags.
> + * Intended for caches created for self-tests so they have only flags
> + * specified in the code and other flags are ignored.
> + */
> +#define SLAB_NO_USER_FLAGS	((slab_flags_t __force)0x10000000U)
> +
>  /* The following flags affect the page allocator grouping pages by mobility */
>  /* Objects are reclaimable */
>  #define SLAB_RECLAIM_ACCOUNT	((slab_flags_t __force)0x00020000U)
> diff --git a/lib/slub_kunit.c b/lib/slub_kunit.c
> index 8662dc6cb509..7a0564d7cb7a 100644
> --- a/lib/slub_kunit.c
> +++ b/lib/slub_kunit.c
> @@ -12,7 +12,7 @@ static int slab_errors;
>  static void test_clobber_zone(struct kunit *test)
>  {
>  	struct kmem_cache *s = kmem_cache_create("TestSlub_RZ_alloc", 64, 0,
> -				SLAB_RED_ZONE, NULL);
> +				SLAB_RED_ZONE|SLAB_NO_USER_FLAGS, NULL);
>  	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
>  
>  	kasan_disable_current();
> @@ -30,7 +30,7 @@ static void test_clobber_zone(struct kunit *test)
>  static void test_next_pointer(struct kunit *test)
>  {
>  	struct kmem_cache *s = kmem_cache_create("TestSlub_next_ptr_free", 64, 0,
> -				SLAB_POISON, NULL);
> +				SLAB_POISON|SLAB_NO_USER_FLAGS, NULL);
>  	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
>  	unsigned long tmp;
>  	unsigned long *ptr_addr;
> @@ -75,7 +75,7 @@ static void test_next_pointer(struct kunit *test)
>  static void test_first_word(struct kunit *test)
>  {
>  	struct kmem_cache *s = kmem_cache_create("TestSlub_1th_word_free", 64, 0,
> -				SLAB_POISON, NULL);
> +				SLAB_POISON|SLAB_NO_USER_FLAGS, NULL);
>  	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
>  
>  	kmem_cache_free(s, p);
> @@ -90,7 +90,7 @@ static void test_first_word(struct kunit *test)
>  static void test_clobber_50th_byte(struct kunit *test)
>  {
>  	struct kmem_cache *s = kmem_cache_create("TestSlub_50th_word_free", 64, 0,
> -				SLAB_POISON, NULL);
> +				SLAB_POISON|SLAB_NO_USER_FLAGS, NULL);
>  	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
>  
>  	kmem_cache_free(s, p);
> @@ -106,7 +106,7 @@ static void test_clobber_50th_byte(struct kunit *test)
>  static void test_clobber_redzone_free(struct kunit *test)
>  {
>  	struct kmem_cache *s = kmem_cache_create("TestSlub_RZ_free", 64, 0,
> -				SLAB_RED_ZONE, NULL);
> +				SLAB_RED_ZONE|SLAB_NO_USER_FLAGS, NULL);
>  	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
>  
>  	kasan_disable_current();
> diff --git a/mm/slab.h b/mm/slab.h
> index fd7ae2024897..f7d018100994 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -331,7 +331,7 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
>  			  SLAB_ACCOUNT)
>  #elif defined(CONFIG_SLUB)
>  #define SLAB_CACHE_FLAGS (SLAB_NOLEAKTRACE | SLAB_RECLAIM_ACCOUNT | \
> -			  SLAB_TEMPORARY | SLAB_ACCOUNT)
> +			  SLAB_TEMPORARY | SLAB_ACCOUNT | SLAB_NO_USER_FLAGS)
>  #else
>  #define SLAB_CACHE_FLAGS (SLAB_NOLEAKTRACE)
>  #endif
> @@ -350,7 +350,8 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
>  			      SLAB_NOLEAKTRACE | \
>  			      SLAB_RECLAIM_ACCOUNT | \
>  			      SLAB_TEMPORARY | \
> -			      SLAB_ACCOUNT)
> +			      SLAB_ACCOUNT | \
> +			      SLAB_NO_USER_FLAGS)
>  
>  bool __kmem_cache_empty(struct kmem_cache *);
>  int __kmem_cache_shutdown(struct kmem_cache *);
> diff --git a/mm/slub.c b/mm/slub.c
> index 74d92aa4a3a2..4c78f5919356 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1584,6 +1584,9 @@ slab_flags_t kmem_cache_flags(unsigned int object_size,
>  	slab_flags_t block_flags;
>  	slab_flags_t slub_debug_local = slub_debug;
>  
> +	if (flags & SLAB_NO_USER_FLAGS)
> +		return flags;
> +
>  	/*
>  	 * If the slab cache is for debugging (e.g. kmemleak) then
>  	 * don't store user (stack trace) information by default,


