linux-mm.kvack.org archive mirror
* [PATCH v5 1/3] kunit: make test->lock irq safe
@ 2021-05-11 15:07 glittao
  2021-05-11 15:07 ` [PATCH v5 2/3] mm/slub, kunit: add a KUnit test for SLUB debugging functionality glittao
                   ` (2 more replies)
  0 siblings, 3 replies; 11+ messages in thread
From: glittao @ 2021-05-11 15:07 UTC (permalink / raw)
  To: brendanhiggins, cl, penberg, rientjes, iamjoonsoo.kim, akpm, vbabka
  Cc: linux-kernel, linux-kselftest, kunit-dev, linux-mm, elver,
	dlatypov, Oliver Glitta

From: Vlastimil Babka <vbabka@suse.cz>

The upcoming SLUB kunit test will be calling kunit_find_named_resource() from
a context with disabled interrupts. That means kunit's test->lock needs to be
IRQ safe to avoid potential deadlocks and lockdep splats.

This patch therefore changes the test->lock usage to spin_lock_irqsave()
and spin_unlock_irqrestore().
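
For context, the call chain that ends up taking test->lock with
interrupts already disabled looks roughly like this (a sketch pieced
together from this patch and the next one in the series, not code to
apply):

	validate_slab_node()                       /* mm/slub.c, patch 2/3 */
	    spin_lock_irqsave(&n->list_lock, flags);
	    ...
	    slab_add_kunit_errors()
	        kunit_find_named_resource()
	            kunit_find_resource()          /* include/kunit/test.h */
	                spin_lock(&test->lock);    /* made IRQ-safe below */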

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Oliver Glitta <glittao@gmail.com>
---
Changes since v4
Rebased whole series on 5.13-rc1

 include/kunit/test.h |  5 +++--
 lib/kunit/test.c     | 18 +++++++++++-------
 2 files changed, 14 insertions(+), 9 deletions(-)

diff --git a/include/kunit/test.h b/include/kunit/test.h
index 49601c4b98b8..524d4789af22 100644
--- a/include/kunit/test.h
+++ b/include/kunit/test.h
@@ -515,8 +515,9 @@ kunit_find_resource(struct kunit *test,
 		    void *match_data)
 {
 	struct kunit_resource *res, *found = NULL;
+	unsigned long flags;

-	spin_lock(&test->lock);
+	spin_lock_irqsave(&test->lock, flags);

 	list_for_each_entry_reverse(res, &test->resources, node) {
 		if (match(test, res, (void *)match_data)) {
@@ -526,7 +527,7 @@ kunit_find_resource(struct kunit *test,
 		}
 	}

-	spin_unlock(&test->lock);
+	spin_unlock_irqrestore(&test->lock, flags);

 	return found;
 }
diff --git a/lib/kunit/test.c b/lib/kunit/test.c
index 2f6cc0123232..45f068864d76 100644
--- a/lib/kunit/test.c
+++ b/lib/kunit/test.c
@@ -475,6 +475,7 @@ int kunit_add_resource(struct kunit *test,
 		       void *data)
 {
 	int ret = 0;
+	unsigned long flags;

 	res->free = free;
 	kref_init(&res->refcount);
@@ -487,10 +488,10 @@ int kunit_add_resource(struct kunit *test,
 		res->data = data;
 	}

-	spin_lock(&test->lock);
+	spin_lock_irqsave(&test->lock, flags);
 	list_add_tail(&res->node, &test->resources);
 	/* refcount for list is established by kref_init() */
-	spin_unlock(&test->lock);
+	spin_unlock_irqrestore(&test->lock, flags);

 	return ret;
 }
@@ -548,9 +549,11 @@ EXPORT_SYMBOL_GPL(kunit_alloc_and_get_resource);

 void kunit_remove_resource(struct kunit *test, struct kunit_resource *res)
 {
-	spin_lock(&test->lock);
+	unsigned long flags;
+
+	spin_lock_irqsave(&test->lock, flags);
 	list_del(&res->node);
-	spin_unlock(&test->lock);
+	spin_unlock_irqrestore(&test->lock, flags);
 	kunit_put_resource(res);
 }
 EXPORT_SYMBOL_GPL(kunit_remove_resource);
@@ -630,6 +633,7 @@ EXPORT_SYMBOL_GPL(kunit_kfree);
 void kunit_cleanup(struct kunit *test)
 {
 	struct kunit_resource *res;
+	unsigned long flags;

 	/*
 	 * test->resources is a stack - each allocation must be freed in the
@@ -641,9 +645,9 @@ void kunit_cleanup(struct kunit *test)
 	 * protect against the current node being deleted, not the next.
 	 */
 	while (true) {
-		spin_lock(&test->lock);
+		spin_lock_irqsave(&test->lock, flags);
 		if (list_empty(&test->resources)) {
-			spin_unlock(&test->lock);
+			spin_unlock_irqrestore(&test->lock, flags);
 			break;
 		}
 		res = list_last_entry(&test->resources,
@@ -654,7 +658,7 @@ void kunit_cleanup(struct kunit *test)
 		 * resource, and this can't happen if the test->lock
 		 * is held.
 		 */
-		spin_unlock(&test->lock);
+		spin_unlock_irqrestore(&test->lock, flags);
 		kunit_remove_resource(test, res);
 	}
 	current->kunit_test = NULL;
--
2.31.1.272.g89b43f80a5



^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH v5 2/3] mm/slub, kunit: add a KUnit test for SLUB debugging functionality
  2021-05-11 15:07 [PATCH v5 1/3] kunit: make test->lock irq safe glittao
@ 2021-05-11 15:07 ` glittao
  2021-05-11 15:16   ` Marco Elver
                     ` (2 more replies)
  2021-05-11 15:07 ` [PATCH v5 3/3] slub: remove resiliency_test() function glittao
  2021-05-12 10:28 ` [PATCH v5 1/3] kunit: make test->lock irq safe Vlastimil Babka
  2 siblings, 3 replies; 11+ messages in thread
From: glittao @ 2021-05-11 15:07 UTC (permalink / raw)
  To: brendanhiggins, cl, penberg, rientjes, iamjoonsoo.kim, akpm, vbabka
  Cc: linux-kernel, linux-kselftest, kunit-dev, linux-mm, elver,
	dlatypov, Oliver Glitta

From: Oliver Glitta <glittao@gmail.com>

SLUB has a resiliency_test() function which is hidden behind #ifdef
SLUB_RESILIENCY_TEST that is not part of Kconfig, so nobody runs it.
KUnit should be a proper replacement for it.

Try changing a byte in the redzone after allocation and changing the
pointer to the next free node, the first byte, the 50th byte and the
redzone byte. Check if validation finds the errors.

There are several differences from the original resiliency test:
Tests create their own caches with a known state instead of corrupting
shared kmalloc caches.

The corruption of the freepointer uses the correct offset; the original
resiliency test got broken by freepointer changes.

The test that changed a random byte is dropped, because it is not
meaningful in a form where we need deterministic results.

Add a new option, CONFIG_SLUB_KUNIT_TEST, in Kconfig. The tests
next_pointer, first_word and clobber_50th_byte do not run with the
KASAN option on, because they deliberately modify non-allocated
objects.

Use a kunit_resource to count errors in the cache and to silence bug
reports. Count an error whenever slab_bug() or slab_fix() is called,
or when the count of pages is wrong.
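
The handshake between the test and the SLUB debug paths, condensed from
the diff below (a sketch for orientation, not extra code to apply):

	/* lib/slub_kunit.c: the test publishes a counter under a name */
	static struct kunit_resource resource;
	static int slab_errors;

	static int test_init(struct kunit *test)
	{
		slab_errors = 0;
		kunit_add_named_resource(test, NULL, NULL, &resource,
					 "slab_errors", &slab_errors);
		return 0;
	}

	/* mm/slub.c: debug paths bump the counter instead of reporting */
	static bool slab_add_kunit_errors(void)
	{
		struct kunit_resource *resource;

		if (likely(!current->kunit_test))
			return false;

		resource = kunit_find_named_resource(current->kunit_test,
						     "slab_errors");
		if (!resource)
			return false;

		(*(int *)resource->data)++;
		kunit_put_resource(resource);
		return true;
	}

	/* each test case then validates the cache and checks the counter */
	validate_slab_cache(s);
	KUNIT_EXPECT_EQ(test, 2, slab_errors);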

Signed-off-by: Oliver Glitta <glittao@gmail.com>
---
Changes since v4

Use two tests with KASAN dependency.
Remove setting current test during init and exit.

Changes since v3

Use kunit_resource to silence bug reports and count errors, as suggested
by Marco Elver.
Make the test depend on !KASAN thanks to a report from the kernel test robot.

Changes since v2

Use the bitwise operation & instead of the logical && as reported by the
kernel test robot and Dan Carpenter (see the illustration below).

Changes since v1

Conversion from kselftest to KUnit test suggested by Marco Elver.
Error silencing.
Error counting improvements.
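
To illustrate the & vs && fix mentioned above (an example for clarity,
not part of the diff):

	/*
	 * SLAB_STORE_USER is a single bit in s->flags, so it has to be
	 * tested with a bitwise AND; the logical form is true whenever
	 * any flag at all is set.
	 */
	if (s->flags & SLAB_STORE_USER)		/* correct */
		...
	if (s->flags && SLAB_STORE_USER)	/* wrong */
		...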

 lib/Kconfig.debug |  12 ++++
 lib/Makefile      |   1 +
 lib/slub_kunit.c  | 155 ++++++++++++++++++++++++++++++++++++++++++++++
 mm/slab.h         |   1 +
 mm/slub.c         |  46 +++++++++++++-
 5 files changed, 212 insertions(+), 3 deletions(-)
 create mode 100644 lib/slub_kunit.c

diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 678c13967580..7723f58a9394 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -2429,6 +2429,18 @@ config BITS_TEST

 	  If unsure, say N.

+config SLUB_KUNIT_TEST
+	tristate "KUnit test for SLUB cache error detection" if !KUNIT_ALL_TESTS
+	depends on SLUB_DEBUG && KUNIT
+	default KUNIT_ALL_TESTS
+	help
+	  This builds SLUB allocator unit test.
+	  Tests SLUB cache debugging functionality.
+	  For more information on KUnit and unit tests in general please refer
+	  to the KUnit documentation in Documentation/dev-tools/kunit/.
+
+	  If unsure, say N.
+
 config TEST_UDELAY
 	tristate "udelay test driver"
 	help
diff --git a/lib/Makefile b/lib/Makefile
index e11cfc18b6c0..386215dcb0a0 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -353,5 +353,6 @@ obj-$(CONFIG_LIST_KUNIT_TEST) += list-test.o
 obj-$(CONFIG_LINEAR_RANGES_TEST) += test_linear_ranges.o
 obj-$(CONFIG_BITS_TEST) += test_bits.o
 obj-$(CONFIG_CMDLINE_KUNIT_TEST) += cmdline_kunit.o
+obj-$(CONFIG_SLUB_KUNIT_TEST) += slub_kunit.o

 obj-$(CONFIG_GENERIC_LIB_DEVMEM_IS_ALLOWED) += devmem_is_allowed.o
diff --git a/lib/slub_kunit.c b/lib/slub_kunit.c
new file mode 100644
index 000000000000..f28965f64ef6
--- /dev/null
+++ b/lib/slub_kunit.c
@@ -0,0 +1,155 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <kunit/test.h>
+#include <linux/mm.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include "../mm/slab.h"
+
+static struct kunit_resource resource;
+static int slab_errors;
+
+static void test_clobber_zone(struct kunit *test)
+{
+	struct kmem_cache *s = kmem_cache_create("TestSlub_RZ_alloc", 64, 0,
+				SLAB_RED_ZONE, NULL);
+	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
+
+	kasan_disable_current();
+	p[64] = 0x12;
+
+	validate_slab_cache(s);
+	KUNIT_EXPECT_EQ(test, 2, slab_errors);
+
+	kasan_enable_current();
+	kmem_cache_free(s, p);
+	kmem_cache_destroy(s);
+}
+
+#ifndef CONFIG_KASAN
+static void test_next_pointer(struct kunit *test)
+{
+	struct kmem_cache *s = kmem_cache_create("TestSlub_next_ptr_free", 64, 0,
+				SLAB_POISON, NULL);
+	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
+	unsigned long tmp;
+	unsigned long *ptr_addr;
+
+	kmem_cache_free(s, p);
+
+	ptr_addr = (unsigned long *)(p + s->offset);
+	tmp = *ptr_addr;
+	p[s->offset] = 0x12;
+
+	/*
+	 * Expecting three errors.
+	 * One for the corrupted freechain and the other one for the wrong
+	 * count of objects in use. The third error is fixing broken cache.
+	 */
+	validate_slab_cache(s);
+	KUNIT_EXPECT_EQ(test, 3, slab_errors);
+
+	/*
+	 * Try to repair corrupted freepointer.
+	 * Still expecting two errors. The first for the wrong count
+	 * of objects in use.
+	 * The second error is for fixing broken cache.
+	 */
+	*ptr_addr = tmp;
+	slab_errors = 0;
+
+	validate_slab_cache(s);
+	KUNIT_EXPECT_EQ(test, 2, slab_errors);
+
+	/*
+	 * Previous validation repaired the count of objects in use.
+	 * Now expecting no error.
+	 */
+	slab_errors = 0;
+	validate_slab_cache(s);
+	KUNIT_EXPECT_EQ(test, 0, slab_errors);
+
+	kmem_cache_destroy(s);
+}
+
+static void test_first_word(struct kunit *test)
+{
+	struct kmem_cache *s = kmem_cache_create("TestSlub_1th_word_free", 64, 0,
+				SLAB_POISON, NULL);
+	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
+
+	kmem_cache_free(s, p);
+	*p = 0x78;
+
+	validate_slab_cache(s);
+	KUNIT_EXPECT_EQ(test, 2, slab_errors);
+
+	kmem_cache_destroy(s);
+}
+
+static void test_clobber_50th_byte(struct kunit *test)
+{
+	struct kmem_cache *s = kmem_cache_create("TestSlub_50th_word_free", 64, 0,
+				SLAB_POISON, NULL);
+	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
+
+	kmem_cache_free(s, p);
+	p[50] = 0x9a;
+
+	validate_slab_cache(s);
+	KUNIT_EXPECT_EQ(test, 2, slab_errors);
+
+	kmem_cache_destroy(s);
+}
+#endif
+
+static void test_clobber_redzone_free(struct kunit *test)
+{
+	struct kmem_cache *s = kmem_cache_create("TestSlub_RZ_free", 64, 0,
+				SLAB_RED_ZONE, NULL);
+	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
+
+	kasan_disable_current();
+	kmem_cache_free(s, p);
+	p[64] = 0xab;
+
+	validate_slab_cache(s);
+	KUNIT_EXPECT_EQ(test, 2, slab_errors);
+
+	kasan_enable_current();
+	kmem_cache_destroy(s);
+}
+
+static int test_init(struct kunit *test)
+{
+	slab_errors = 0;
+
+	kunit_add_named_resource(test, NULL, NULL, &resource,
+					"slab_errors", &slab_errors);
+	return 0;
+}
+
+static void test_exit(struct kunit *test) {}
+
+static struct kunit_case test_cases[] = {
+	KUNIT_CASE(test_clobber_zone),
+
+#ifndef CONFIG_KASAN
+	KUNIT_CASE(test_next_pointer),
+	KUNIT_CASE(test_first_word),
+	KUNIT_CASE(test_clobber_50th_byte),
+#endif
+
+	KUNIT_CASE(test_clobber_redzone_free),
+	{}
+};
+
+static struct kunit_suite test_suite = {
+	.name = "slub_test",
+	.init = test_init,
+	.exit = test_exit,
+	.test_cases = test_cases,
+};
+kunit_test_suite(test_suite);
+
+MODULE_LICENSE("GPL");
diff --git a/mm/slab.h b/mm/slab.h
index 18c1927cd196..9b690fa44cae 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -215,6 +215,7 @@ DECLARE_STATIC_KEY_TRUE(slub_debug_enabled);
 DECLARE_STATIC_KEY_FALSE(slub_debug_enabled);
 #endif
 extern void print_tracking(struct kmem_cache *s, void *object);
+long validate_slab_cache(struct kmem_cache *s);
 #else
 static inline void print_tracking(struct kmem_cache *s, void *object)
 {
diff --git a/mm/slub.c b/mm/slub.c
index feda53ae62ba..985fd6ef033c 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -35,6 +35,7 @@
 #include <linux/prefetch.h>
 #include <linux/memcontrol.h>
 #include <linux/random.h>
+#include <kunit/test.h>

 #include <trace/events/kmem.h>

@@ -447,6 +448,26 @@ static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
 static unsigned long object_map[BITS_TO_LONGS(MAX_OBJS_PER_PAGE)];
 static DEFINE_SPINLOCK(object_map_lock);

+#if IS_ENABLED(CONFIG_KUNIT)
+static bool slab_add_kunit_errors(void)
+{
+	struct kunit_resource *resource;
+
+	if (likely(!current->kunit_test))
+		return false;
+
+	resource = kunit_find_named_resource(current->kunit_test, "slab_errors");
+	if (!resource)
+		return false;
+
+	(*(int *)resource->data)++;
+	kunit_put_resource(resource);
+	return true;
+}
+#else
+static inline bool slab_add_kunit_errors(void) { return false; }
+#endif
+
 /*
  * Determine a map of object in use on a page.
  *
@@ -677,6 +698,9 @@ static void slab_fix(struct kmem_cache *s, char *fmt, ...)
 	struct va_format vaf;
 	va_list args;

+	if (slab_add_kunit_errors())
+		return;
+
 	va_start(args, fmt);
 	vaf.fmt = fmt;
 	vaf.va = &args;
@@ -740,6 +764,9 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
 void object_err(struct kmem_cache *s, struct page *page,
 			u8 *object, char *reason)
 {
+	if (slab_add_kunit_errors())
+		return;
+
 	slab_bug(s, "%s", reason);
 	print_trailer(s, page, object);
 }
@@ -750,6 +777,9 @@ static __printf(3, 4) void slab_err(struct kmem_cache *s, struct page *page,
 	va_list args;
 	char buf[100];

+	if (slab_add_kunit_errors())
+		return;
+
 	va_start(args, fmt);
 	vsnprintf(buf, sizeof(buf), fmt, args);
 	va_end(args);
@@ -799,12 +829,16 @@ static int check_bytes_and_report(struct kmem_cache *s, struct page *page,
 	while (end > fault && end[-1] == value)
 		end--;

+	if (slab_add_kunit_errors())
+		goto skip_bug_print;
+
 	slab_bug(s, "%s overwritten", what);
 	pr_err("0x%p-0x%p @offset=%tu. First byte 0x%x instead of 0x%x\n",
 					fault, end - 1, fault - addr,
 					fault[0], value);
 	print_trailer(s, page, object);

+skip_bug_print:
 	restore_bytes(s, what, value, fault, end);
 	return 0;
 }
@@ -4662,9 +4696,11 @@ static int validate_slab_node(struct kmem_cache *s,
 		validate_slab(s, page);
 		count++;
 	}
-	if (count != n->nr_partial)
+	if (count != n->nr_partial) {
 		pr_err("SLUB %s: %ld partial slabs counted but counter=%ld\n",
 		       s->name, count, n->nr_partial);
+		slab_add_kunit_errors();
+	}

 	if (!(s->flags & SLAB_STORE_USER))
 		goto out;
@@ -4673,16 +4709,18 @@ static int validate_slab_node(struct kmem_cache *s,
 		validate_slab(s, page);
 		count++;
 	}
-	if (count != atomic_long_read(&n->nr_slabs))
+	if (count != atomic_long_read(&n->nr_slabs)) {
 		pr_err("SLUB: %s %ld slabs counted but counter=%ld\n",
 		       s->name, count, atomic_long_read(&n->nr_slabs));
+		slab_add_kunit_errors();
+	}

 out:
 	spin_unlock_irqrestore(&n->list_lock, flags);
 	return count;
 }

-static long validate_slab_cache(struct kmem_cache *s)
+long validate_slab_cache(struct kmem_cache *s)
 {
 	int node;
 	unsigned long count = 0;
@@ -4694,6 +4732,8 @@ static long validate_slab_cache(struct kmem_cache *s)

 	return count;
 }
+EXPORT_SYMBOL(validate_slab_cache);
+
 /*
  * Generate lists of code addresses where slabcache objects are allocated
  * and freed.
--
2.31.1.272.g89b43f80a5



^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH v5 3/3] slub: remove resiliency_test() function
  2021-05-11 15:07 [PATCH v5 1/3] kunit: make test->lock irq safe glittao
  2021-05-11 15:07 ` [PATCH v5 2/3] mm/slub, kunit: add a KUnit test for SLUB debugging functionality glittao
@ 2021-05-11 15:07 ` glittao
  2021-05-12 10:28 ` [PATCH v5 1/3] kunit: make test->lock irq safe Vlastimil Babka
  2 siblings, 0 replies; 11+ messages in thread
From: glittao @ 2021-05-11 15:07 UTC (permalink / raw)
  To: brendanhiggins, cl, penberg, rientjes, iamjoonsoo.kim, akpm, vbabka
  Cc: linux-kernel, linux-kselftest, kunit-dev, linux-mm, elver,
	dlatypov, Oliver Glitta

From: Oliver Glitta <glittao@gmail.com>

The resiliency_test() function is hidden behind #ifdef
SLUB_RESILIENCY_TEST, which is not part of Kconfig, so nobody
runs it.

This function is replaced with the KUnit test for SLUB added
by the previous patch "mm/slub, kunit: add a KUnit test for SLUB
debugging functionality".

Signed-off-by: Oliver Glitta <glittao@gmail.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: David Rientjes <rientjes@google.com>
---
 mm/slub.c | 64 -------------------------------------------------------
 1 file changed, 64 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 985fd6ef033c..88e2c1847698 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -154,9 +154,6 @@ static inline bool kmem_cache_has_cpu_partial(struct kmem_cache *s)
  * - Variable sizing of the per node arrays
  */
 
-/* Enable to test recovery from slab corruption on boot */
-#undef SLUB_RESILIENCY_TEST
-
 /* Enable to log cmpxchg failures */
 #undef SLUB_DEBUG_CMPXCHG
 
@@ -4951,66 +4948,6 @@ static int list_locations(struct kmem_cache *s, char *buf,
 }
 #endif	/* CONFIG_SLUB_DEBUG */
 
-#ifdef SLUB_RESILIENCY_TEST
-static void __init resiliency_test(void)
-{
-	u8 *p;
-	int type = KMALLOC_NORMAL;
-
-	BUILD_BUG_ON(KMALLOC_MIN_SIZE > 16 || KMALLOC_SHIFT_HIGH < 10);
-
-	pr_err("SLUB resiliency testing\n");
-	pr_err("-----------------------\n");
-	pr_err("A. Corruption after allocation\n");
-
-	p = kzalloc(16, GFP_KERNEL);
-	p[16] = 0x12;
-	pr_err("\n1. kmalloc-16: Clobber Redzone/next pointer 0x12->0x%p\n\n",
-	       p + 16);
-
-	validate_slab_cache(kmalloc_caches[type][4]);
-
-	/* Hmmm... The next two are dangerous */
-	p = kzalloc(32, GFP_KERNEL);
-	p[32 + sizeof(void *)] = 0x34;
-	pr_err("\n2. kmalloc-32: Clobber next pointer/next slab 0x34 -> -0x%p\n",
-	       p);
-	pr_err("If allocated object is overwritten then not detectable\n\n");
-
-	validate_slab_cache(kmalloc_caches[type][5]);
-	p = kzalloc(64, GFP_KERNEL);
-	p += 64 + (get_cycles() & 0xff) * sizeof(void *);
-	*p = 0x56;
-	pr_err("\n3. kmalloc-64: corrupting random byte 0x56->0x%p\n",
-	       p);
-	pr_err("If allocated object is overwritten then not detectable\n\n");
-	validate_slab_cache(kmalloc_caches[type][6]);
-
-	pr_err("\nB. Corruption after free\n");
-	p = kzalloc(128, GFP_KERNEL);
-	kfree(p);
-	*p = 0x78;
-	pr_err("1. kmalloc-128: Clobber first word 0x78->0x%p\n\n", p);
-	validate_slab_cache(kmalloc_caches[type][7]);
-
-	p = kzalloc(256, GFP_KERNEL);
-	kfree(p);
-	p[50] = 0x9a;
-	pr_err("\n2. kmalloc-256: Clobber 50th byte 0x9a->0x%p\n\n", p);
-	validate_slab_cache(kmalloc_caches[type][8]);
-
-	p = kzalloc(512, GFP_KERNEL);
-	kfree(p);
-	p[512] = 0xab;
-	pr_err("\n3. kmalloc-512: Clobber redzone 0xab->0x%p\n\n", p);
-	validate_slab_cache(kmalloc_caches[type][9]);
-}
-#else
-#ifdef CONFIG_SYSFS
-static void resiliency_test(void) {};
-#endif
-#endif	/* SLUB_RESILIENCY_TEST */
-
 #ifdef CONFIG_SYSFS
 enum slab_stat_type {
 	SL_ALL,			/* All slabs */
@@ -5859,7 +5796,6 @@ static int __init slab_sysfs_init(void)
 	}
 
 	mutex_unlock(&slab_mutex);
-	resiliency_test();
 	return 0;
 }
 
-- 
2.31.1.272.g89b43f80a5



^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [PATCH v5 2/3] mm/slub, kunit: add a KUnit test for SLUB debugging functionality
  2021-05-11 15:07 ` [PATCH v5 2/3] mm/slub, kunit: add a KUnit test for SLUB debugging functionality glittao
@ 2021-05-11 15:16   ` Marco Elver
  2021-05-12 10:30     ` Vlastimil Babka
  2021-05-12 12:24     ` Oliver Glitta
  2021-05-12 14:06   ` [PATCH] mm/slub, kunit: add a KUnit test for SLUB debugging functionality-fix glittao
  2021-05-13  4:44   ` [PATCH v5 2/3] mm/slub, kunit: add a KUnit test for SLUB debugging functionality Andrew Morton
  2 siblings, 2 replies; 11+ messages in thread
From: Marco Elver @ 2021-05-11 15:16 UTC (permalink / raw)
  To: Oliver Glitta
  Cc: Brendan Higgins, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, Vlastimil Babka, LKML,
	open list:KERNEL SELFTEST FRAMEWORK, KUnit Development,
	Linux Memory Management List, Daniel Latypov

On Tue, 11 May 2021 at 17:07, <glittao@gmail.com> wrote:
> From: Oliver Glitta <glittao@gmail.com>
>
> SLUB has resiliency_test() function which is hidden behind #ifdef
> SLUB_RESILIENCY_TEST that is not part of Kconfig, so nobody
> runs it. KUnit should be a proper replacement for it.
>
> Try changing byte in redzone after allocation and changing
> pointer to next free node, first byte, 50th byte and redzone
> byte. Check if validation finds errors.
>
> There are several differences from the original resiliency test:
> Tests create own caches with known state instead of corrupting
> shared kmalloc caches.
>
> The corruption of freepointer uses correct offset, the original
> resiliency test got broken with freepointer changes.
>
> Scratch changing random byte test, because it does not have
> meaning in this form where we need deterministic results.
>
> Add new option CONFIG_SLUB_KUNIT_TEST in Kconfig.
> Tests next_pointer, first_word and clobber_50th_byte do not run
> with KASAN option on. Because the test deliberately modifies non-allocated
> objects.
>
> Use kunit_resource to count errors in cache and silence bug reports.
> Count error whenever slab_bug() or slab_fix() is called or when
> the count of pages is wrong.
>
> Signed-off-by: Oliver Glitta <glittao@gmail.com>

I think I had already reviewed v4, and the changes here are fine:

Reviewed-by: Marco Elver <elver@google.com>

Others who had reviewed/acked v4, probably need to re-ack/review.
Note, I think if you addressed the comments and didn't change much
else, you can typically carry the acks/reviews, unless the other
person changed their mind explicitly.

> ---
> Changes since v4
> Use two tests with KASAN dependency.
> Remove setting current test during init and exit.
>
> Changes since v3
>
> Use kunit_resource to silence bug reports and count errors suggested by
> Marco Elver.
> Make the test depends on !KASAN thanks to report from the kernel test robot.
>
> Changes since v2
>
> Use bit operation & instead of logical && as reported by kernel test
> robot and Dan Carpenter
>
> Changes since v1
>
> Conversion from kselftest to KUnit test suggested by Marco Elver.
> Error silencing.
> Error counting improvements.
>
>  lib/Kconfig.debug |  12 ++++
>  lib/Makefile      |   1 +
>  lib/slub_kunit.c  | 155 ++++++++++++++++++++++++++++++++++++++++++++++
>  mm/slab.h         |   1 +
>  mm/slub.c         |  46 +++++++++++++-
>  5 files changed, 212 insertions(+), 3 deletions(-)
>  create mode 100644 lib/slub_kunit.c
>
> diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
> index 678c13967580..7723f58a9394 100644
> --- a/lib/Kconfig.debug
> +++ b/lib/Kconfig.debug
> @@ -2429,6 +2429,18 @@ config BITS_TEST
>
>           If unsure, say N.
>
> +config SLUB_KUNIT_TEST
> +       tristate "KUnit test for SLUB cache error detection" if !KUNIT_ALL_TESTS
> +       depends on SLUB_DEBUG && KUNIT
> +       default KUNIT_ALL_TESTS
> +       help
> +         This builds SLUB allocator unit test.
> +         Tests SLUB cache debugging functionality.
> +         For more information on KUnit and unit tests in general please refer
> +         to the KUnit documentation in Documentation/dev-tools/kunit/.
> +
> +         If unsure, say N.
> +
>  config TEST_UDELAY
>         tristate "udelay test driver"
>         help
> diff --git a/lib/Makefile b/lib/Makefile
> index e11cfc18b6c0..386215dcb0a0 100644
> --- a/lib/Makefile
> +++ b/lib/Makefile
> @@ -353,5 +353,6 @@ obj-$(CONFIG_LIST_KUNIT_TEST) += list-test.o
>  obj-$(CONFIG_LINEAR_RANGES_TEST) += test_linear_ranges.o
>  obj-$(CONFIG_BITS_TEST) += test_bits.o
>  obj-$(CONFIG_CMDLINE_KUNIT_TEST) += cmdline_kunit.o
> +obj-$(CONFIG_SLUB_KUNIT_TEST) += slub_kunit.o
>
>  obj-$(CONFIG_GENERIC_LIB_DEVMEM_IS_ALLOWED) += devmem_is_allowed.o
> diff --git a/lib/slub_kunit.c b/lib/slub_kunit.c
> new file mode 100644
> index 000000000000..f28965f64ef6
> --- /dev/null
> +++ b/lib/slub_kunit.c
> @@ -0,0 +1,155 @@
> +// SPDX-License-Identifier: GPL-2.0
> +#include <kunit/test.h>
> +#include <linux/mm.h>
> +#include <linux/slab.h>
> +#include <linux/module.h>
> +#include <linux/kernel.h>
> +#include "../mm/slab.h"
> +
> +static struct kunit_resource resource;
> +static int slab_errors;
> +
> +static void test_clobber_zone(struct kunit *test)
> +{
> +       struct kmem_cache *s = kmem_cache_create("TestSlub_RZ_alloc", 64, 0,
> +                               SLAB_RED_ZONE, NULL);
> +       u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
> +
> +       kasan_disable_current();
> +       p[64] = 0x12;
> +
> +       validate_slab_cache(s);
> +       KUNIT_EXPECT_EQ(test, 2, slab_errors);
> +
> +       kasan_enable_current();
> +       kmem_cache_free(s, p);
> +       kmem_cache_destroy(s);
> +}
> +
> +#ifndef CONFIG_KASAN
> +static void test_next_pointer(struct kunit *test)
> +{
> +       struct kmem_cache *s = kmem_cache_create("TestSlub_next_ptr_free", 64, 0,
> +                               SLAB_POISON, NULL);
> +       u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
> +       unsigned long tmp;
> +       unsigned long *ptr_addr;
> +
> +       kmem_cache_free(s, p);
> +
> +       ptr_addr = (unsigned long *)(p + s->offset);
> +       tmp = *ptr_addr;
> +       p[s->offset] = 0x12;
> +
> +       /*
> +        * Expecting three errors.
> +        * One for the corrupted freechain and the other one for the wrong
> +        * count of objects in use. The third error is fixing broken cache.
> +        */
> +       validate_slab_cache(s);
> +       KUNIT_EXPECT_EQ(test, 3, slab_errors);
> +
> +       /*
> +        * Try to repair corrupted freepointer.
> +        * Still expecting two errors. The first for the wrong count
> +        * of objects in use.
> +        * The second error is for fixing broken cache.
> +        */
> +       *ptr_addr = tmp;
> +       slab_errors = 0;
> +
> +       validate_slab_cache(s);
> +       KUNIT_EXPECT_EQ(test, 2, slab_errors);
> +
> +       /*
> +        * Previous validation repaired the count of objects in use.
> +        * Now expecting no error.
> +        */
> +       slab_errors = 0;
> +       validate_slab_cache(s);
> +       KUNIT_EXPECT_EQ(test, 0, slab_errors);
> +
> +       kmem_cache_destroy(s);
> +}
> +
> +static void test_first_word(struct kunit *test)
> +{
> +       struct kmem_cache *s = kmem_cache_create("TestSlub_1th_word_free", 64, 0,
> +                               SLAB_POISON, NULL);
> +       u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
> +
> +       kmem_cache_free(s, p);
> +       *p = 0x78;
> +
> +       validate_slab_cache(s);
> +       KUNIT_EXPECT_EQ(test, 2, slab_errors);
> +
> +       kmem_cache_destroy(s);
> +}
> +
> +static void test_clobber_50th_byte(struct kunit *test)
> +{
> +       struct kmem_cache *s = kmem_cache_create("TestSlub_50th_word_free", 64, 0,
> +                               SLAB_POISON, NULL);
> +       u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
> +
> +       kmem_cache_free(s, p);
> +       p[50] = 0x9a;
> +
> +       validate_slab_cache(s);
> +       KUNIT_EXPECT_EQ(test, 2, slab_errors);
> +
> +       kmem_cache_destroy(s);
> +}
> +#endif
> +
> +static void test_clobber_redzone_free(struct kunit *test)
> +{
> +       struct kmem_cache *s = kmem_cache_create("TestSlub_RZ_free", 64, 0,
> +                               SLAB_RED_ZONE, NULL);
> +       u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
> +
> +       kasan_disable_current();
> +       kmem_cache_free(s, p);
> +       p[64] = 0xab;
> +
> +       validate_slab_cache(s);
> +       KUNIT_EXPECT_EQ(test, 2, slab_errors);
> +
> +       kasan_enable_current();
> +       kmem_cache_destroy(s);
> +}
> +
> +static int test_init(struct kunit *test)
> +{
> +       slab_errors = 0;
> +
> +       kunit_add_named_resource(test, NULL, NULL, &resource,
> +                                       "slab_errors", &slab_errors);
> +       return 0;
> +}
> +
> +static void test_exit(struct kunit *test) {}

Does removing test_exit() and not setting it below work?

> +static struct kunit_case test_cases[] = {
> +       KUNIT_CASE(test_clobber_zone),
> +
> +#ifndef CONFIG_KASAN
> +       KUNIT_CASE(test_next_pointer),
> +       KUNIT_CASE(test_first_word),
> +       KUNIT_CASE(test_clobber_50th_byte),
> +#endif
> +
> +       KUNIT_CASE(test_clobber_redzone_free),
> +       {}

This is better, and tells us exactly which tests were causing
problems with KASAN.


> +};
> +
> +static struct kunit_suite test_suite = {
> +       .name = "slub_test",
> +       .init = test_init,
> +       .exit = test_exit,
> +       .test_cases = test_cases,
> +};
> +kunit_test_suite(test_suite);
> +
> +MODULE_LICENSE("GPL");
> diff --git a/mm/slab.h b/mm/slab.h
> index 18c1927cd196..9b690fa44cae 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -215,6 +215,7 @@ DECLARE_STATIC_KEY_TRUE(slub_debug_enabled);
>  DECLARE_STATIC_KEY_FALSE(slub_debug_enabled);
>  #endif
>  extern void print_tracking(struct kmem_cache *s, void *object);
> +long validate_slab_cache(struct kmem_cache *s);
>  #else
>  static inline void print_tracking(struct kmem_cache *s, void *object)
>  {
> diff --git a/mm/slub.c b/mm/slub.c
> index feda53ae62ba..985fd6ef033c 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -35,6 +35,7 @@
>  #include <linux/prefetch.h>
>  #include <linux/memcontrol.h>
>  #include <linux/random.h>
> +#include <kunit/test.h>
>
>  #include <trace/events/kmem.h>
>
> @@ -447,6 +448,26 @@ static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
>  static unsigned long object_map[BITS_TO_LONGS(MAX_OBJS_PER_PAGE)];
>  static DEFINE_SPINLOCK(object_map_lock);
>
> +#if IS_ENABLED(CONFIG_KUNIT)
> +static bool slab_add_kunit_errors(void)
> +{
> +       struct kunit_resource *resource;
> +
> +       if (likely(!current->kunit_test))
> +               return false;
> +
> +       resource = kunit_find_named_resource(current->kunit_test, "slab_errors");
> +       if (!resource)
> +               return false;
> +
> +       (*(int *)resource->data)++;
> +       kunit_put_resource(resource);
> +       return true;
> +}
> +#else
> +static inline bool slab_add_kunit_errors(void) { return false; }
> +#endif
> +
>  /*
>   * Determine a map of object in use on a page.
>   *
> @@ -677,6 +698,9 @@ static void slab_fix(struct kmem_cache *s, char *fmt, ...)
>         struct va_format vaf;
>         va_list args;
>
> +       if (slab_add_kunit_errors())
> +               return;
> +
>         va_start(args, fmt);
>         vaf.fmt = fmt;
>         vaf.va = &args;
> @@ -740,6 +764,9 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
>  void object_err(struct kmem_cache *s, struct page *page,
>                         u8 *object, char *reason)
>  {
> +       if (slab_add_kunit_errors())
> +               return;
> +
>         slab_bug(s, "%s", reason);
>         print_trailer(s, page, object);
>  }
> @@ -750,6 +777,9 @@ static __printf(3, 4) void slab_err(struct kmem_cache *s, struct page *page,
>         va_list args;
>         char buf[100];
>
> +       if (slab_add_kunit_errors())
> +               return;
> +
>         va_start(args, fmt);
>         vsnprintf(buf, sizeof(buf), fmt, args);
>         va_end(args);
> @@ -799,12 +829,16 @@ static int check_bytes_and_report(struct kmem_cache *s, struct page *page,
>         while (end > fault && end[-1] == value)
>                 end--;
>
> +       if (slab_add_kunit_errors())
> +               goto skip_bug_print;
> +
>         slab_bug(s, "%s overwritten", what);
>         pr_err("0x%p-0x%p @offset=%tu. First byte 0x%x instead of 0x%x\n",
>                                         fault, end - 1, fault - addr,
>                                         fault[0], value);
>         print_trailer(s, page, object);
>
> +skip_bug_print:
>         restore_bytes(s, what, value, fault, end);
>         return 0;
>  }
> @@ -4662,9 +4696,11 @@ static int validate_slab_node(struct kmem_cache *s,
>                 validate_slab(s, page);
>                 count++;
>         }
> -       if (count != n->nr_partial)
> +       if (count != n->nr_partial) {
>                 pr_err("SLUB %s: %ld partial slabs counted but counter=%ld\n",
>                        s->name, count, n->nr_partial);
> +               slab_add_kunit_errors();
> +       }
>
>         if (!(s->flags & SLAB_STORE_USER))
>                 goto out;
> @@ -4673,16 +4709,18 @@ static int validate_slab_node(struct kmem_cache *s,
>                 validate_slab(s, page);
>                 count++;
>         }
> -       if (count != atomic_long_read(&n->nr_slabs))
> +       if (count != atomic_long_read(&n->nr_slabs)) {
>                 pr_err("SLUB: %s %ld slabs counted but counter=%ld\n",
>                        s->name, count, atomic_long_read(&n->nr_slabs));
> +               slab_add_kunit_errors();
> +       }
>
>  out:
>         spin_unlock_irqrestore(&n->list_lock, flags);
>         return count;
>  }
>
> -static long validate_slab_cache(struct kmem_cache *s)
> +long validate_slab_cache(struct kmem_cache *s)
>  {
>         int node;
>         unsigned long count = 0;
> @@ -4694,6 +4732,8 @@ static long validate_slab_cache(struct kmem_cache *s)
>
>         return count;
>  }
> +EXPORT_SYMBOL(validate_slab_cache);
> +
>  /*
>   * Generate lists of code addresses where slabcache objects are allocated
>   * and freed.
> --
> 2.31.1.272.g89b43f80a5
>


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v5 1/3] kunit: make test->lock irq safe
  2021-05-11 15:07 [PATCH v5 1/3] kunit: make test->lock irq safe glittao
  2021-05-11 15:07 ` [PATCH v5 2/3] mm/slub, kunit: add a KUnit test for SLUB debugging functionality glittao
  2021-05-11 15:07 ` [PATCH v5 3/3] slub: remove resiliency_test() function glittao
@ 2021-05-12 10:28 ` Vlastimil Babka
  2 siblings, 0 replies; 11+ messages in thread
From: Vlastimil Babka @ 2021-05-12 10:28 UTC (permalink / raw)
  To: glittao, brendanhiggins, cl, penberg, rientjes, iamjoonsoo.kim, akpm
  Cc: linux-kernel, linux-kselftest, kunit-dev, linux-mm, elver, dlatypov

On 5/11/21 5:07 PM, glittao@gmail.com wrote:
> From: Vlastimil Babka <vbabka@suse.cz>
> 
> The upcoming SLUB kunit test will be calling kunit_find_named_resource() from
> a context with disabled interrupts. That means kunit's test->lock needs to be
> IRQ safe to avoid potential deadlocks and lockdep splats.
> 
> This patch therefore changes the test->lock usage to spin_lock_irqsave()
> and spin_unlock_irqrestore().
> 
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
> Signed-off-by: Oliver Glitta <glittao@gmail.com>

Note v4 had

Reviewed-by: Brendan Higgins <brendanhiggins@google.com>

and it's unchanged AFAIK.


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v5 2/3] mm/slub, kunit: add a KUnit test for SLUB debugging functionality
  2021-05-11 15:16   ` Marco Elver
@ 2021-05-12 10:30     ` Vlastimil Babka
  2021-05-12 12:24     ` Oliver Glitta
  1 sibling, 0 replies; 11+ messages in thread
From: Vlastimil Babka @ 2021-05-12 10:30 UTC (permalink / raw)
  To: Marco Elver, Oliver Glitta
  Cc: Brendan Higgins, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, LKML,
	open list:KERNEL SELFTEST FRAMEWORK, KUnit Development,
	Linux Memory Management List, Daniel Latypov

On 5/11/21 5:16 PM, Marco Elver wrote:
> On Tue, 11 May 2021 at 17:07, <glittao@gmail.com> wrote:
>> From: Oliver Glitta <glittao@gmail.com>
>>
>> SLUB has resiliency_test() function which is hidden behind #ifdef
>> SLUB_RESILIENCY_TEST that is not part of Kconfig, so nobody
>> runs it. KUnit should be a proper replacement for it.
>>
>> Try changing byte in redzone after allocation and changing
>> pointer to next free node, first byte, 50th byte and redzone
>> byte. Check if validation finds errors.
>>
>> There are several differences from the original resiliency test:
>> Tests create own caches with known state instead of corrupting
>> shared kmalloc caches.
>>
>> The corruption of freepointer uses correct offset, the original
>> resiliency test got broken with freepointer changes.
>>
>> Scratch changing random byte test, because it does not have
>> meaning in this form where we need deterministic results.
>>
>> Add new option CONFIG_SLUB_KUNIT_TEST in Kconfig.
>> Tests next_pointer, first_word and clobber_50th_byte do not run
>> with KASAN option on. Because the test deliberately modifies non-allocated
>> objects.
>>
>> Use kunit_resource to count errors in cache and silence bug reports.
>> Count error whenever slab_bug() or slab_fix() is called or when
>> the count of pages is wrong.
>>
>> Signed-off-by: Oliver Glitta <glittao@gmail.com>

Reviewed-by: Vlastimil Babka <vbabka@suse.cz>

> I think I had already reviewed v4, and the changes here are fine:
> 
> Reviewed-by: Marco Elver <elver@google.com>
> 
> Others who had reviewed/acked v4, probably need to re-ack/review.
> Note, I think if you addressed the comments and didn't change much
> else, you can typically carry the acks/reviews, unless the other
> person changed their mind explicitly.

FTR, besides me and Marco, v4 had also:

Acked-by: Daniel Latypov <dlatypov@google.com>


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v5 2/3] mm/slub, kunit: add a KUnit test for SLUB debugging functionality
  2021-05-11 15:16   ` Marco Elver
  2021-05-12 10:30     ` Vlastimil Babka
@ 2021-05-12 12:24     ` Oliver Glitta
  1 sibling, 0 replies; 11+ messages in thread
From: Oliver Glitta @ 2021-05-12 12:24 UTC (permalink / raw)
  To: Marco Elver
  Cc: Andrew Morton, Brendan Higgins, Christoph Lameter,
	Daniel Latypov, David Rientjes, Joonsoo Kim, KUnit Development,
	LKML, Linux Memory Management List, Pekka Enberg,
	Vlastimil Babka, open list:KERNEL SELFTEST FRAMEWORK


On Tue, 11 May 2021 at 17:16, Marco Elver <elver@google.com> wrote:

>
> On Tue, 11 May 2021 at 17:07, <glittao@gmail.com> wrote:
> > From: Oliver Glitta <glittao@gmail.com>
> >
> > SLUB has resiliency_test() function which is hidden behind #ifdef
> > SLUB_RESILIENCY_TEST that is not part of Kconfig, so nobody
> > runs it. KUnit should be a proper replacement for it.
> >
> > Try changing byte in redzone after allocation and changing
> > pointer to next free node, first byte, 50th byte and redzone
> > byte. Check if validation finds errors.
> >
> > There are several differences from the original resiliency test:
> > Tests create own caches with known state instead of corrupting
> > shared kmalloc caches.
> >
> > The corruption of freepointer uses correct offset, the original
> > resiliency test got broken with freepointer changes.
> >
> > Scratch changing random byte test, because it does not have
> > meaning in this form where we need deterministic results.
> >
> > Add new option CONFIG_SLUB_KUNIT_TEST in Kconfig.
> > Tests next_pointer, first_word and clobber_50th_byte do not run
> > with KASAN option on. Because the test deliberately modifies
non-allocated
> > objects.
> >
> > Use kunit_resource to count errors in cache and silence bug reports.
> > Count error whenever slab_bug() or slab_fix() is called or when
> > the count of pages is wrong.
> >
> > Signed-off-by: Oliver Glitta <glittao@gmail.com>
>
> I think I had already reviewed v4, and the changes here are fine:
>
> Reviewed-by: Marco Elver <elver@google.com>

Thank you again.

I'm sorry about that, I forgot to add these tags.

> Others who had reviewed/acked v4, probably need to re-ack/review.
> Note, I think if you addressed the comments and didn't change much
> else, you can typically carry the acks/reviews, unless the other
> person changed their mind explicitly.
>
> > ---
> > Changes since v4
> > Use two tests with KASAN dependency.
> > Remove setting current test during init and exit.
> >
> > Changes since v3
> >
> > Use kunit_resource to silence bug reports and count errors suggested by
> > Marco Elver.
> > Make the test depends on !KASAN thanks to report from the kernel test
robot.
> >
> > Changes since v2
> >
> > Use bit operation & instead of logical && as reported by kernel test
> > robot and Dan Carpenter
> >
> > Changes since v1
> >
> > Conversion from kselftest to KUnit test suggested by Marco Elver.
> > Error silencing.
> > Error counting improvements.
> >
> >  lib/Kconfig.debug |  12 ++++
> >  lib/Makefile      |   1 +
> >  lib/slub_kunit.c  | 155 ++++++++++++++++++++++++++++++++++++++++++++++
> >  mm/slab.h         |   1 +
> >  mm/slub.c         |  46 +++++++++++++-
> >  5 files changed, 212 insertions(+), 3 deletions(-)
> >  create mode 100644 lib/slub_kunit.c
> >
> > diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
> > index 678c13967580..7723f58a9394 100644
> > --- a/lib/Kconfig.debug
> > +++ b/lib/Kconfig.debug
> > @@ -2429,6 +2429,18 @@ config BITS_TEST
> >
> >           If unsure, say N.
> >
> > +config SLUB_KUNIT_TEST
> > +       tristate "KUnit test for SLUB cache error detection" if
!KUNIT_ALL_TESTS
> > +       depends on SLUB_DEBUG && KUNIT
> > +       default KUNIT_ALL_TESTS
> > +       help
> > +         This builds SLUB allocator unit test.
> > +         Tests SLUB cache debugging functionality.
> > +         For more information on KUnit and unit tests in general
please refer
> > +         to the KUnit documentation in Documentation/dev-tools/kunit/.
> > +
> > +         If unsure, say N.
> > +
> >  config TEST_UDELAY
> >         tristate "udelay test driver"
> >         help
> > diff --git a/lib/Makefile b/lib/Makefile
> > index e11cfc18b6c0..386215dcb0a0 100644
> > --- a/lib/Makefile
> > +++ b/lib/Makefile
> > @@ -353,5 +353,6 @@ obj-$(CONFIG_LIST_KUNIT_TEST) += list-test.o
> >  obj-$(CONFIG_LINEAR_RANGES_TEST) += test_linear_ranges.o
> >  obj-$(CONFIG_BITS_TEST) += test_bits.o
> >  obj-$(CONFIG_CMDLINE_KUNIT_TEST) += cmdline_kunit.o
> > +obj-$(CONFIG_SLUB_KUNIT_TEST) += slub_kunit.o
> >
> >  obj-$(CONFIG_GENERIC_LIB_DEVMEM_IS_ALLOWED) += devmem_is_allowed.o
> > diff --git a/lib/slub_kunit.c b/lib/slub_kunit.c
> > new file mode 100644
> > index 000000000000..f28965f64ef6
> > --- /dev/null
> > +++ b/lib/slub_kunit.c
> > @@ -0,0 +1,155 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +#include <kunit/test.h>
> > +#include <linux/mm.h>
> > +#include <linux/slab.h>
> > +#include <linux/module.h>
> > +#include <linux/kernel.h>
> > +#include "../mm/slab.h"
> > +
> > +static struct kunit_resource resource;
> > +static int slab_errors;
> > +
> > +static void test_clobber_zone(struct kunit *test)
> > +{
> > +       struct kmem_cache *s = kmem_cache_create("TestSlub_RZ_alloc",
64, 0,
> > +                               SLAB_RED_ZONE, NULL);
> > +       u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
> > +
> > +       kasan_disable_current();
> > +       p[64] = 0x12;
> > +
> > +       validate_slab_cache(s);
> > +       KUNIT_EXPECT_EQ(test, 2, slab_errors);
> > +
> > +       kasan_enable_current();
> > +       kmem_cache_free(s, p);
> > +       kmem_cache_destroy(s);
> > +}
> > +
> > +#ifndef CONFIG_KASAN
> > +static void test_next_pointer(struct kunit *test)
> > +{
> > +       struct kmem_cache *s =
kmem_cache_create("TestSlub_next_ptr_free", 64, 0,
> > +                               SLAB_POISON, NULL);
> > +       u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
> > +       unsigned long tmp;
> > +       unsigned long *ptr_addr;
> > +
> > +       kmem_cache_free(s, p);
> > +
> > +       ptr_addr = (unsigned long *)(p + s->offset);
> > +       tmp = *ptr_addr;
> > +       p[s->offset] = 0x12;
> > +
> > +       /*
> > +        * Expecting three errors.
> > +        * One for the corrupted freechain and the other one for the
wrong
> > +        * count of objects in use. The third error is fixing broken
cache.
> > +        */
> > +       validate_slab_cache(s);
> > +       KUNIT_EXPECT_EQ(test, 3, slab_errors);
> > +
> > +       /*
> > +        * Try to repair corrupted freepointer.
> > +        * Still expecting two errors. The first for the wrong count
> > +        * of objects in use.
> > +        * The second error is for fixing broken cache.
> > +        */
> > +       *ptr_addr = tmp;
> > +       slab_errors = 0;
> > +
> > +       validate_slab_cache(s);
> > +       KUNIT_EXPECT_EQ(test, 2, slab_errors);
> > +
> > +       /*
> > +        * Previous validation repaired the count of objects in use.
> > +        * Now expecting no error.
> > +        */
> > +       slab_errors = 0;
> > +       validate_slab_cache(s);
> > +       KUNIT_EXPECT_EQ(test, 0, slab_errors);
> > +
> > +       kmem_cache_destroy(s);
> > +}
> > +
> > +static void test_first_word(struct kunit *test)
> > +{
> > +       struct kmem_cache *s =
kmem_cache_create("TestSlub_1th_word_free", 64, 0,
> > +                               SLAB_POISON, NULL);
> > +       u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
> > +
> > +       kmem_cache_free(s, p);
> > +       *p = 0x78;
> > +
> > +       validate_slab_cache(s);
> > +       KUNIT_EXPECT_EQ(test, 2, slab_errors);
> > +
> > +       kmem_cache_destroy(s);
> > +}
> > +
> > +static void test_clobber_50th_byte(struct kunit *test)
> > +{
> > +       struct kmem_cache *s =
kmem_cache_create("TestSlub_50th_word_free", 64, 0,
> > +                               SLAB_POISON, NULL);
> > +       u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
> > +
> > +       kmem_cache_free(s, p);
> > +       p[50] = 0x9a;
> > +
> > +       validate_slab_cache(s);
> > +       KUNIT_EXPECT_EQ(test, 2, slab_errors);
> > +
> > +       kmem_cache_destroy(s);
> > +}
> > +#endif
> > +
> > +static void test_clobber_redzone_free(struct kunit *test)
> > +{
> > +       struct kmem_cache *s = kmem_cache_create("TestSlub_RZ_free",
64, 0,
> > +                               SLAB_RED_ZONE, NULL);
> > +       u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
> > +
> > +       kasan_disable_current();
> > +       kmem_cache_free(s, p);
> > +       p[64] = 0xab;
> > +
> > +       validate_slab_cache(s);
> > +       KUNIT_EXPECT_EQ(test, 2, slab_errors);
> > +
> > +       kasan_enable_current();
> > +       kmem_cache_destroy(s);
> > +}
> > +
> > +static int test_init(struct kunit *test)
> > +{
> > +       slab_errors = 0;
> > +
> > +       kunit_add_named_resource(test, NULL, NULL, &resource,
> > +                                       "slab_errors", &slab_errors);
> > +       return 0;
> > +}
> > +
> > +static void test_exit(struct kunit *test) {}
>
> Does removing test_exit() and not setting it below work?

Yes, this works. Thank you for that. I tried to remove the function but I
didn't think about not setting it, so it didn't work. I will fix it.

> > +static struct kunit_case test_cases[] = {
> > +       KUNIT_CASE(test_clobber_zone),
> > +
> > +#ifndef CONFIG_KASAN
> > +       KUNIT_CASE(test_next_pointer),
> > +       KUNIT_CASE(test_first_word),
> > +       KUNIT_CASE(test_clobber_50th_byte),
> > +#endif
> > +
> > +       KUNIT_CASE(test_clobber_redzone_free),
> > +       {}
>
> This is better, and tells us which tests exactly were the ones causing
> problems with KASAN.
>
>
> > +};
> > +
> > +static struct kunit_suite test_suite = {
> > +       .name = "slub_test",
> > +       .init = test_init,
> > +       .exit = test_exit,
> > +       .test_cases = test_cases,
> > +};
> > +kunit_test_suite(test_suite);
> > +
> > +MODULE_LICENSE("GPL");
> > diff --git a/mm/slab.h b/mm/slab.h
> > index 18c1927cd196..9b690fa44cae 100644
> > --- a/mm/slab.h
> > +++ b/mm/slab.h
> > @@ -215,6 +215,7 @@ DECLARE_STATIC_KEY_TRUE(slub_debug_enabled);
> >  DECLARE_STATIC_KEY_FALSE(slub_debug_enabled);
> >  #endif
> >  extern void print_tracking(struct kmem_cache *s, void *object);
> > +long validate_slab_cache(struct kmem_cache *s);
> >  #else
> >  static inline void print_tracking(struct kmem_cache *s, void *object)
> >  {
> > diff --git a/mm/slub.c b/mm/slub.c
> > index feda53ae62ba..985fd6ef033c 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -35,6 +35,7 @@
> >  #include <linux/prefetch.h>
> >  #include <linux/memcontrol.h>
> >  #include <linux/random.h>
> > +#include <kunit/test.h>
> >
> >  #include <trace/events/kmem.h>
> >
> > @@ -447,6 +448,26 @@ static inline bool cmpxchg_double_slab(struct
kmem_cache *s, struct page *page,
> >  static unsigned long object_map[BITS_TO_LONGS(MAX_OBJS_PER_PAGE)];
> >  static DEFINE_SPINLOCK(object_map_lock);
> >
> > +#if IS_ENABLED(CONFIG_KUNIT)
> > +static bool slab_add_kunit_errors(void)
> > +{
> > +       struct kunit_resource *resource;
> > +
> > +       if (likely(!current->kunit_test))
> > +               return false;
> > +
> > +       resource = kunit_find_named_resource(current->kunit_test,
"slab_errors");
> > +       if (!resource)
> > +               return false;
> > +
> > +       (*(int *)resource->data)++;
> > +       kunit_put_resource(resource);
> > +       return true;
> > +}
> > +#else
> > +static inline bool slab_add_kunit_errors(void) { return false; }
> > +#endif
> > +
> >  /*
> >   * Determine a map of object in use on a page.
> >   *
> > @@ -677,6 +698,9 @@ static void slab_fix(struct kmem_cache *s, char
*fmt, ...)
> >         struct va_format vaf;
> >         va_list args;
> >
> > +       if (slab_add_kunit_errors())
> > +               return;
> > +
> >         va_start(args, fmt);
> >         vaf.fmt = fmt;
> >         vaf.va = &args;
> > @@ -740,6 +764,9 @@ static void print_trailer(struct kmem_cache *s,
struct page *page, u8 *p)
> >  void object_err(struct kmem_cache *s, struct page *page,
> >                         u8 *object, char *reason)
> >  {
> > +       if (slab_add_kunit_errors())
> > +               return;
> > +
> >         slab_bug(s, "%s", reason);
> >         print_trailer(s, page, object);
> >  }
> > @@ -750,6 +777,9 @@ static __printf(3, 4) void slab_err(struct
kmem_cache *s, struct page *page,
> >         va_list args;
> >         char buf[100];
> >
> > +       if (slab_add_kunit_errors())
> > +               return;
> > +
> >         va_start(args, fmt);
> >         vsnprintf(buf, sizeof(buf), fmt, args);
> >         va_end(args);
> > @@ -799,12 +829,16 @@ static int check_bytes_and_report(struct
kmem_cache *s, struct page *page,
> >         while (end > fault && end[-1] == value)
> >                 end--;
> >
> > +       if (slab_add_kunit_errors())
> > +               goto skip_bug_print;
> > +
> >         slab_bug(s, "%s overwritten", what);
> >         pr_err("0x%p-0x%p @offset=%tu. First byte 0x%x instead of
0x%x\n",
> >                                         fault, end - 1, fault - addr,
> >                                         fault[0], value);
> >         print_trailer(s, page, object);
> >
> > +skip_bug_print:
> >         restore_bytes(s, what, value, fault, end);
> >         return 0;
> >  }
> > @@ -4662,9 +4696,11 @@ static int validate_slab_node(struct kmem_cache
*s,
> >                 validate_slab(s, page);
> >                 count++;
> >         }
> > -       if (count != n->nr_partial)
> > +       if (count != n->nr_partial) {
> >                 pr_err("SLUB %s: %ld partial slabs counted but
counter=%ld\n",
> >                        s->name, count, n->nr_partial);
> > +               slab_add_kunit_errors();
> > +       }
> >
> >         if (!(s->flags & SLAB_STORE_USER))
> >                 goto out;
> > @@ -4673,16 +4709,18 @@ static int validate_slab_node(struct kmem_cache
*s,
> >                 validate_slab(s, page);
> >                 count++;
> >         }
> > -       if (count != atomic_long_read(&n->nr_slabs))
> > +       if (count != atomic_long_read(&n->nr_slabs)) {
> >                 pr_err("SLUB: %s %ld slabs counted but counter=%ld\n",
> >                        s->name, count, atomic_long_read(&n->nr_slabs));
> > +               slab_add_kunit_errors();
> > +       }
> >
> >  out:
> >         spin_unlock_irqrestore(&n->list_lock, flags);
> >         return count;
> >  }
> >
> > -static long validate_slab_cache(struct kmem_cache *s)
> > +long validate_slab_cache(struct kmem_cache *s)
> >  {
> >         int node;
> >         unsigned long count = 0;
> > @@ -4694,6 +4732,8 @@ static long validate_slab_cache(struct kmem_cache
*s)
> >
> >         return count;
> >  }
> > +EXPORT_SYMBOL(validate_slab_cache);
> > +
> >  /*
> >   * Generate lists of code addresses where slabcache objects are
allocated
> >   * and freed.
> > --
> > 2.31.1.272.g89b43f80a5
> >


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH] mm/slub, kunit: add a KUnit test for SLUB debugging functionality-fix
  2021-05-11 15:07 ` [PATCH v5 2/3] mm/slub, kunit: add a KUnit test for SLUB debugging functionality glittao
  2021-05-11 15:16   ` Marco Elver
@ 2021-05-12 14:06   ` glittao
  2021-05-13  4:44   ` [PATCH v5 2/3] mm/slub, kunit: add a KUnit test for SLUB debugging functionality Andrew Morton
  2 siblings, 0 replies; 11+ messages in thread
From: glittao @ 2021-05-12 14:06 UTC (permalink / raw)
  To: brendanhiggins, cl, penberg, rientjes, iamjoonsoo.kim, akpm, vbabka
  Cc: linux-kernel, linux-kselftest, kunit-dev, linux-mm, elver,
	dlatypov, Oliver Glitta

From: Oliver Glitta <glittao@gmail.com>

Remove the unused function test_exit() from the SLUB KUnit test.

Reported-by: Marco Elver <elver@google.com>
Signed-off-by: Oliver Glitta <glittao@gmail.com>
---
 lib/slub_kunit.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/lib/slub_kunit.c b/lib/slub_kunit.c
index f28965f64ef6..8662dc6cb509 100644
--- a/lib/slub_kunit.c
+++ b/lib/slub_kunit.c
@@ -129,8 +129,6 @@ static int test_init(struct kunit *test)
 	return 0;
 }
 
-static void test_exit(struct kunit *test) {}
-
 static struct kunit_case test_cases[] = {
 	KUNIT_CASE(test_clobber_zone),
 
@@ -147,7 +145,6 @@ static struct kunit_case test_cases[] = {
 static struct kunit_suite test_suite = {
 	.name = "slub_test",
 	.init = test_init,
-	.exit = test_exit,
 	.test_cases = test_cases,
 };
 kunit_test_suite(test_suite);
-- 
2.31.1.272.g89b43f80a5



^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [PATCH v5 2/3] mm/slub, kunit: add a KUnit test for SLUB debugging functionality
  2021-05-11 15:07 ` [PATCH v5 2/3] mm/slub, kunit: add a KUnit test for SLUB debugging functionality glittao
  2021-05-11 15:16   ` Marco Elver
  2021-05-12 14:06   ` [PATCH] mm/slub, kunit: add a KUnit test for SLUB debugging functionality-fix glittao
@ 2021-05-13  4:44   ` Andrew Morton
  2021-05-13  8:54     ` Marco Elver
  2021-05-13  9:32     ` Oliver Glitta
  2 siblings, 2 replies; 11+ messages in thread
From: Andrew Morton @ 2021-05-13  4:44 UTC (permalink / raw)
  To: glittao
  Cc: brendanhiggins, cl, penberg, rientjes, iamjoonsoo.kim, vbabka,
	linux-kernel, linux-kselftest, kunit-dev, linux-mm, elver,
	dlatypov

On Tue, 11 May 2021 17:07:33 +0200 glittao@gmail.com wrote:

> From: Oliver Glitta <glittao@gmail.com>
> 
> SLUB has resiliency_test() function which is hidden behind #ifdef
> SLUB_RESILIENCY_TEST that is not part of Kconfig, so nobody
> runs it. KUnit should be a proper replacement for it.
> 
> Try changing byte in redzone after allocation and changing
> pointer to next free node, first byte, 50th byte and redzone
> byte. Check if validation finds errors.
> 
> There are several differences from the original resiliency test:
> Tests create own caches with known state instead of corrupting
> shared kmalloc caches.
> 
> The corruption of freepointer uses correct offset, the original
> resiliency test got broken with freepointer changes.
> 
> Scratch changing random byte test, because it does not have
> meaning in this form where we need deterministic results.
> 
> Add new option CONFIG_SLUB_KUNIT_TEST in Kconfig.
> Tests next_pointer, first_word and clobber_50th_byte do not run
> with KASAN option on. Because the test deliberately modifies non-allocated
> objects.
> 
> Use kunit_resource to count errors in cache and silence bug reports.
> Count error whenever slab_bug() or slab_fix() is called or when
> the count of pages is wrong.
> 
> ...
>
>  lib/slub_kunit.c  | 155 ++++++++++++++++++++++++++++++++++++++++++++++
>  mm/slab.h         |   1 +
>  mm/slub.c         |  46 +++++++++++++-
>  5 files changed, 212 insertions(+), 3 deletions(-)
>  create mode 100644 lib/slub_kunit.c
> 
> diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
> index 678c13967580..7723f58a9394 100644
> --- a/lib/Kconfig.debug
> +++ b/lib/Kconfig.debug
> @@ -2429,6 +2429,18 @@ config BITS_TEST
> 
>  	  If unsure, say N.
> 
> +config SLUB_KUNIT_TEST
> +	tristate "KUnit test for SLUB cache error detection" if !KUNIT_ALL_TESTS

This means it can be compiled as a kernel module.  Did you runtime test the
code as a module?

ERROR: modpost: "kasan_enable_current" [lib/slub_kunit.ko] undefined!
ERROR: modpost: "kasan_disable_current" [lib/slub_kunit.ko] undefined!

--- a/mm/kasan/common.c~a
+++ a/mm/kasan/common.c
@@ -51,11 +51,14 @@ void kasan_enable_current(void)
 {
 	current->kasan_depth++;
 }
+EXPORT_SYMBOL(kasan_enable_current);
 
 void kasan_disable_current(void)
 {
 	current->kasan_depth--;
 }
+EXPORT_SYMBOL(kasan_disable_current);
+
 #endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
 
 void __kasan_unpoison_range(const void *address, size_t size)
_
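The exports are needed because the test module itself calls these helpers: the commit message says the tests deliberately modify non-allocated objects, so each such write has to be hidden from KASAN or it would be reported before SLUB's own checks see it. A sketch of that call pattern, as a fragment from the middle of a test case like the one sketched earlier (the exact placement in lib/slub_kunit.c is assumed):

	kasan_disable_current();	/* silence KASAN for the intentional bad write */
	p[64] = 0x12;			/* deliberately corrupt memory past the object */
	validate_slab_cache(s);		/* only SLUB debug should see and count this */
	kasan_enable_current();

Built as lib/slub_kunit.ko, those two calls are unresolved module symbols, hence the modpost errors and the EXPORT_SYMBOL() fix above.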



^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v5 2/3] mm/slub, kunit: add a KUnit test for SLUB debugging functionality
  2021-05-13  4:44   ` [PATCH v5 2/3] mm/slub, kunit: add a KUnit test for SLUB debugging functionality Andrew Morton
@ 2021-05-13  8:54     ` Marco Elver
  2021-05-13  9:32     ` Oliver Glitta
  1 sibling, 0 replies; 11+ messages in thread
From: Marco Elver @ 2021-05-13  8:54 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Oliver Glitta, Brendan Higgins, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Vlastimil Babka, LKML,
	open list:KERNEL SELFTEST FRAMEWORK, KUnit Development,
	Linux Memory Management List, Daniel Latypov

On Thu, 13 May 2021 at 06:44, Andrew Morton <akpm@linux-foundation.org> wrote:
> On Tue, 11 May 2021 17:07:33 +0200 glittao@gmail.com wrote:
> > [...]
> >
> > +config SLUB_KUNIT_TEST
> > +     tristate "KUnit test for SLUB cache error detection" if !KUNIT_ALL_TESTS
>
> This means it can be compiled as a kernel module.  Did you runtime test the
> code as a module?
>
> ERROR: modpost: "kasan_enable_current" [lib/slub_kunit.ko] undefined!
> ERROR: modpost: "kasan_disable_current" [lib/slub_kunit.ko] undefined!
>
> [...]

Acked-by: Marco Elver <elver@google.com>


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v5 2/3] mm/slub, kunit: add a KUnit test for SLUB debugging functionality
  2021-05-13  4:44   ` [PATCH v5 2/3] mm/slub, kunit: add a KUnit test for SLUB debugging functionality Andrew Morton
  2021-05-13  8:54     ` Marco Elver
@ 2021-05-13  9:32     ` Oliver Glitta
  1 sibling, 0 replies; 11+ messages in thread
From: Oliver Glitta @ 2021-05-13  9:32 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Brendan Higgins, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Vlastimil Babka, LKML,
	open list:KERNEL SELFTEST FRAMEWORK, KUnit Development,
	Linux Memory Management List, Marco Elver, Daniel Latypov

On Thu 13 May 2021 at 06:44, Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Tue, 11 May 2021 17:07:33 +0200 glittao@gmail.com wrote:
>
> > [...]
> >
> > +config SLUB_KUNIT_TEST
> > +     tristate "KUnit test for SLUB cache error detection" if !KUNIT_ALL_TESTS
>
> This means it can be compiled as a kernel module.  Did you runtime test the
> code as a module?
>
We tested this as a module with the previous version, but I forgot to
try it with this new one, so we didn't catch this error.
Thank you for your fix.

> ERROR: modpost: "kasan_enable_current" [lib/slub_kunit.ko] undefined!
> ERROR: modpost: "kasan_disable_current" [lib/slub_kunit.ko] undefined!
>
> [...]
>


^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2021-05-13  9:32 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-05-11 15:07 [PATCH v5 1/3] kunit: make test->lock irq safe glittao
2021-05-11 15:07 ` [PATCH v5 2/3] mm/slub, kunit: add a KUnit test for SLUB debugging functionality glittao
2021-05-11 15:16   ` Marco Elver
2021-05-12 10:30     ` Vlastimil Babka
2021-05-12 12:24     ` Oliver Glitta
2021-05-12 14:06   ` [PATCH] mm/slub, kunit: add a KUnit test for SLUB debugging functionality-fix glittao
2021-05-13  4:44   ` [PATCH v5 2/3] mm/slub, kunit: add a KUnit test for SLUB debugging functionality Andrew Morton
2021-05-13  8:54     ` Marco Elver
2021-05-13  9:32     ` Oliver Glitta
2021-05-11 15:07 ` [PATCH v5 3/3] slub: remove resiliency_test() function glittao
2021-05-12 10:28 ` [PATCH v5 1/3] kunit: make test->lock irq safe Vlastimil Babka

This is a public inbox; see mirroring instructions
for how to clone and mirror all data and code used for this inbox,
as well as URLs for NNTP newsgroup(s).