linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/5] kasan: integrate with init_on_alloc/free
@ 2021-03-06  0:15 Andrey Konovalov
  2021-03-06  0:15 ` [PATCH 1/5] arm64: kasan: allow to init memory when setting tags Andrey Konovalov
                   ` (4 more replies)
  0 siblings, 5 replies; 14+ messages in thread
From: Andrey Konovalov @ 2021-03-06  0:15 UTC (permalink / raw)
  To: Alexander Potapenko
  Cc: Andrew Morton, Catalin Marinas, Will Deacon, Vincenzo Frascino,
	Dmitry Vyukov, Andrey Ryabinin, Marco Elver, Peter Collingbourne,
	Evgenii Stepanov, Branislav Rankov, Kevin Brodsky, kasan-dev,
	linux-arm-kernel, linux-mm, linux-kernel, Andrey Konovalov

This goes on top of:

[v3] kasan, mm: fix crash with HW_TAGS and DEBUG_PAGEALLOC
[v3] mm, kasan: don't poison boot memory with tag-based modes

This patch series integrates HW_TAGS KASAN with init_on_alloc/free
by initializing memory via the same arm64 instruction that sets memory
tags.

This is expected to improve HW_TAGS KASAN performance when
init_on_alloc/free is enabled. The exact performance numbers are unknown
as MTE-enabled hardware doesn't exist yet.
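
As a rough sketch of the resulting allocation path (condensed from patch 3
of this series; the standalone function below is only for illustration and
is not part of the patches), the init decision is handed to the KASAN hook,
which performs the zeroing together with tag setting under HW_TAGS, while
the memset()-based path is kept for the other modes:

  static void post_alloc_sketch(struct page *page, unsigned int order,
                                gfp_t gfp_flags)
  {
          bool init = !want_init_on_free() && want_init_on_alloc(gfp_flags);

          /* Sets the page tags and, under HW_TAGS with init, zeroes the pages. */
          kasan_alloc_pages(page, order, init);

          /* Fall back to memset()-based zeroing for the other KASAN modes. */
          if (init && !IS_ENABLED(CONFIG_KASAN_HW_TAGS))
                  kernel_init_free_pages(page, 1 << order);
  }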

Andrey Konovalov (5):
  arm64: kasan: allow to init memory when setting tags
  kasan: init memory in kasan_(un)poison for HW_TAGS
  kasan, mm: integrate page_alloc init with HW_TAGS
  kasan, mm: integrate slab init_on_alloc with HW_TAGS
  kasan, mm: integrate slab init_on_free with HW_TAGS

 arch/arm64/include/asm/memory.h    |  4 +-
 arch/arm64/include/asm/mte-kasan.h | 20 ++++++---
 include/linux/kasan.h              | 34 ++++++++-------
 lib/test_kasan.c                   |  4 +-
 mm/kasan/common.c                  | 45 +++++++++----------
 mm/kasan/generic.c                 | 12 ++---
 mm/kasan/kasan.h                   | 19 ++++----
 mm/kasan/shadow.c                  | 10 ++---
 mm/kasan/sw_tags.c                 |  2 +-
 mm/mempool.c                       |  4 +-
 mm/page_alloc.c                    | 37 +++++++++++-----
 mm/slab.c                          | 43 ++++++++++--------
 mm/slab.h                          | 17 ++++++--
 mm/slub.c                          | 70 +++++++++++++++---------------
 14 files changed, 182 insertions(+), 139 deletions(-)

-- 
2.30.1.766.gb4fecdf3b7-goog


^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH 1/5] arm64: kasan: allow to init memory when setting tags
  2021-03-06  0:15 [PATCH 0/5] kasan: integrate with init_on_alloc/free Andrey Konovalov
@ 2021-03-06  0:15 ` Andrey Konovalov
  2021-03-08 11:22   ` Marco Elver
  2021-03-06  0:15 ` [PATCH 2/5] kasan: init memory in kasan_(un)poison for HW_TAGS Andrey Konovalov
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 14+ messages in thread
From: Andrey Konovalov @ 2021-03-06  0:15 UTC (permalink / raw)
  To: Alexander Potapenko
  Cc: Andrew Morton, Catalin Marinas, Will Deacon, Vincenzo Frascino,
	Dmitry Vyukov, Andrey Ryabinin, Marco Elver, Peter Collingbourne,
	Evgenii Stepanov, Branislav Rankov, Kevin Brodsky, kasan-dev,
	linux-arm-kernel, linux-mm, linux-kernel, Andrey Konovalov

This change adds an argument to mte_set_mem_tag_range() that allows
enabling memory initialization when setting the allocation tags.
The implementation uses the stzg instruction instead of stg when this
argument requests memory initialization.

Combining setting allocation tags with memory initialization will
improve HW_TAGS KASAN performance when init_on_alloc/free is enabled.

This change doesn't integrate memory initialization with KASAN;
that is done in subsequent patches in this series.
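
As a hedged usage sketch (the buffer, tag value, and function name below are
made up for illustration; real callers must also ensure the memory is mapped
with tagged attributes), the new argument selects between the two behaviours:

  static u8 example_buf[64] __aligned(MTE_GRANULE_SIZE); /* illustrative only */

  static void mte_retag_example(u8 tag)
  {
          /* Set the allocation tag and zero the range in one pass (stzg). */
          mte_set_mem_tag_range(example_buf, sizeof(example_buf), tag, true);

          /* Set the allocation tag only, leaving the contents untouched (stg). */
          mte_set_mem_tag_range(example_buf, sizeof(example_buf), tag, false);
  }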

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 arch/arm64/include/asm/memory.h    |  4 ++--
 arch/arm64/include/asm/mte-kasan.h | 20 ++++++++++++++------
 mm/kasan/kasan.h                   |  9 +++++----
 3 files changed, 21 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index c759faf7a1ff..f1ba48b4347d 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -248,8 +248,8 @@ static inline const void *__tag_set(const void *addr, u8 tag)
 #define arch_init_tags(max_tag)			mte_init_tags(max_tag)
 #define arch_get_random_tag()			mte_get_random_tag()
 #define arch_get_mem_tag(addr)			mte_get_mem_tag(addr)
-#define arch_set_mem_tag_range(addr, size, tag)	\
-			mte_set_mem_tag_range((addr), (size), (tag))
+#define arch_set_mem_tag_range(addr, size, tag, init)	\
+			mte_set_mem_tag_range((addr), (size), (tag), (init))
 #endif /* CONFIG_KASAN_HW_TAGS */
 
 /*
diff --git a/arch/arm64/include/asm/mte-kasan.h b/arch/arm64/include/asm/mte-kasan.h
index 7ab500e2ad17..35fe549f7ea4 100644
--- a/arch/arm64/include/asm/mte-kasan.h
+++ b/arch/arm64/include/asm/mte-kasan.h
@@ -53,7 +53,8 @@ static inline u8 mte_get_random_tag(void)
  * Note: The address must be non-NULL and MTE_GRANULE_SIZE aligned and
  * size must be non-zero and MTE_GRANULE_SIZE aligned.
  */
-static inline void mte_set_mem_tag_range(void *addr, size_t size, u8 tag)
+static inline void mte_set_mem_tag_range(void *addr, size_t size,
+						u8 tag, bool init)
 {
 	u64 curr, end;
 
@@ -68,10 +69,16 @@ static inline void mte_set_mem_tag_range(void *addr, size_t size, u8 tag)
 		 * 'asm volatile' is required to prevent the compiler to move
 		 * the statement outside of the loop.
 		 */
-		asm volatile(__MTE_PREAMBLE "stg %0, [%0]"
-			     :
-			     : "r" (curr)
-			     : "memory");
+		if (init)
+			asm volatile(__MTE_PREAMBLE "stzg %0, [%0]"
+				     :
+				     : "r" (curr)
+				     : "memory");
+		else
+			asm volatile(__MTE_PREAMBLE "stg %0, [%0]"
+				     :
+				     : "r" (curr)
+				     : "memory");
 
 		curr += MTE_GRANULE_SIZE;
 	} while (curr != end);
@@ -100,7 +107,8 @@ static inline u8 mte_get_random_tag(void)
 	return 0xFF;
 }
 
-static inline void mte_set_mem_tag_range(void *addr, size_t size, u8 tag)
+static inline void mte_set_mem_tag_range(void *addr, size_t size,
+						u8 tag, bool init)
 {
 }
 
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 8c55634d6edd..7fbb32234414 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -291,7 +291,7 @@ static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
 #define arch_get_mem_tag(addr)	(0xFF)
 #endif
 #ifndef arch_set_mem_tag_range
-#define arch_set_mem_tag_range(addr, size, tag) ((void *)(addr))
+#define arch_set_mem_tag_range(addr, size, tag, init) ((void *)(addr))
 #endif
 
 #define hw_enable_tagging()			arch_enable_tagging()
@@ -299,7 +299,8 @@ static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
 #define hw_set_tagging_report_once(state)	arch_set_tagging_report_once(state)
 #define hw_get_random_tag()			arch_get_random_tag()
 #define hw_get_mem_tag(addr)			arch_get_mem_tag(addr)
-#define hw_set_mem_tag_range(addr, size, tag)	arch_set_mem_tag_range((addr), (size), (tag))
+#define hw_set_mem_tag_range(addr, size, tag, init) \
+			arch_set_mem_tag_range((addr), (size), (tag), (init))
 
 #else /* CONFIG_KASAN_HW_TAGS */
 
@@ -343,7 +344,7 @@ static inline void kasan_poison(const void *addr, size_t size, u8 value)
 	if (WARN_ON(size & KASAN_GRANULE_MASK))
 		return;
 
-	hw_set_mem_tag_range((void *)addr, size, value);
+	hw_set_mem_tag_range((void *)addr, size, value, false);
 }
 
 static inline void kasan_unpoison(const void *addr, size_t size)
@@ -360,7 +361,7 @@ static inline void kasan_unpoison(const void *addr, size_t size)
 		return;
 	size = round_up(size, KASAN_GRANULE_SIZE);
 
-	hw_set_mem_tag_range((void *)addr, size, tag);
+	hw_set_mem_tag_range((void *)addr, size, tag, false);
 }
 
 static inline bool kasan_byte_accessible(const void *addr)
-- 
2.30.1.766.gb4fecdf3b7-goog


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH 2/5] kasan: init memory in kasan_(un)poison for HW_TAGS
  2021-03-06  0:15 [PATCH 0/5] kasan: integrate with init_on_alloc/free Andrey Konovalov
  2021-03-06  0:15 ` [PATCH 1/5] arm64: kasan: allow to init memory when setting tags Andrey Konovalov
@ 2021-03-06  0:15 ` Andrey Konovalov
  2021-03-08 11:26   ` Marco Elver
  2021-03-06  0:15 ` [PATCH 3/5] kasan, mm: integrate page_alloc init with HW_TAGS Andrey Konovalov
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 14+ messages in thread
From: Andrey Konovalov @ 2021-03-06  0:15 UTC (permalink / raw)
  To: Alexander Potapenko
  Cc: Andrew Morton, Catalin Marinas, Will Deacon, Vincenzo Frascino,
	Dmitry Vyukov, Andrey Ryabinin, Marco Elver, Peter Collingbourne,
	Evgenii Stepanov, Branislav Rankov, Kevin Brodsky, kasan-dev,
	linux-arm-kernel, linux-mm, linux-kernel, Andrey Konovalov

This change adds an argument to kasan_poison() and kasan_unpoison()
that allows initializing memory along with setting the tags for HW_TAGS.

Combining setting allocation tags with memory initialization will
improve HW_TAGS KASAN performance when init_on_alloc/free is enabled.

This change doesn't integrate memory initialization with KASAN;
that is done in subsequent patches in this series.
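
A minimal sketch of the new calling convention (kasan_poison() and
kasan_unpoison() are KASAN-internal helpers, so the wrapper below is purely
illustrative): the extra bool is simply forwarded to hw_set_mem_tag_range(),
so only HW_TAGS builds ever act on it.

  static void object_lifetime_sketch(void *object, size_t size)
  {
          /*
           * Make the object accessible; under HW_TAGS, passing true also
           * zeroes it while the tags are being set.
           */
          kasan_unpoison(object, size, true);

          /* ... the object is used here ... */

          /* Mark the object as freed; false leaves its contents untouched. */
          kasan_poison(object, round_up(size, KASAN_GRANULE_SIZE),
                       KASAN_KMALLOC_FREE, false);
  }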

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 lib/test_kasan.c   |  4 ++--
 mm/kasan/common.c  | 28 ++++++++++++++--------------
 mm/kasan/generic.c | 12 ++++++------
 mm/kasan/kasan.h   | 14 ++++++++------
 mm/kasan/shadow.c  | 10 +++++-----
 mm/kasan/sw_tags.c |  2 +-
 6 files changed, 36 insertions(+), 34 deletions(-)

diff --git a/lib/test_kasan.c b/lib/test_kasan.c
index e5647d147b35..d77c45edc7cd 100644
--- a/lib/test_kasan.c
+++ b/lib/test_kasan.c
@@ -1044,14 +1044,14 @@ static void match_all_mem_tag(struct kunit *test)
 			continue;
 
 		/* Mark the first memory granule with the chosen memory tag. */
-		kasan_poison(ptr, KASAN_GRANULE_SIZE, (u8)tag);
+		kasan_poison(ptr, KASAN_GRANULE_SIZE, (u8)tag, false);
 
 		/* This access must cause a KASAN report. */
 		KUNIT_EXPECT_KASAN_FAIL(test, *ptr = 0);
 	}
 
 	/* Recover the memory tag and free. */
-	kasan_poison(ptr, KASAN_GRANULE_SIZE, get_tag(ptr));
+	kasan_poison(ptr, KASAN_GRANULE_SIZE, get_tag(ptr), false);
 	kfree(ptr);
 }
 
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index b5e08d4cefec..316f7f8cd8e6 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -60,7 +60,7 @@ void kasan_disable_current(void)
 
 void __kasan_unpoison_range(const void *address, size_t size)
 {
-	kasan_unpoison(address, size);
+	kasan_unpoison(address, size, false);
 }
 
 #if CONFIG_KASAN_STACK
@@ -69,7 +69,7 @@ void kasan_unpoison_task_stack(struct task_struct *task)
 {
 	void *base = task_stack_page(task);
 
-	kasan_unpoison(base, THREAD_SIZE);
+	kasan_unpoison(base, THREAD_SIZE, false);
 }
 
 /* Unpoison the stack for the current task beyond a watermark sp value. */
@@ -82,7 +82,7 @@ asmlinkage void kasan_unpoison_task_stack_below(const void *watermark)
 	 */
 	void *base = (void *)((unsigned long)watermark & ~(THREAD_SIZE - 1));
 
-	kasan_unpoison(base, watermark - base);
+	kasan_unpoison(base, watermark - base, false);
 }
 #endif /* CONFIG_KASAN_STACK */
 
@@ -108,14 +108,14 @@ void __kasan_alloc_pages(struct page *page, unsigned int order)
 	tag = kasan_random_tag();
 	for (i = 0; i < (1 << order); i++)
 		page_kasan_tag_set(page + i, tag);
-	kasan_unpoison(page_address(page), PAGE_SIZE << order);
+	kasan_unpoison(page_address(page), PAGE_SIZE << order, false);
 }
 
 void __kasan_free_pages(struct page *page, unsigned int order)
 {
 	if (likely(!PageHighMem(page)))
 		kasan_poison(page_address(page), PAGE_SIZE << order,
-			     KASAN_FREE_PAGE);
+			     KASAN_FREE_PAGE, false);
 }
 
 /*
@@ -251,18 +251,18 @@ void __kasan_poison_slab(struct page *page)
 	for (i = 0; i < compound_nr(page); i++)
 		page_kasan_tag_reset(page + i);
 	kasan_poison(page_address(page), page_size(page),
-		     KASAN_KMALLOC_REDZONE);
+		     KASAN_KMALLOC_REDZONE, false);
 }
 
 void __kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
 {
-	kasan_unpoison(object, cache->object_size);
+	kasan_unpoison(object, cache->object_size, false);
 }
 
 void __kasan_poison_object_data(struct kmem_cache *cache, void *object)
 {
 	kasan_poison(object, round_up(cache->object_size, KASAN_GRANULE_SIZE),
-			KASAN_KMALLOC_REDZONE);
+			KASAN_KMALLOC_REDZONE, false);
 }
 
 /*
@@ -351,7 +351,7 @@ static inline bool ____kasan_slab_free(struct kmem_cache *cache,
 	}
 
 	kasan_poison(object, round_up(cache->object_size, KASAN_GRANULE_SIZE),
-			KASAN_KMALLOC_FREE);
+			KASAN_KMALLOC_FREE, false);
 
 	if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine))
 		return false;
@@ -407,7 +407,7 @@ void __kasan_slab_free_mempool(void *ptr, unsigned long ip)
 	if (unlikely(!PageSlab(page))) {
 		if (____kasan_kfree_large(ptr, ip))
 			return;
-		kasan_poison(ptr, page_size(page), KASAN_FREE_PAGE);
+		kasan_poison(ptr, page_size(page), KASAN_FREE_PAGE, false);
 	} else {
 		____kasan_slab_free(page->slab_cache, ptr, ip, false);
 	}
@@ -453,7 +453,7 @@ void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
 	 * Unpoison the whole object.
 	 * For kmalloc() allocations, kasan_kmalloc() will do precise poisoning.
 	 */
-	kasan_unpoison(tagged_object, cache->object_size);
+	kasan_unpoison(tagged_object, cache->object_size, false);
 
 	/* Save alloc info (if possible) for non-kmalloc() allocations. */
 	if (kasan_stack_collection_enabled())
@@ -496,7 +496,7 @@ static inline void *____kasan_kmalloc(struct kmem_cache *cache,
 	redzone_end = round_up((unsigned long)(object + cache->object_size),
 				KASAN_GRANULE_SIZE);
 	kasan_poison((void *)redzone_start, redzone_end - redzone_start,
-			   KASAN_KMALLOC_REDZONE);
+			   KASAN_KMALLOC_REDZONE, false);
 
 	/*
 	 * Save alloc info (if possible) for kmalloc() allocations.
@@ -546,7 +546,7 @@ void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
 				KASAN_GRANULE_SIZE);
 	redzone_end = (unsigned long)ptr + page_size(virt_to_page(ptr));
 	kasan_poison((void *)redzone_start, redzone_end - redzone_start,
-		     KASAN_PAGE_REDZONE);
+		     KASAN_PAGE_REDZONE, false);
 
 	return (void *)ptr;
 }
@@ -563,7 +563,7 @@ void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flag
 	 * Part of it might already have been unpoisoned, but it's unknown
 	 * how big that part is.
 	 */
-	kasan_unpoison(object, size);
+	kasan_unpoison(object, size, false);
 
 	page = virt_to_head_page(object);
 
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index 2e55e0f82f39..53cbf28859b5 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -208,11 +208,11 @@ static void register_global(struct kasan_global *global)
 {
 	size_t aligned_size = round_up(global->size, KASAN_GRANULE_SIZE);
 
-	kasan_unpoison(global->beg, global->size);
+	kasan_unpoison(global->beg, global->size, false);
 
 	kasan_poison(global->beg + aligned_size,
 		     global->size_with_redzone - aligned_size,
-		     KASAN_GLOBAL_REDZONE);
+		     KASAN_GLOBAL_REDZONE, false);
 }
 
 void __asan_register_globals(struct kasan_global *globals, size_t size)
@@ -292,11 +292,11 @@ void __asan_alloca_poison(unsigned long addr, size_t size)
 	WARN_ON(!IS_ALIGNED(addr, KASAN_ALLOCA_REDZONE_SIZE));
 
 	kasan_unpoison((const void *)(addr + rounded_down_size),
-			size - rounded_down_size);
+			size - rounded_down_size, false);
 	kasan_poison(left_redzone, KASAN_ALLOCA_REDZONE_SIZE,
-		     KASAN_ALLOCA_LEFT);
+		     KASAN_ALLOCA_LEFT, false);
 	kasan_poison(right_redzone, padding_size + KASAN_ALLOCA_REDZONE_SIZE,
-		     KASAN_ALLOCA_RIGHT);
+		     KASAN_ALLOCA_RIGHT, false);
 }
 EXPORT_SYMBOL(__asan_alloca_poison);
 
@@ -306,7 +306,7 @@ void __asan_allocas_unpoison(const void *stack_top, const void *stack_bottom)
 	if (unlikely(!stack_top || stack_top > stack_bottom))
 		return;
 
-	kasan_unpoison(stack_top, stack_bottom - stack_top);
+	kasan_unpoison(stack_top, stack_bottom - stack_top, false);
 }
 EXPORT_SYMBOL(__asan_allocas_unpoison);
 
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 7fbb32234414..823a90d6a0cd 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -331,7 +331,7 @@ static inline u8 kasan_random_tag(void) { return 0; }
 
 #ifdef CONFIG_KASAN_HW_TAGS
 
-static inline void kasan_poison(const void *addr, size_t size, u8 value)
+static inline void kasan_poison(const void *addr, size_t size, u8 value, bool init)
 {
 	addr = kasan_reset_tag(addr);
 
@@ -344,10 +344,10 @@ static inline void kasan_poison(const void *addr, size_t size, u8 value)
 	if (WARN_ON(size & KASAN_GRANULE_MASK))
 		return;
 
-	hw_set_mem_tag_range((void *)addr, size, value, false);
+	hw_set_mem_tag_range((void *)addr, size, value, init);
 }
 
-static inline void kasan_unpoison(const void *addr, size_t size)
+static inline void kasan_unpoison(const void *addr, size_t size, bool init)
 {
 	u8 tag = get_tag(addr);
 
@@ -361,7 +361,7 @@ static inline void kasan_unpoison(const void *addr, size_t size)
 		return;
 	size = round_up(size, KASAN_GRANULE_SIZE);
 
-	hw_set_mem_tag_range((void *)addr, size, tag, false);
+	hw_set_mem_tag_range((void *)addr, size, tag, init);
 }
 
 static inline bool kasan_byte_accessible(const void *addr)
@@ -380,22 +380,24 @@ static inline bool kasan_byte_accessible(const void *addr)
  * @addr - range start address, must be aligned to KASAN_GRANULE_SIZE
  * @size - range size, must be aligned to KASAN_GRANULE_SIZE
  * @value - value that's written to metadata for the range
+ * @init - whether to initialize the memory range (only for hardware tag-based)
  *
  * The size gets aligned to KASAN_GRANULE_SIZE before marking the range.
  */
-void kasan_poison(const void *addr, size_t size, u8 value);
+void kasan_poison(const void *addr, size_t size, u8 value, bool init);
 
 /**
  * kasan_unpoison - mark the memory range as accessible
  * @addr - range start address, must be aligned to KASAN_GRANULE_SIZE
  * @size - range size, can be unaligned
+ * @init - whether to initialize the memory range (only for hardware tag-based)
  *
  * For the tag-based modes, the @size gets aligned to KASAN_GRANULE_SIZE before
  * marking the range.
  * For the generic mode, the last granule of the memory range gets partially
  * unpoisoned based on the @size.
  */
-void kasan_unpoison(const void *addr, size_t size);
+void kasan_unpoison(const void *addr, size_t size, bool init);
 
 bool kasan_byte_accessible(const void *addr);
 
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index 63f43443f5d7..727ad4629173 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -69,7 +69,7 @@ void *memcpy(void *dest, const void *src, size_t len)
 	return __memcpy(dest, src, len);
 }
 
-void kasan_poison(const void *addr, size_t size, u8 value)
+void kasan_poison(const void *addr, size_t size, u8 value, bool init)
 {
 	void *shadow_start, *shadow_end;
 
@@ -106,7 +106,7 @@ void kasan_poison_last_granule(const void *addr, size_t size)
 }
 #endif
 
-void kasan_unpoison(const void *addr, size_t size)
+void kasan_unpoison(const void *addr, size_t size, bool init)
 {
 	u8 tag = get_tag(addr);
 
@@ -129,7 +129,7 @@ void kasan_unpoison(const void *addr, size_t size)
 		return;
 
 	/* Unpoison all granules that cover the object. */
-	kasan_poison(addr, round_up(size, KASAN_GRANULE_SIZE), tag);
+	kasan_poison(addr, round_up(size, KASAN_GRANULE_SIZE), tag, false);
 
 	/* Partially poison the last granule for the generic mode. */
 	if (IS_ENABLED(CONFIG_KASAN_GENERIC))
@@ -344,7 +344,7 @@ void kasan_poison_vmalloc(const void *start, unsigned long size)
 		return;
 
 	size = round_up(size, KASAN_GRANULE_SIZE);
-	kasan_poison(start, size, KASAN_VMALLOC_INVALID);
+	kasan_poison(start, size, KASAN_VMALLOC_INVALID, false);
 }
 
 void kasan_unpoison_vmalloc(const void *start, unsigned long size)
@@ -352,7 +352,7 @@ void kasan_unpoison_vmalloc(const void *start, unsigned long size)
 	if (!is_vmalloc_or_module_addr(start))
 		return;
 
-	kasan_unpoison(start, size);
+	kasan_unpoison(start, size, false);
 }
 
 static int kasan_depopulate_vmalloc_pte(pte_t *ptep, unsigned long addr,
diff --git a/mm/kasan/sw_tags.c b/mm/kasan/sw_tags.c
index 94c2d33be333..bd0c64d4e4d9 100644
--- a/mm/kasan/sw_tags.c
+++ b/mm/kasan/sw_tags.c
@@ -159,7 +159,7 @@ EXPORT_SYMBOL(__hwasan_storeN_noabort);
 
 void __hwasan_tag_memory(unsigned long addr, u8 tag, unsigned long size)
 {
-	kasan_poison((void *)addr, size, tag);
+	kasan_poison((void *)addr, size, tag, false);
 }
 EXPORT_SYMBOL(__hwasan_tag_memory);
 
-- 
2.30.1.766.gb4fecdf3b7-goog


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH 3/5] kasan, mm: integrate page_alloc init with HW_TAGS
  2021-03-06  0:15 [PATCH 0/5] kasan: integrate with init_on_alloc/free Andrey Konovalov
  2021-03-06  0:15 ` [PATCH 1/5] arm64: kasan: allow to init memory when setting tags Andrey Konovalov
  2021-03-06  0:15 ` [PATCH 2/5] kasan: init memory in kasan_(un)poison for HW_TAGS Andrey Konovalov
@ 2021-03-06  0:15 ` Andrey Konovalov
  2021-03-08 11:35   ` Marco Elver
  2021-03-06  0:15 ` [PATCH 4/5] kasan, mm: integrate slab init_on_alloc " Andrey Konovalov
  2021-03-06  0:15 ` [PATCH 5/5] kasan, mm: integrate slab init_on_free " Andrey Konovalov
  4 siblings, 1 reply; 14+ messages in thread
From: Andrey Konovalov @ 2021-03-06  0:15 UTC (permalink / raw)
  To: Alexander Potapenko
  Cc: Andrew Morton, Catalin Marinas, Will Deacon, Vincenzo Frascino,
	Dmitry Vyukov, Andrey Ryabinin, Marco Elver, Peter Collingbourne,
	Evgenii Stepanov, Branislav Rankov, Kevin Brodsky, kasan-dev,
	linux-arm-kernel, linux-mm, linux-kernel, Andrey Konovalov

This change uses the previously added memory initialization feature
of HW_TAGS KASAN routines for page_alloc memory when init_on_alloc/free
is enabled.

With this change, kernel_init_free_pages() is no longer called when
both HW_TAGS KASAN and init_on_alloc/free are enabled. Instead, memory
is initialized by the KASAN runtime.

To avoid future changes causing discrepancies in which memory gets
initialized, the KASAN and kernel_init_free_pages() hooks are kept
together and a warning comment is added.

This patch changes the order in which memory initialization and page
poisoning hooks are called. This doesn't lead to any side-effects, as
whenever page poisoning is enabled, memory initialization gets disabled.

Combining setting allocation tags with memory initialization improves
HW_TAGS KASAN performance when init_on_alloc/free is enabled.
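
The resulting free path, condensed from the free_pages_prepare() hunk below
(sketch only, not a separate helper in the patch): the init decision is
computed once and shared, so the memset-based path and the KASAN hook cannot
disagree about which pages get initialized.

  static void free_path_sketch(struct page *page, unsigned int order,
                               fpi_t fpi_flags)
  {
          bool init = want_init_on_free();

          /* memset()-based zeroing only when HW_TAGS KASAN can't fold it in. */
          if (init && !IS_ENABLED(CONFIG_KASAN_HW_TAGS))
                  kernel_init_free_pages(page, 1 << order);

          /* Poisons (retags) the pages and, under HW_TAGS, zeroes them too. */
          kasan_free_nondeferred_pages(page, order, init, fpi_flags);
  }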

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 include/linux/kasan.h | 16 ++++++++--------
 mm/kasan/common.c     |  8 ++++----
 mm/mempool.c          |  4 ++--
 mm/page_alloc.c       | 37 ++++++++++++++++++++++++++-----------
 4 files changed, 40 insertions(+), 25 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 1d89b8175027..4c0f414a893b 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -120,20 +120,20 @@ static __always_inline void kasan_unpoison_range(const void *addr, size_t size)
 		__kasan_unpoison_range(addr, size);
 }
 
-void __kasan_alloc_pages(struct page *page, unsigned int order);
+void __kasan_alloc_pages(struct page *page, unsigned int order, bool init);
 static __always_inline void kasan_alloc_pages(struct page *page,
-						unsigned int order)
+						unsigned int order, bool init)
 {
 	if (kasan_enabled())
-		__kasan_alloc_pages(page, order);
+		__kasan_alloc_pages(page, order, init);
 }
 
-void __kasan_free_pages(struct page *page, unsigned int order);
+void __kasan_free_pages(struct page *page, unsigned int order, bool init);
 static __always_inline void kasan_free_pages(struct page *page,
-						unsigned int order)
+						unsigned int order, bool init)
 {
 	if (kasan_enabled())
-		__kasan_free_pages(page, order);
+		__kasan_free_pages(page, order, init);
 }
 
 void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
@@ -282,8 +282,8 @@ static inline slab_flags_t kasan_never_merge(void)
 	return 0;
 }
 static inline void kasan_unpoison_range(const void *address, size_t size) {}
-static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
-static inline void kasan_free_pages(struct page *page, unsigned int order) {}
+static inline void kasan_alloc_pages(struct page *page, unsigned int order, bool init) {}
+static inline void kasan_free_pages(struct page *page, unsigned int order, bool init) {}
 static inline void kasan_cache_create(struct kmem_cache *cache,
 				      unsigned int *size,
 				      slab_flags_t *flags) {}
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 316f7f8cd8e6..6107c795611f 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -97,7 +97,7 @@ slab_flags_t __kasan_never_merge(void)
 	return 0;
 }
 
-void __kasan_alloc_pages(struct page *page, unsigned int order)
+void __kasan_alloc_pages(struct page *page, unsigned int order, bool init)
 {
 	u8 tag;
 	unsigned long i;
@@ -108,14 +108,14 @@ void __kasan_alloc_pages(struct page *page, unsigned int order)
 	tag = kasan_random_tag();
 	for (i = 0; i < (1 << order); i++)
 		page_kasan_tag_set(page + i, tag);
-	kasan_unpoison(page_address(page), PAGE_SIZE << order, false);
+	kasan_unpoison(page_address(page), PAGE_SIZE << order, init);
 }
 
-void __kasan_free_pages(struct page *page, unsigned int order)
+void __kasan_free_pages(struct page *page, unsigned int order, bool init)
 {
 	if (likely(!PageHighMem(page)))
 		kasan_poison(page_address(page), PAGE_SIZE << order,
-			     KASAN_FREE_PAGE, false);
+			     KASAN_FREE_PAGE, init);
 }
 
 /*
diff --git a/mm/mempool.c b/mm/mempool.c
index 79959fac27d7..fe19d290a301 100644
--- a/mm/mempool.c
+++ b/mm/mempool.c
@@ -106,7 +106,7 @@ static __always_inline void kasan_poison_element(mempool_t *pool, void *element)
 	if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc)
 		kasan_slab_free_mempool(element);
 	else if (pool->alloc == mempool_alloc_pages)
-		kasan_free_pages(element, (unsigned long)pool->pool_data);
+		kasan_free_pages(element, (unsigned long)pool->pool_data, false);
 }
 
 static void kasan_unpoison_element(mempool_t *pool, void *element)
@@ -114,7 +114,7 @@ static void kasan_unpoison_element(mempool_t *pool, void *element)
 	if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc)
 		kasan_unpoison_range(element, __ksize(element));
 	else if (pool->alloc == mempool_alloc_pages)
-		kasan_alloc_pages(element, (unsigned long)pool->pool_data);
+		kasan_alloc_pages(element, (unsigned long)pool->pool_data, false);
 }
 
 static __always_inline void add_element(mempool_t *pool, void *element)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0efb07b5907c..175bdb36d113 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -396,14 +396,14 @@ static DEFINE_STATIC_KEY_TRUE(deferred_pages);
  * initialization is done, but this is not likely to happen.
  */
 static inline void kasan_free_nondeferred_pages(struct page *page, int order,
-							fpi_t fpi_flags)
+						bool init, fpi_t fpi_flags)
 {
 	if (static_branch_unlikely(&deferred_pages))
 		return;
 	if (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
 			(fpi_flags & FPI_SKIP_KASAN_POISON))
 		return;
-	kasan_free_pages(page, order);
+	kasan_free_pages(page, order, init);
 }
 
 /* Returns true if the struct page for the pfn is uninitialised */
@@ -455,12 +455,12 @@ defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
 }
 #else
 static inline void kasan_free_nondeferred_pages(struct page *page, int order,
-							fpi_t fpi_flags)
+						bool init, fpi_t fpi_flags)
 {
 	if (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
 			(fpi_flags & FPI_SKIP_KASAN_POISON))
 		return;
-	kasan_free_pages(page, order);
+	kasan_free_pages(page, order, init);
 }
 
 static inline bool early_page_uninitialised(unsigned long pfn)
@@ -1242,6 +1242,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 			unsigned int order, bool check_free, fpi_t fpi_flags)
 {
 	int bad = 0;
+	bool init;
 
 	VM_BUG_ON_PAGE(PageTail(page), page);
 
@@ -1299,16 +1300,21 @@ static __always_inline bool free_pages_prepare(struct page *page,
 		debug_check_no_obj_freed(page_address(page),
 					   PAGE_SIZE << order);
 	}
-	if (want_init_on_free())
-		kernel_init_free_pages(page, 1 << order);
 
 	kernel_poison_pages(page, 1 << order);
 
 	/*
+	 * As memory initialization is integrated with hardware tag-based
+	 * KASAN, kasan_free_pages and kernel_init_free_pages must be
+	 * kept together to avoid discrepancies in behavior.
+	 *
 	 * With hardware tag-based KASAN, memory tags must be set before the
 	 * page becomes unavailable via debug_pagealloc or arch_free_page.
 	 */
-	kasan_free_nondeferred_pages(page, order, fpi_flags);
+	init = want_init_on_free();
+	if (init && !IS_ENABLED(CONFIG_KASAN_HW_TAGS))
+		kernel_init_free_pages(page, 1 << order);
+	kasan_free_nondeferred_pages(page, order, init, fpi_flags);
 
 	/*
 	 * arch_free_page() can make the page's contents inaccessible.  s390
@@ -2315,17 +2321,26 @@ static bool check_new_pages(struct page *page, unsigned int order)
 inline void post_alloc_hook(struct page *page, unsigned int order,
 				gfp_t gfp_flags)
 {
+	bool init;
+
 	set_page_private(page, 0);
 	set_page_refcounted(page);
 
 	arch_alloc_page(page, order);
 	debug_pagealloc_map_pages(page, 1 << order);
-	kasan_alloc_pages(page, order);
-	kernel_unpoison_pages(page, 1 << order);
-	set_page_owner(page, order, gfp_flags);
 
-	if (!want_init_on_free() && want_init_on_alloc(gfp_flags))
+	/*
+	 * As memory initialization is integrated with hardware tag-based
+	 * KASAN, kasan_alloc_pages and kernel_init_free_pages must be
+	 * kept together to avoid discrepancies in behavior.
+	 */
+	init = !want_init_on_free() && want_init_on_alloc(gfp_flags);
+	kasan_alloc_pages(page, order, init);
+	if (init && !IS_ENABLED(CONFIG_KASAN_HW_TAGS))
 		kernel_init_free_pages(page, 1 << order);
+
+	kernel_unpoison_pages(page, 1 << order);
+	set_page_owner(page, order, gfp_flags);
 }
 
 static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
-- 
2.30.1.766.gb4fecdf3b7-goog


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH 4/5] kasan, mm: integrate slab init_on_alloc with HW_TAGS
  2021-03-06  0:15 [PATCH 0/5] kasan: integrate with init_on_alloc/free Andrey Konovalov
                   ` (2 preceding siblings ...)
  2021-03-06  0:15 ` [PATCH 3/5] kasan, mm: integrate page_alloc init with HW_TAGS Andrey Konovalov
@ 2021-03-06  0:15 ` Andrey Konovalov
  2021-03-08 11:36   ` Marco Elver
  2021-03-06  0:15 ` [PATCH 5/5] kasan, mm: integrate slab init_on_free " Andrey Konovalov
  4 siblings, 1 reply; 14+ messages in thread
From: Andrey Konovalov @ 2021-03-06  0:15 UTC (permalink / raw)
  To: Alexander Potapenko
  Cc: Andrew Morton, Catalin Marinas, Will Deacon, Vincenzo Frascino,
	Dmitry Vyukov, Andrey Ryabinin, Marco Elver, Peter Collingbourne,
	Evgenii Stepanov, Branislav Rankov, Kevin Brodsky, kasan-dev,
	linux-arm-kernel, linux-mm, linux-kernel, Andrey Konovalov

This change uses the previously added memory initialization feature
of HW_TAGS KASAN routines for slab memory when init_on_alloc is enabled.

With this change, the memory initialization memset() is no longer called
when both HW_TAGS KASAN and init_on_alloc are enabled. Instead, memory
is initialized by the KASAN runtime.

The memory initialization memset() is moved into slab_post_alloc_hook(),
which currently directly follows the initialization loop. A new argument
is added to slab_post_alloc_hook() that indicates whether to initialize
the memory.

To avoid future changes causing discrepancies in which memory gets
initialized, the KASAN hook and the initialization memset() are kept
together and a warning comment is added.

Combining setting allocation tags with memory initialization improves
HW_TAGS KASAN performance when init_on_alloc is enabled.
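
The core of the new slab_post_alloc_hook() loop, restated here in condensed
form for clarity (the full hunk is below): KASAN runs first because it may
retag the pointer, and the memset() only fires when HW_TAGS cannot do the
initialization.

  for (i = 0; i < size; i++) {
          /* kasan_slab_alloc() may retag p[i], so it has to run first. */
          p[i] = kasan_slab_alloc(s, p[i], flags, init);
          /* memset()-based init only when HW_TAGS can't do the zeroing. */
          if (p[i] && init && !IS_ENABLED(CONFIG_KASAN_HW_TAGS))
                  memset(p[i], 0, s->object_size);
          kmemleak_alloc_recursive(p[i], s->object_size, 1, s->flags, flags);
  }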

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 include/linux/kasan.h |  8 ++++----
 mm/kasan/common.c     |  4 ++--
 mm/slab.c             | 28 +++++++++++++---------------
 mm/slab.h             | 17 +++++++++++++----
 mm/slub.c             | 27 +++++++++++----------------
 5 files changed, 43 insertions(+), 41 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 4c0f414a893b..bb756f6c73b5 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -216,12 +216,12 @@ static __always_inline void kasan_slab_free_mempool(void *ptr)
 }
 
 void * __must_check __kasan_slab_alloc(struct kmem_cache *s,
-				       void *object, gfp_t flags);
+				       void *object, gfp_t flags, bool init);
 static __always_inline void * __must_check kasan_slab_alloc(
-				struct kmem_cache *s, void *object, gfp_t flags)
+		struct kmem_cache *s, void *object, gfp_t flags, bool init)
 {
 	if (kasan_enabled())
-		return __kasan_slab_alloc(s, object, flags);
+		return __kasan_slab_alloc(s, object, flags, init);
 	return object;
 }
 
@@ -306,7 +306,7 @@ static inline bool kasan_slab_free(struct kmem_cache *s, void *object)
 static inline void kasan_kfree_large(void *ptr) {}
 static inline void kasan_slab_free_mempool(void *ptr) {}
 static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
-				   gfp_t flags)
+				   gfp_t flags, bool init)
 {
 	return object;
 }
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 6107c795611f..7ea747b18c26 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -428,7 +428,7 @@ static void set_alloc_info(struct kmem_cache *cache, void *object,
 }
 
 void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
-					void *object, gfp_t flags)
+					void *object, gfp_t flags, bool init)
 {
 	u8 tag;
 	void *tagged_object;
@@ -453,7 +453,7 @@ void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
 	 * Unpoison the whole object.
 	 * For kmalloc() allocations, kasan_kmalloc() will do precise poisoning.
 	 */
-	kasan_unpoison(tagged_object, cache->object_size, false);
+	kasan_unpoison(tagged_object, cache->object_size, init);
 
 	/* Save alloc info (if possible) for non-kmalloc() allocations. */
 	if (kasan_stack_collection_enabled())
diff --git a/mm/slab.c b/mm/slab.c
index 51fd424e0d6d..936dd686dec9 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3216,6 +3216,7 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_
 	void *ptr;
 	int slab_node = numa_mem_id();
 	struct obj_cgroup *objcg = NULL;
+	bool init = false;
 
 	flags &= gfp_allowed_mask;
 	cachep = slab_pre_alloc_hook(cachep, &objcg, 1, flags);
@@ -3254,12 +3255,10 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_
   out:
 	local_irq_restore(save_flags);
 	ptr = cache_alloc_debugcheck_after(cachep, flags, ptr, caller);
-
-	if (unlikely(slab_want_init_on_alloc(flags, cachep)) && ptr)
-		memset(ptr, 0, cachep->object_size);
+	init = slab_want_init_on_alloc(flags, cachep);
 
 out_hooks:
-	slab_post_alloc_hook(cachep, objcg, flags, 1, &ptr);
+	slab_post_alloc_hook(cachep, objcg, flags, 1, &ptr, init);
 	return ptr;
 }
 
@@ -3301,6 +3300,7 @@ slab_alloc(struct kmem_cache *cachep, gfp_t flags, size_t orig_size, unsigned lo
 	unsigned long save_flags;
 	void *objp;
 	struct obj_cgroup *objcg = NULL;
+	bool init = false;
 
 	flags &= gfp_allowed_mask;
 	cachep = slab_pre_alloc_hook(cachep, &objcg, 1, flags);
@@ -3317,12 +3317,10 @@ slab_alloc(struct kmem_cache *cachep, gfp_t flags, size_t orig_size, unsigned lo
 	local_irq_restore(save_flags);
 	objp = cache_alloc_debugcheck_after(cachep, flags, objp, caller);
 	prefetchw(objp);
-
-	if (unlikely(slab_want_init_on_alloc(flags, cachep)) && objp)
-		memset(objp, 0, cachep->object_size);
+	init = slab_want_init_on_alloc(flags, cachep);
 
 out:
-	slab_post_alloc_hook(cachep, objcg, flags, 1, &objp);
+	slab_post_alloc_hook(cachep, objcg, flags, 1, &objp, init);
 	return objp;
 }
 
@@ -3542,18 +3540,18 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 
 	cache_alloc_debugcheck_after_bulk(s, flags, size, p, _RET_IP_);
 
-	/* Clear memory outside IRQ disabled section */
-	if (unlikely(slab_want_init_on_alloc(flags, s)))
-		for (i = 0; i < size; i++)
-			memset(p[i], 0, s->object_size);
-
-	slab_post_alloc_hook(s, objcg, flags, size, p);
+	/*
+	 * memcg and kmem_cache debug support and memory initialization.
+	 * Done outside of the IRQ disabled section.
+	 */
+	slab_post_alloc_hook(s, objcg, flags, size, p,
+				slab_want_init_on_alloc(flags, s));
 	/* FIXME: Trace call missing. Christoph would like a bulk variant */
 	return size;
 error:
 	local_irq_enable();
 	cache_alloc_debugcheck_after_bulk(s, flags, i, p, _RET_IP_);
-	slab_post_alloc_hook(s, objcg, flags, i, p);
+	slab_post_alloc_hook(s, objcg, flags, i, p, false);
 	__kmem_cache_free_bulk(s, i, p);
 	return 0;
 }
diff --git a/mm/slab.h b/mm/slab.h
index 076582f58f68..0116a314cd21 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -506,15 +506,24 @@ static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
 }
 
 static inline void slab_post_alloc_hook(struct kmem_cache *s,
-					struct obj_cgroup *objcg,
-					gfp_t flags, size_t size, void **p)
+					struct obj_cgroup *objcg, gfp_t flags,
+					size_t size, void **p, bool init)
 {
 	size_t i;
 
 	flags &= gfp_allowed_mask;
+
+	/*
+	 * As memory initialization is integrated with hardware tag-based
+	 * KASAN, kasan_slab_alloc and initialization memset must be
+	 * kept together to avoid discrepancies in behavior.
+	 *
+	 * As p[i] might get tagged, memset and kmemleak hook come after KASAN.
+	 */
 	for (i = 0; i < size; i++) {
-		p[i] = kasan_slab_alloc(s, p[i], flags);
-		/* As p[i] might get tagged, call kmemleak hook after KASAN. */
+		p[i] = kasan_slab_alloc(s, p[i], flags, init);
+		if (p[i] && init && !IS_ENABLED(CONFIG_KASAN_HW_TAGS))
+			memset(p[i], 0, s->object_size);
 		kmemleak_alloc_recursive(p[i], s->object_size, 1,
 					 s->flags, flags);
 	}
diff --git a/mm/slub.c b/mm/slub.c
index e26c274b4657..f53df23760e3 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2822,6 +2822,7 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
 	struct page *page;
 	unsigned long tid;
 	struct obj_cgroup *objcg = NULL;
+	bool init = false;
 
 	s = slab_pre_alloc_hook(s, &objcg, 1, gfpflags);
 	if (!s)
@@ -2899,12 +2900,10 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
 	}
 
 	maybe_wipe_obj_freeptr(s, object);
-
-	if (unlikely(slab_want_init_on_alloc(gfpflags, s)) && object)
-		memset(kasan_reset_tag(object), 0, s->object_size);
+	init = slab_want_init_on_alloc(gfpflags, s);
 
 out:
-	slab_post_alloc_hook(s, objcg, gfpflags, 1, &object);
+	slab_post_alloc_hook(s, objcg, gfpflags, 1, &object, init);
 
 	return object;
 }
@@ -3356,20 +3355,16 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	c->tid = next_tid(c->tid);
 	local_irq_enable();
 
-	/* Clear memory outside IRQ disabled fastpath loop */
-	if (unlikely(slab_want_init_on_alloc(flags, s))) {
-		int j;
-
-		for (j = 0; j < i; j++)
-			memset(kasan_reset_tag(p[j]), 0, s->object_size);
-	}
-
-	/* memcg and kmem_cache debug support */
-	slab_post_alloc_hook(s, objcg, flags, size, p);
+	/*
+	 * memcg and kmem_cache debug support and memory initialization.
+	 * Done outside of the IRQ disabled fastpath loop.
+	 */
+	slab_post_alloc_hook(s, objcg, flags, size, p,
+				slab_want_init_on_alloc(flags, s));
 	return i;
 error:
 	local_irq_enable();
-	slab_post_alloc_hook(s, objcg, flags, i, p);
+	slab_post_alloc_hook(s, objcg, flags, i, p, false);
 	__kmem_cache_free_bulk(s, i, p);
 	return 0;
 }
@@ -3579,7 +3574,7 @@ static void early_kmem_cache_node_alloc(int node)
 	init_object(kmem_cache_node, n, SLUB_RED_ACTIVE);
 	init_tracking(kmem_cache_node, n);
 #endif
-	n = kasan_slab_alloc(kmem_cache_node, n, GFP_KERNEL);
+	n = kasan_slab_alloc(kmem_cache_node, n, GFP_KERNEL, false);
 	page->freelist = get_freepointer(kmem_cache_node, n);
 	page->inuse = 1;
 	page->frozen = 0;
-- 
2.30.1.766.gb4fecdf3b7-goog


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH 5/5] kasan, mm: integrate slab init_on_free with HW_TAGS
  2021-03-06  0:15 [PATCH 0/5] kasan: integrate with init_on_alloc/free Andrey Konovalov
                   ` (3 preceding siblings ...)
  2021-03-06  0:15 ` [PATCH 4/5] kasan, mm: integrate slab init_on_alloc " Andrey Konovalov
@ 2021-03-06  0:15 ` Andrey Konovalov
  2021-03-08 11:45   ` Marco Elver
  4 siblings, 1 reply; 14+ messages in thread
From: Andrey Konovalov @ 2021-03-06  0:15 UTC (permalink / raw)
  To: Alexander Potapenko
  Cc: Andrew Morton, Catalin Marinas, Will Deacon, Vincenzo Frascino,
	Dmitry Vyukov, Andrey Ryabinin, Marco Elver, Peter Collingbourne,
	Evgenii Stepanov, Branislav Rankov, Kevin Brodsky, kasan-dev,
	linux-arm-kernel, linux-mm, linux-kernel, Andrey Konovalov

This change uses the previously added memory initialization feature
of HW_TAGS KASAN routines for slab memory when init_on_free is enabled.

With this change, the memory initialization memset() is no longer called
when both HW_TAGS KASAN and init_on_free are enabled. Instead, memory
is initialized by the KASAN runtime.

For SLUB, the memory initialization memset() is moved into
slab_free_hook(), which currently directly follows the initialization
loop. A new argument is added to slab_free_hook() that indicates whether
to initialize the memory.

To avoid future changes causing discrepancies in which memory gets
initialized, the KASAN hook and the initialization memset() are kept
together and a warning comment is added.

Combining setting allocation tags with memory initialization improves
HW_TAGS KASAN performance when init_on_free is enabled.
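
Condensed sketch of the resulting slab_free_hook() (the kmemleak, KCSAN and
debug handling is omitted; see the full hunk below): the object memset() is
skipped when HW_TAGS KASAN performs the zeroing as part of poisoning, while
the out-of-object metadata is always cleared in software.

  static __always_inline bool slab_free_hook_sketch(struct kmem_cache *s,
                                                    void *x, bool init)
  {
          if (init) {
                  int rsize;

                  /* Zero the object itself only when HW_TAGS can't do it. */
                  if (!IS_ENABLED(CONFIG_KASAN_HW_TAGS))
                          memset(kasan_reset_tag(x), 0, s->object_size);
                  /* The metadata past the object is always cleared here. */
                  rsize = (s->flags & SLAB_RED_ZONE) ? s->red_left_pad : 0;
                  memset((char *)kasan_reset_tag(x) + s->inuse, 0,
                         s->size - s->inuse - rsize);
          }
          /* KASAN poisons the object and, with init, zeroes it under HW_TAGS. */
          return kasan_slab_free(s, x, init);
  }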

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 include/linux/kasan.h | 10 ++++++----
 mm/kasan/common.c     | 13 +++++++------
 mm/slab.c             | 15 +++++++++++----
 mm/slub.c             | 43 ++++++++++++++++++++++++-------------------
 4 files changed, 48 insertions(+), 33 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index bb756f6c73b5..1df0f7f0b493 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -193,11 +193,13 @@ static __always_inline void * __must_check kasan_init_slab_obj(
 	return (void *)object;
 }
 
-bool __kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip);
-static __always_inline bool kasan_slab_free(struct kmem_cache *s, void *object)
+bool __kasan_slab_free(struct kmem_cache *s, void *object,
+			unsigned long ip, bool init);
+static __always_inline bool kasan_slab_free(struct kmem_cache *s,
+						void *object, bool init)
 {
 	if (kasan_enabled())
-		return __kasan_slab_free(s, object, _RET_IP_);
+		return __kasan_slab_free(s, object, _RET_IP_, init);
 	return false;
 }
 
@@ -299,7 +301,7 @@ static inline void *kasan_init_slab_obj(struct kmem_cache *cache,
 {
 	return (void *)object;
 }
-static inline bool kasan_slab_free(struct kmem_cache *s, void *object)
+static inline bool kasan_slab_free(struct kmem_cache *s, void *object, bool init)
 {
 	return false;
 }
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 7ea747b18c26..623cf94288a2 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -322,8 +322,8 @@ void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
 	return (void *)object;
 }
 
-static inline bool ____kasan_slab_free(struct kmem_cache *cache,
-				void *object, unsigned long ip, bool quarantine)
+static inline bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
+				unsigned long ip, bool quarantine, bool init)
 {
 	u8 tag;
 	void *tagged_object;
@@ -351,7 +351,7 @@ static inline bool ____kasan_slab_free(struct kmem_cache *cache,
 	}
 
 	kasan_poison(object, round_up(cache->object_size, KASAN_GRANULE_SIZE),
-			KASAN_KMALLOC_FREE, false);
+			KASAN_KMALLOC_FREE, init);
 
 	if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine))
 		return false;
@@ -362,9 +362,10 @@ static inline bool ____kasan_slab_free(struct kmem_cache *cache,
 	return kasan_quarantine_put(cache, object);
 }
 
-bool __kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
+bool __kasan_slab_free(struct kmem_cache *cache, void *object,
+				unsigned long ip, bool init)
 {
-	return ____kasan_slab_free(cache, object, ip, true);
+	return ____kasan_slab_free(cache, object, ip, true, init);
 }
 
 static inline bool ____kasan_kfree_large(void *ptr, unsigned long ip)
@@ -409,7 +410,7 @@ void __kasan_slab_free_mempool(void *ptr, unsigned long ip)
 			return;
 		kasan_poison(ptr, page_size(page), KASAN_FREE_PAGE, false);
 	} else {
-		____kasan_slab_free(page->slab_cache, ptr, ip, false);
+		____kasan_slab_free(page->slab_cache, ptr, ip, false, false);
 	}
 }
 
diff --git a/mm/slab.c b/mm/slab.c
index 936dd686dec9..d12ce9e5c3ed 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3425,17 +3425,24 @@ static void cache_flusharray(struct kmem_cache *cachep, struct array_cache *ac)
 static __always_inline void __cache_free(struct kmem_cache *cachep, void *objp,
 					 unsigned long caller)
 {
+	bool init;
+
 	if (is_kfence_address(objp)) {
 		kmemleak_free_recursive(objp, cachep->flags);
 		__kfence_free(objp);
 		return;
 	}
 
-	if (unlikely(slab_want_init_on_free(cachep)))
+	/*
+	 * As memory initialization is integrated with hardware tag-based
+	 * KASAN, kasan_slab_free and initialization memset must be
+	 * kept together to avoid discrepancies in behavior.
+	 */
+	init = slab_want_init_on_free(cachep);
+	if (init && !IS_ENABLED(CONFIG_KASAN_HW_TAGS))
 		memset(objp, 0, cachep->object_size);
-
-	/* Put the object into the quarantine, don't touch it for now. */
-	if (kasan_slab_free(cachep, objp))
+	/* KASAN might put objp into memory quarantine, delaying its reuse. */
+	if (kasan_slab_free(cachep, objp, init))
 		return;
 
 	/* Use KCSAN to help debug racy use-after-free. */
diff --git a/mm/slub.c b/mm/slub.c
index f53df23760e3..c2755670d6bd 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1532,7 +1532,8 @@ static __always_inline void kfree_hook(void *x)
 	kasan_kfree_large(x);
 }
 
-static __always_inline bool slab_free_hook(struct kmem_cache *s, void *x)
+static __always_inline bool slab_free_hook(struct kmem_cache *s,
+						void *x, bool init)
 {
 	kmemleak_free_recursive(x, s->flags);
 
@@ -1558,8 +1559,25 @@ static __always_inline bool slab_free_hook(struct kmem_cache *s, void *x)
 		__kcsan_check_access(x, s->object_size,
 				     KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ASSERT);
 
-	/* KASAN might put x into memory quarantine, delaying its reuse */
-	return kasan_slab_free(s, x);
+	/*
+	 * As memory initialization is integrated with hardware tag-based
+	 * KASAN, kasan_slab_free and initialization memset's must be
+	 * kept together to avoid discrepancies in behavior.
+	 *
+	 * The initialization memset's clear the object and the metadata,
+	 * but don't touch the SLAB redzone.
+	 */
+	if (init) {
+		int rsize;
+
+		if (!IS_ENABLED(CONFIG_KASAN_HW_TAGS))
+			memset(kasan_reset_tag(x), 0, s->object_size);
+		rsize = (s->flags & SLAB_RED_ZONE) ? s->red_left_pad : 0;
+		memset((char *)kasan_reset_tag(x) + s->inuse, 0,
+		       s->size - s->inuse - rsize);
+	}
+	/* KASAN might put x into memory quarantine, delaying its reuse. */
+	return kasan_slab_free(s, x, init);
 }
 
 static inline bool slab_free_freelist_hook(struct kmem_cache *s,
@@ -1569,10 +1587,9 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
 	void *object;
 	void *next = *head;
 	void *old_tail = *tail ? *tail : *head;
-	int rsize;
 
 	if (is_kfence_address(next)) {
-		slab_free_hook(s, next);
+		slab_free_hook(s, next, false);
 		return true;
 	}
 
@@ -1584,20 +1601,8 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
 		object = next;
 		next = get_freepointer(s, object);
 
-		if (slab_want_init_on_free(s)) {
-			/*
-			 * Clear the object and the metadata, but don't touch
-			 * the redzone.
-			 */
-			memset(kasan_reset_tag(object), 0, s->object_size);
-			rsize = (s->flags & SLAB_RED_ZONE) ? s->red_left_pad
-							   : 0;
-			memset((char *)kasan_reset_tag(object) + s->inuse, 0,
-			       s->size - s->inuse - rsize);
-
-		}
 		/* If object's reuse doesn't have to be delayed */
-		if (!slab_free_hook(s, object)) {
+		if (!slab_free_hook(s, object, slab_want_init_on_free(s))) {
 			/* Move object to the new freelist */
 			set_freepointer(s, object, *head);
 			*head = object;
@@ -3235,7 +3240,7 @@ int build_detached_freelist(struct kmem_cache *s, size_t size,
 	}
 
 	if (is_kfence_address(object)) {
-		slab_free_hook(df->s, object);
+		slab_free_hook(df->s, object, false);
 		__kfence_free(object);
 		p[size] = NULL; /* mark object processed */
 		return size;
-- 
2.30.1.766.gb4fecdf3b7-goog


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [PATCH 1/5] arm64: kasan: allow to init memory when setting tags
  2021-03-06  0:15 ` [PATCH 1/5] arm64: kasan: allow to init memory when setting tags Andrey Konovalov
@ 2021-03-08 11:22   ` Marco Elver
  0 siblings, 0 replies; 14+ messages in thread
From: Marco Elver @ 2021-03-08 11:22 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Alexander Potapenko, Andrew Morton, Catalin Marinas, Will Deacon,
	Vincenzo Frascino, Dmitry Vyukov, Andrey Ryabinin,
	Peter Collingbourne, Evgenii Stepanov, Branislav Rankov,
	Kevin Brodsky, kasan-dev, linux-arm-kernel, linux-mm,
	linux-kernel

On Sat, Mar 06, 2021 at 01:15AM +0100, Andrey Konovalov wrote:
> This change adds an argument to mte_set_mem_tag_range() that allows
> enabling memory initialization when setting the allocation tags.
> The implementation uses the stzg instruction instead of stg when this
> argument requests memory initialization.
> 
> Combining setting allocation tags with memory initialization will
> improve HW_TAGS KASAN performance when init_on_alloc/free is enabled.
> 
> This change doesn't integrate memory initialization with KASAN;
> that is done in subsequent patches in this series.
> 
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>

Acked-by: Marco Elver <elver@google.com>

> ---
>  arch/arm64/include/asm/memory.h    |  4 ++--
>  arch/arm64/include/asm/mte-kasan.h | 20 ++++++++++++++------
>  mm/kasan/kasan.h                   |  9 +++++----
>  3 files changed, 21 insertions(+), 12 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index c759faf7a1ff..f1ba48b4347d 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -248,8 +248,8 @@ static inline const void *__tag_set(const void *addr, u8 tag)
>  #define arch_init_tags(max_tag)			mte_init_tags(max_tag)
>  #define arch_get_random_tag()			mte_get_random_tag()
>  #define arch_get_mem_tag(addr)			mte_get_mem_tag(addr)
> -#define arch_set_mem_tag_range(addr, size, tag)	\
> -			mte_set_mem_tag_range((addr), (size), (tag))
> +#define arch_set_mem_tag_range(addr, size, tag, init)	\
> +			mte_set_mem_tag_range((addr), (size), (tag), (init))
>  #endif /* CONFIG_KASAN_HW_TAGS */
>  
>  /*
> diff --git a/arch/arm64/include/asm/mte-kasan.h b/arch/arm64/include/asm/mte-kasan.h
> index 7ab500e2ad17..35fe549f7ea4 100644
> --- a/arch/arm64/include/asm/mte-kasan.h
> +++ b/arch/arm64/include/asm/mte-kasan.h
> @@ -53,7 +53,8 @@ static inline u8 mte_get_random_tag(void)
>   * Note: The address must be non-NULL and MTE_GRANULE_SIZE aligned and
>   * size must be non-zero and MTE_GRANULE_SIZE aligned.
>   */
> -static inline void mte_set_mem_tag_range(void *addr, size_t size, u8 tag)
> +static inline void mte_set_mem_tag_range(void *addr, size_t size,
> +						u8 tag, bool init)
>  {
>  	u64 curr, end;
>  
> @@ -68,10 +69,16 @@ static inline void mte_set_mem_tag_range(void *addr, size_t size, u8 tag)
>  		 * 'asm volatile' is required to prevent the compiler to move
>  		 * the statement outside of the loop.
>  		 */
> -		asm volatile(__MTE_PREAMBLE "stg %0, [%0]"
> -			     :
> -			     : "r" (curr)
> -			     : "memory");
> +		if (init)
> +			asm volatile(__MTE_PREAMBLE "stzg %0, [%0]"
> +				     :
> +				     : "r" (curr)
> +				     : "memory");
> +		else
> +			asm volatile(__MTE_PREAMBLE "stg %0, [%0]"
> +				     :
> +				     : "r" (curr)
> +				     : "memory");
>  
>  		curr += MTE_GRANULE_SIZE;
>  	} while (curr != end);
> @@ -100,7 +107,8 @@ static inline u8 mte_get_random_tag(void)
>  	return 0xFF;
>  }
>  
> -static inline void mte_set_mem_tag_range(void *addr, size_t size, u8 tag)
> +static inline void mte_set_mem_tag_range(void *addr, size_t size,
> +						u8 tag, bool init)
>  {
>  }
>  
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 8c55634d6edd..7fbb32234414 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -291,7 +291,7 @@ static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
>  #define arch_get_mem_tag(addr)	(0xFF)
>  #endif
>  #ifndef arch_set_mem_tag_range
> -#define arch_set_mem_tag_range(addr, size, tag) ((void *)(addr))
> +#define arch_set_mem_tag_range(addr, size, tag, init) ((void *)(addr))
>  #endif
>  
>  #define hw_enable_tagging()			arch_enable_tagging()
> @@ -299,7 +299,8 @@ static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
>  #define hw_set_tagging_report_once(state)	arch_set_tagging_report_once(state)
>  #define hw_get_random_tag()			arch_get_random_tag()
>  #define hw_get_mem_tag(addr)			arch_get_mem_tag(addr)
> -#define hw_set_mem_tag_range(addr, size, tag)	arch_set_mem_tag_range((addr), (size), (tag))
> +#define hw_set_mem_tag_range(addr, size, tag, init) \
> +			arch_set_mem_tag_range((addr), (size), (tag), (init))
>  
>  #else /* CONFIG_KASAN_HW_TAGS */
>  
> @@ -343,7 +344,7 @@ static inline void kasan_poison(const void *addr, size_t size, u8 value)
>  	if (WARN_ON(size & KASAN_GRANULE_MASK))
>  		return;
>  
> -	hw_set_mem_tag_range((void *)addr, size, value);
> +	hw_set_mem_tag_range((void *)addr, size, value, false);
>  }
>  
>  static inline void kasan_unpoison(const void *addr, size_t size)
> @@ -360,7 +361,7 @@ static inline void kasan_unpoison(const void *addr, size_t size)
>  		return;
>  	size = round_up(size, KASAN_GRANULE_SIZE);
>  
> -	hw_set_mem_tag_range((void *)addr, size, tag);
> +	hw_set_mem_tag_range((void *)addr, size, tag, false);
>  }
>  
>  static inline bool kasan_byte_accessible(const void *addr)
> -- 
> 2.30.1.766.gb4fecdf3b7-goog
> 

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 2/5] kasan: init memory in kasan_(un)poison for HW_TAGS
  2021-03-06  0:15 ` [PATCH 2/5] kasan: init memory in kasan_(un)poison for HW_TAGS Andrey Konovalov
@ 2021-03-08 11:26   ` Marco Elver
  0 siblings, 0 replies; 14+ messages in thread
From: Marco Elver @ 2021-03-08 11:26 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Alexander Potapenko, Andrew Morton, Catalin Marinas, Will Deacon,
	Vincenzo Frascino, Dmitry Vyukov, Andrey Ryabinin,
	Peter Collingbourne, Evgenii Stepanov, Branislav Rankov,
	Kevin Brodsky, kasan-dev, linux-arm-kernel, linux-mm,
	linux-kernel

On Sat, Mar 06, 2021 at 01:15AM +0100, Andrey Konovalov wrote:
> This change adds an argument to kasan_poison() and kasan_unpoison()
> that allows initializing memory along with setting the tags for HW_TAGS.
> 
> Combining setting allocation tags with memory initialization will
> improve HW_TAGS KASAN performance when init_on_alloc/free is enabled.
> 
> This change doesn't integrate memory initialization with KASAN;
> that is done in subsequent patches in this series.
> 
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>

Reviewed-by: Marco Elver <elver@google.com>

> ---
>  lib/test_kasan.c   |  4 ++--
>  mm/kasan/common.c  | 28 ++++++++++++++--------------
>  mm/kasan/generic.c | 12 ++++++------
>  mm/kasan/kasan.h   | 14 ++++++++------
>  mm/kasan/shadow.c  | 10 +++++-----
>  mm/kasan/sw_tags.c |  2 +-
>  6 files changed, 36 insertions(+), 34 deletions(-)
> 
> diff --git a/lib/test_kasan.c b/lib/test_kasan.c
> index e5647d147b35..d77c45edc7cd 100644
> --- a/lib/test_kasan.c
> +++ b/lib/test_kasan.c
> @@ -1044,14 +1044,14 @@ static void match_all_mem_tag(struct kunit *test)
>  			continue;
>  
>  		/* Mark the first memory granule with the chosen memory tag. */
> -		kasan_poison(ptr, KASAN_GRANULE_SIZE, (u8)tag);
> +		kasan_poison(ptr, KASAN_GRANULE_SIZE, (u8)tag, false);
>  
>  		/* This access must cause a KASAN report. */
>  		KUNIT_EXPECT_KASAN_FAIL(test, *ptr = 0);
>  	}
>  
>  	/* Recover the memory tag and free. */
> -	kasan_poison(ptr, KASAN_GRANULE_SIZE, get_tag(ptr));
> +	kasan_poison(ptr, KASAN_GRANULE_SIZE, get_tag(ptr), false);
>  	kfree(ptr);
>  }
>  
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index b5e08d4cefec..316f7f8cd8e6 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -60,7 +60,7 @@ void kasan_disable_current(void)
>  
>  void __kasan_unpoison_range(const void *address, size_t size)
>  {
> -	kasan_unpoison(address, size);
> +	kasan_unpoison(address, size, false);
>  }
>  
>  #if CONFIG_KASAN_STACK
> @@ -69,7 +69,7 @@ void kasan_unpoison_task_stack(struct task_struct *task)
>  {
>  	void *base = task_stack_page(task);
>  
> -	kasan_unpoison(base, THREAD_SIZE);
> +	kasan_unpoison(base, THREAD_SIZE, false);
>  }
>  
>  /* Unpoison the stack for the current task beyond a watermark sp value. */
> @@ -82,7 +82,7 @@ asmlinkage void kasan_unpoison_task_stack_below(const void *watermark)
>  	 */
>  	void *base = (void *)((unsigned long)watermark & ~(THREAD_SIZE - 1));
>  
> -	kasan_unpoison(base, watermark - base);
> +	kasan_unpoison(base, watermark - base, false);
>  }
>  #endif /* CONFIG_KASAN_STACK */
>  
> @@ -108,14 +108,14 @@ void __kasan_alloc_pages(struct page *page, unsigned int order)
>  	tag = kasan_random_tag();
>  	for (i = 0; i < (1 << order); i++)
>  		page_kasan_tag_set(page + i, tag);
> -	kasan_unpoison(page_address(page), PAGE_SIZE << order);
> +	kasan_unpoison(page_address(page), PAGE_SIZE << order, false);
>  }
>  
>  void __kasan_free_pages(struct page *page, unsigned int order)
>  {
>  	if (likely(!PageHighMem(page)))
>  		kasan_poison(page_address(page), PAGE_SIZE << order,
> -			     KASAN_FREE_PAGE);
> +			     KASAN_FREE_PAGE, false);
>  }
>  
>  /*
> @@ -251,18 +251,18 @@ void __kasan_poison_slab(struct page *page)
>  	for (i = 0; i < compound_nr(page); i++)
>  		page_kasan_tag_reset(page + i);
>  	kasan_poison(page_address(page), page_size(page),
> -		     KASAN_KMALLOC_REDZONE);
> +		     KASAN_KMALLOC_REDZONE, false);
>  }
>  
>  void __kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
>  {
> -	kasan_unpoison(object, cache->object_size);
> +	kasan_unpoison(object, cache->object_size, false);
>  }
>  
>  void __kasan_poison_object_data(struct kmem_cache *cache, void *object)
>  {
>  	kasan_poison(object, round_up(cache->object_size, KASAN_GRANULE_SIZE),
> -			KASAN_KMALLOC_REDZONE);
> +			KASAN_KMALLOC_REDZONE, false);
>  }
>  
>  /*
> @@ -351,7 +351,7 @@ static inline bool ____kasan_slab_free(struct kmem_cache *cache,
>  	}
>  
>  	kasan_poison(object, round_up(cache->object_size, KASAN_GRANULE_SIZE),
> -			KASAN_KMALLOC_FREE);
> +			KASAN_KMALLOC_FREE, false);
>  
>  	if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine))
>  		return false;
> @@ -407,7 +407,7 @@ void __kasan_slab_free_mempool(void *ptr, unsigned long ip)
>  	if (unlikely(!PageSlab(page))) {
>  		if (____kasan_kfree_large(ptr, ip))
>  			return;
> -		kasan_poison(ptr, page_size(page), KASAN_FREE_PAGE);
> +		kasan_poison(ptr, page_size(page), KASAN_FREE_PAGE, false);
>  	} else {
>  		____kasan_slab_free(page->slab_cache, ptr, ip, false);
>  	}
> @@ -453,7 +453,7 @@ void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
>  	 * Unpoison the whole object.
>  	 * For kmalloc() allocations, kasan_kmalloc() will do precise poisoning.
>  	 */
> -	kasan_unpoison(tagged_object, cache->object_size);
> +	kasan_unpoison(tagged_object, cache->object_size, false);
>  
>  	/* Save alloc info (if possible) for non-kmalloc() allocations. */
>  	if (kasan_stack_collection_enabled())
> @@ -496,7 +496,7 @@ static inline void *____kasan_kmalloc(struct kmem_cache *cache,
>  	redzone_end = round_up((unsigned long)(object + cache->object_size),
>  				KASAN_GRANULE_SIZE);
>  	kasan_poison((void *)redzone_start, redzone_end - redzone_start,
> -			   KASAN_KMALLOC_REDZONE);
> +			   KASAN_KMALLOC_REDZONE, false);
>  
>  	/*
>  	 * Save alloc info (if possible) for kmalloc() allocations.
> @@ -546,7 +546,7 @@ void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
>  				KASAN_GRANULE_SIZE);
>  	redzone_end = (unsigned long)ptr + page_size(virt_to_page(ptr));
>  	kasan_poison((void *)redzone_start, redzone_end - redzone_start,
> -		     KASAN_PAGE_REDZONE);
> +		     KASAN_PAGE_REDZONE, false);
>  
>  	return (void *)ptr;
>  }
> @@ -563,7 +563,7 @@ void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flag
>  	 * Part of it might already have been unpoisoned, but it's unknown
>  	 * how big that part is.
>  	 */
> -	kasan_unpoison(object, size);
> +	kasan_unpoison(object, size, false);
>  
>  	page = virt_to_head_page(object);
>  
> diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
> index 2e55e0f82f39..53cbf28859b5 100644
> --- a/mm/kasan/generic.c
> +++ b/mm/kasan/generic.c
> @@ -208,11 +208,11 @@ static void register_global(struct kasan_global *global)
>  {
>  	size_t aligned_size = round_up(global->size, KASAN_GRANULE_SIZE);
>  
> -	kasan_unpoison(global->beg, global->size);
> +	kasan_unpoison(global->beg, global->size, false);
>  
>  	kasan_poison(global->beg + aligned_size,
>  		     global->size_with_redzone - aligned_size,
> -		     KASAN_GLOBAL_REDZONE);
> +		     KASAN_GLOBAL_REDZONE, false);
>  }
>  
>  void __asan_register_globals(struct kasan_global *globals, size_t size)
> @@ -292,11 +292,11 @@ void __asan_alloca_poison(unsigned long addr, size_t size)
>  	WARN_ON(!IS_ALIGNED(addr, KASAN_ALLOCA_REDZONE_SIZE));
>  
>  	kasan_unpoison((const void *)(addr + rounded_down_size),
> -			size - rounded_down_size);
> +			size - rounded_down_size, false);
>  	kasan_poison(left_redzone, KASAN_ALLOCA_REDZONE_SIZE,
> -		     KASAN_ALLOCA_LEFT);
> +		     KASAN_ALLOCA_LEFT, false);
>  	kasan_poison(right_redzone, padding_size + KASAN_ALLOCA_REDZONE_SIZE,
> -		     KASAN_ALLOCA_RIGHT);
> +		     KASAN_ALLOCA_RIGHT, false);
>  }
>  EXPORT_SYMBOL(__asan_alloca_poison);
>  
> @@ -306,7 +306,7 @@ void __asan_allocas_unpoison(const void *stack_top, const void *stack_bottom)
>  	if (unlikely(!stack_top || stack_top > stack_bottom))
>  		return;
>  
> -	kasan_unpoison(stack_top, stack_bottom - stack_top);
> +	kasan_unpoison(stack_top, stack_bottom - stack_top, false);
>  }
>  EXPORT_SYMBOL(__asan_allocas_unpoison);
>  
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 7fbb32234414..823a90d6a0cd 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -331,7 +331,7 @@ static inline u8 kasan_random_tag(void) { return 0; }
>  
>  #ifdef CONFIG_KASAN_HW_TAGS
>  
> -static inline void kasan_poison(const void *addr, size_t size, u8 value)
> +static inline void kasan_poison(const void *addr, size_t size, u8 value, bool init)
>  {
>  	addr = kasan_reset_tag(addr);
>  
> @@ -344,10 +344,10 @@ static inline void kasan_poison(const void *addr, size_t size, u8 value)
>  	if (WARN_ON(size & KASAN_GRANULE_MASK))
>  		return;
>  
> -	hw_set_mem_tag_range((void *)addr, size, value, false);
> +	hw_set_mem_tag_range((void *)addr, size, value, init);
>  }
>  
> -static inline void kasan_unpoison(const void *addr, size_t size)
> +static inline void kasan_unpoison(const void *addr, size_t size, bool init)
>  {
>  	u8 tag = get_tag(addr);
>  
> @@ -361,7 +361,7 @@ static inline void kasan_unpoison(const void *addr, size_t size)
>  		return;
>  	size = round_up(size, KASAN_GRANULE_SIZE);
>  
> -	hw_set_mem_tag_range((void *)addr, size, tag, false);
> +	hw_set_mem_tag_range((void *)addr, size, tag, init);
>  }
>  
>  static inline bool kasan_byte_accessible(const void *addr)
> @@ -380,22 +380,24 @@ static inline bool kasan_byte_accessible(const void *addr)
>   * @addr - range start address, must be aligned to KASAN_GRANULE_SIZE
>   * @size - range size, must be aligned to KASAN_GRANULE_SIZE
>   * @value - value that's written to metadata for the range
> + * @init - whether to initialize the memory range (only for hardware tag-based)
>   *
>   * The size gets aligned to KASAN_GRANULE_SIZE before marking the range.
>   */
> -void kasan_poison(const void *addr, size_t size, u8 value);
> +void kasan_poison(const void *addr, size_t size, u8 value, bool init);
>  
>  /**
>   * kasan_unpoison - mark the memory range as accessible
>   * @addr - range start address, must be aligned to KASAN_GRANULE_SIZE
>   * @size - range size, can be unaligned
> + * @init - whether to initialize the memory range (only for hardware tag-based)
>   *
>   * For the tag-based modes, the @size gets aligned to KASAN_GRANULE_SIZE before
>   * marking the range.
>   * For the generic mode, the last granule of the memory range gets partially
>   * unpoisoned based on the @size.
>   */
> -void kasan_unpoison(const void *addr, size_t size);
> +void kasan_unpoison(const void *addr, size_t size, bool init);
>  
>  bool kasan_byte_accessible(const void *addr);
>  
> diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
> index 63f43443f5d7..727ad4629173 100644
> --- a/mm/kasan/shadow.c
> +++ b/mm/kasan/shadow.c
> @@ -69,7 +69,7 @@ void *memcpy(void *dest, const void *src, size_t len)
>  	return __memcpy(dest, src, len);
>  }
>  
> -void kasan_poison(const void *addr, size_t size, u8 value)
> +void kasan_poison(const void *addr, size_t size, u8 value, bool init)
>  {
>  	void *shadow_start, *shadow_end;
>  
> @@ -106,7 +106,7 @@ void kasan_poison_last_granule(const void *addr, size_t size)
>  }
>  #endif
>  
> -void kasan_unpoison(const void *addr, size_t size)
> +void kasan_unpoison(const void *addr, size_t size, bool init)
>  {
>  	u8 tag = get_tag(addr);
>  
> @@ -129,7 +129,7 @@ void kasan_unpoison(const void *addr, size_t size)
>  		return;
>  
>  	/* Unpoison all granules that cover the object. */
> -	kasan_poison(addr, round_up(size, KASAN_GRANULE_SIZE), tag);
> +	kasan_poison(addr, round_up(size, KASAN_GRANULE_SIZE), tag, false);
>  
>  	/* Partially poison the last granule for the generic mode. */
>  	if (IS_ENABLED(CONFIG_KASAN_GENERIC))
> @@ -344,7 +344,7 @@ void kasan_poison_vmalloc(const void *start, unsigned long size)
>  		return;
>  
>  	size = round_up(size, KASAN_GRANULE_SIZE);
> -	kasan_poison(start, size, KASAN_VMALLOC_INVALID);
> +	kasan_poison(start, size, KASAN_VMALLOC_INVALID, false);
>  }
>  
>  void kasan_unpoison_vmalloc(const void *start, unsigned long size)
> @@ -352,7 +352,7 @@ void kasan_unpoison_vmalloc(const void *start, unsigned long size)
>  	if (!is_vmalloc_or_module_addr(start))
>  		return;
>  
> -	kasan_unpoison(start, size);
> +	kasan_unpoison(start, size, false);
>  }
>  
>  static int kasan_depopulate_vmalloc_pte(pte_t *ptep, unsigned long addr,
> diff --git a/mm/kasan/sw_tags.c b/mm/kasan/sw_tags.c
> index 94c2d33be333..bd0c64d4e4d9 100644
> --- a/mm/kasan/sw_tags.c
> +++ b/mm/kasan/sw_tags.c
> @@ -159,7 +159,7 @@ EXPORT_SYMBOL(__hwasan_storeN_noabort);
>  
>  void __hwasan_tag_memory(unsigned long addr, u8 tag, unsigned long size)
>  {
> -	kasan_poison((void *)addr, size, tag);
> +	kasan_poison((void *)addr, size, tag, false);
>  }
>  EXPORT_SYMBOL(__hwasan_tag_memory);
>  
> -- 
> 2.30.1.766.gb4fecdf3b7-goog
> 

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 3/5] kasan, mm: integrate page_alloc init with HW_TAGS
  2021-03-06  0:15 ` [PATCH 3/5] kasan, mm: integrate page_alloc init with HW_TAGS Andrey Konovalov
@ 2021-03-08 11:35   ` Marco Elver
  2021-03-08 11:50     ` Marco Elver
  2021-03-08 14:14     ` Andrey Konovalov
  0 siblings, 2 replies; 14+ messages in thread
From: Marco Elver @ 2021-03-08 11:35 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Alexander Potapenko, Andrew Morton, Catalin Marinas, Will Deacon,
	Vincenzo Frascino, Dmitry Vyukov, Andrey Ryabinin,
	Peter Collingbourne, Evgenii Stepanov, Branislav Rankov,
	Kevin Brodsky, kasan-dev, linux-arm-kernel, linux-mm,
	linux-kernel

On Sat, Mar 06, 2021 at 01:15AM +0100, Andrey Konovalov wrote:
> This change uses the previously added memory initialization feature
> of HW_TAGS KASAN routines for page_alloc memory when init_on_alloc/free
> is enabled.
> 
> With this change, kernel_init_free_pages() is no longer called when
> both HW_TAGS KASAN and init_on_alloc/free are enabled. Instead, memory
> is initialized in KASAN runtime.
> 
> To avoid discrepancies in which memory gets initialized that can be
> caused by future changes, both KASAN and kernel_init_free_pages() hooks
> are put together and a warning comment is added.
> 
> This patch changes the order in which memory initialization and page
> poisoning hooks are called. This doesn't lead to any side-effects, as
> whenever page poisoning is enabled, memory initialization gets disabled.
> 
> Combining setting allocation tags with memory initialization improves
> HW_TAGS KASAN performance when init_on_alloc/free is enabled.
> 
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
> ---
>  include/linux/kasan.h | 16 ++++++++--------
>  mm/kasan/common.c     |  8 ++++----
>  mm/mempool.c          |  4 ++--
>  mm/page_alloc.c       | 37 ++++++++++++++++++++++++++-----------
>  4 files changed, 40 insertions(+), 25 deletions(-)
> 
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 1d89b8175027..4c0f414a893b 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -120,20 +120,20 @@ static __always_inline void kasan_unpoison_range(const void *addr, size_t size)
>  		__kasan_unpoison_range(addr, size);
>  }
>  
> -void __kasan_alloc_pages(struct page *page, unsigned int order);
> +void __kasan_alloc_pages(struct page *page, unsigned int order, bool init);
>  static __always_inline void kasan_alloc_pages(struct page *page,
> -						unsigned int order)
> +						unsigned int order, bool init)
>  {
>  	if (kasan_enabled())
> -		__kasan_alloc_pages(page, order);
> +		__kasan_alloc_pages(page, order, init);
>  }
>  
> -void __kasan_free_pages(struct page *page, unsigned int order);
> +void __kasan_free_pages(struct page *page, unsigned int order, bool init);
>  static __always_inline void kasan_free_pages(struct page *page,
> -						unsigned int order)
> +						unsigned int order, bool init)
>  {
>  	if (kasan_enabled())
> -		__kasan_free_pages(page, order);
> +		__kasan_free_pages(page, order, init);
>  }
>  
>  void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> @@ -282,8 +282,8 @@ static inline slab_flags_t kasan_never_merge(void)
>  	return 0;
>  }
>  static inline void kasan_unpoison_range(const void *address, size_t size) {}
> -static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
> -static inline void kasan_free_pages(struct page *page, unsigned int order) {}
> +static inline void kasan_alloc_pages(struct page *page, unsigned int order, bool init) {}
> +static inline void kasan_free_pages(struct page *page, unsigned int order, bool init) {}
>  static inline void kasan_cache_create(struct kmem_cache *cache,
>  				      unsigned int *size,
>  				      slab_flags_t *flags) {}
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 316f7f8cd8e6..6107c795611f 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -97,7 +97,7 @@ slab_flags_t __kasan_never_merge(void)
>  	return 0;
>  }
>  
> -void __kasan_alloc_pages(struct page *page, unsigned int order)
> +void __kasan_alloc_pages(struct page *page, unsigned int order, bool init)
>  {
>  	u8 tag;
>  	unsigned long i;
> @@ -108,14 +108,14 @@ void __kasan_alloc_pages(struct page *page, unsigned int order)
>  	tag = kasan_random_tag();
>  	for (i = 0; i < (1 << order); i++)
>  		page_kasan_tag_set(page + i, tag);
> -	kasan_unpoison(page_address(page), PAGE_SIZE << order, false);
> +	kasan_unpoison(page_address(page), PAGE_SIZE << order, init);
>  }
>  
> -void __kasan_free_pages(struct page *page, unsigned int order)
> +void __kasan_free_pages(struct page *page, unsigned int order, bool init)
>  {
>  	if (likely(!PageHighMem(page)))
>  		kasan_poison(page_address(page), PAGE_SIZE << order,
> -			     KASAN_FREE_PAGE, false);
> +			     KASAN_FREE_PAGE, init);
>  }
>  
>  /*
> diff --git a/mm/mempool.c b/mm/mempool.c
> index 79959fac27d7..fe19d290a301 100644
> --- a/mm/mempool.c
> +++ b/mm/mempool.c
> @@ -106,7 +106,7 @@ static __always_inline void kasan_poison_element(mempool_t *pool, void *element)
>  	if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc)
>  		kasan_slab_free_mempool(element);
>  	else if (pool->alloc == mempool_alloc_pages)
> -		kasan_free_pages(element, (unsigned long)pool->pool_data);
> +		kasan_free_pages(element, (unsigned long)pool->pool_data, false);
>  }
>  
>  static void kasan_unpoison_element(mempool_t *pool, void *element)
> @@ -114,7 +114,7 @@ static void kasan_unpoison_element(mempool_t *pool, void *element)
>  	if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc)
>  		kasan_unpoison_range(element, __ksize(element));
>  	else if (pool->alloc == mempool_alloc_pages)
> -		kasan_alloc_pages(element, (unsigned long)pool->pool_data);
> +		kasan_alloc_pages(element, (unsigned long)pool->pool_data, false);
>  }
>  
>  static __always_inline void add_element(mempool_t *pool, void *element)
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 0efb07b5907c..175bdb36d113 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -396,14 +396,14 @@ static DEFINE_STATIC_KEY_TRUE(deferred_pages);
>   * initialization is done, but this is not likely to happen.
>   */
>  static inline void kasan_free_nondeferred_pages(struct page *page, int order,
> -							fpi_t fpi_flags)
> +						bool init, fpi_t fpi_flags)
>  {
>  	if (static_branch_unlikely(&deferred_pages))
>  		return;
>  	if (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
>  			(fpi_flags & FPI_SKIP_KASAN_POISON))
>  		return;
> -	kasan_free_pages(page, order);
> +	kasan_free_pages(page, order, init);
>  }
>  
>  /* Returns true if the struct page for the pfn is uninitialised */
> @@ -455,12 +455,12 @@ defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
>  }
>  #else
>  static inline void kasan_free_nondeferred_pages(struct page *page, int order,
> -							fpi_t fpi_flags)
> +						bool init, fpi_t fpi_flags)
>  {
>  	if (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
>  			(fpi_flags & FPI_SKIP_KASAN_POISON))
>  		return;
> -	kasan_free_pages(page, order);
> +	kasan_free_pages(page, order, init);
>  }
>  
>  static inline bool early_page_uninitialised(unsigned long pfn)
> @@ -1242,6 +1242,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
>  			unsigned int order, bool check_free, fpi_t fpi_flags)
>  {
>  	int bad = 0;
> +	bool init;
>  
>  	VM_BUG_ON_PAGE(PageTail(page), page);
>  
> @@ -1299,16 +1300,21 @@ static __always_inline bool free_pages_prepare(struct page *page,
>  		debug_check_no_obj_freed(page_address(page),
>  					   PAGE_SIZE << order);
>  	}
> -	if (want_init_on_free())
> -		kernel_init_free_pages(page, 1 << order);
>  
>  	kernel_poison_pages(page, 1 << order);
>  
>  	/*
> +	 * As memory initialization is integrated with hardware tag-based
> +	 * KASAN, kasan_free_pages and kernel_init_free_pages must be
> +	 * kept together to avoid discrepancies in behavior.
> +	 *
>  	 * With hardware tag-based KASAN, memory tags must be set before the
>  	 * page becomes unavailable via debug_pagealloc or arch_free_page.
>  	 */
> -	kasan_free_nondeferred_pages(page, order, fpi_flags);
> +	init = want_init_on_free();
> +	if (init && !IS_ENABLED(CONFIG_KASAN_HW_TAGS))

Doing the !IS_ENABLED(CONFIG_KASAN_HW_TAGS) check is awkward, and
assumes internal knowledge of the KASAN implementation and how all
current and future architectures that support HW_TAGS work.

Could we instead add a static inline helper to <linux/kasan.h>, e.g.
kasan_supports_init() or so?

That way, these checks won't grow uncontrollably if a future
architecture implements HW_TAGS but not init.
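
For illustration, a rough sketch of the kind of helper I mean (the name and
the exact condition are placeholders, not a proposal for the final
interface):

	/* In <linux/kasan.h>; sketch only. */
	#ifdef CONFIG_KASAN_HW_TAGS
	/*
	 * HW_TAGS can initialize memory while setting the memory tags, so
	 * the common code can skip its own memset() when this returns true.
	 */
	static inline bool kasan_supports_init(void)
	{
		return kasan_enabled();
	}
	#else
	static inline bool kasan_supports_init(void)
	{
		return false;
	}
	#endif

The check above would then read "if (init && !kasan_supports_init())", and
an architecture that implements HW_TAGS without init only has to touch the
helper.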

Thanks,
-- Marco

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 4/5] kasan, mm: integrate slab init_on_alloc with HW_TAGS
  2021-03-06  0:15 ` [PATCH 4/5] kasan, mm: integrate slab init_on_alloc " Andrey Konovalov
@ 2021-03-08 11:36   ` Marco Elver
  0 siblings, 0 replies; 14+ messages in thread
From: Marco Elver @ 2021-03-08 11:36 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Alexander Potapenko, Andrew Morton, Catalin Marinas, Will Deacon,
	Vincenzo Frascino, Dmitry Vyukov, Andrey Ryabinin,
	Peter Collingbourne, Evgenii Stepanov, Branislav Rankov,
	Kevin Brodsky, kasan-dev, linux-arm-kernel, linux-mm,
	linux-kernel

On Sat, Mar 06, 2021 at 01:15AM +0100, Andrey Konovalov wrote:
> This change uses the previously added memory initialization feature
> of HW_TAGS KASAN routines for slab memory when init_on_alloc is enabled.
> 
> With this change, memory initialization memset() is no longer called
> when both HW_TAGS KASAN and init_on_alloc are enabled. Instead, memory
> is initialized in KASAN runtime.
> 
> The memory initialization memset() is moved into slab_post_alloc_hook()
> that currently directly follows the initialization loop. A new argument
> is added to slab_post_alloc_hook() that indicates whether to initialize
> the memory or not.
> 
> To avoid discrepancies in which memory gets initialized that can be
> caused by future changes, both KASAN hook and initialization memset()
> are put together and a warning comment is added.
> 
> Combining setting allocation tags with memory initialization improves
> HW_TAGS KASAN performance when init_on_alloc is enabled.
> 
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
> ---
>  include/linux/kasan.h |  8 ++++----
>  mm/kasan/common.c     |  4 ++--
>  mm/slab.c             | 28 +++++++++++++---------------
>  mm/slab.h             | 17 +++++++++++++----
>  mm/slub.c             | 27 +++++++++++----------------
>  5 files changed, 43 insertions(+), 41 deletions(-)
> 
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 4c0f414a893b..bb756f6c73b5 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -216,12 +216,12 @@ static __always_inline void kasan_slab_free_mempool(void *ptr)
>  }
>  
>  void * __must_check __kasan_slab_alloc(struct kmem_cache *s,
> -				       void *object, gfp_t flags);
> +				       void *object, gfp_t flags, bool init);
>  static __always_inline void * __must_check kasan_slab_alloc(
> -				struct kmem_cache *s, void *object, gfp_t flags)
> +		struct kmem_cache *s, void *object, gfp_t flags, bool init)
>  {
>  	if (kasan_enabled())
> -		return __kasan_slab_alloc(s, object, flags);
> +		return __kasan_slab_alloc(s, object, flags, init);
>  	return object;
>  }
>  
> @@ -306,7 +306,7 @@ static inline bool kasan_slab_free(struct kmem_cache *s, void *object)
>  static inline void kasan_kfree_large(void *ptr) {}
>  static inline void kasan_slab_free_mempool(void *ptr) {}
>  static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
> -				   gfp_t flags)
> +				   gfp_t flags, bool init)
>  {
>  	return object;
>  }
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 6107c795611f..7ea747b18c26 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -428,7 +428,7 @@ static void set_alloc_info(struct kmem_cache *cache, void *object,
>  }
>  
>  void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
> -					void *object, gfp_t flags)
> +					void *object, gfp_t flags, bool init)
>  {
>  	u8 tag;
>  	void *tagged_object;
> @@ -453,7 +453,7 @@ void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
>  	 * Unpoison the whole object.
>  	 * For kmalloc() allocations, kasan_kmalloc() will do precise poisoning.
>  	 */
> -	kasan_unpoison(tagged_object, cache->object_size, false);
> +	kasan_unpoison(tagged_object, cache->object_size, init);
>  
>  	/* Save alloc info (if possible) for non-kmalloc() allocations. */
>  	if (kasan_stack_collection_enabled())
> diff --git a/mm/slab.c b/mm/slab.c
> index 51fd424e0d6d..936dd686dec9 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -3216,6 +3216,7 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_
>  	void *ptr;
>  	int slab_node = numa_mem_id();
>  	struct obj_cgroup *objcg = NULL;
> +	bool init = false;
>  
>  	flags &= gfp_allowed_mask;
>  	cachep = slab_pre_alloc_hook(cachep, &objcg, 1, flags);
> @@ -3254,12 +3255,10 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_
>    out:
>  	local_irq_restore(save_flags);
>  	ptr = cache_alloc_debugcheck_after(cachep, flags, ptr, caller);
> -
> -	if (unlikely(slab_want_init_on_alloc(flags, cachep)) && ptr)
> -		memset(ptr, 0, cachep->object_size);
> +	init = slab_want_init_on_alloc(flags, cachep);
>  
>  out_hooks:
> -	slab_post_alloc_hook(cachep, objcg, flags, 1, &ptr);
> +	slab_post_alloc_hook(cachep, objcg, flags, 1, &ptr, init);
>  	return ptr;
>  }
>  
> @@ -3301,6 +3300,7 @@ slab_alloc(struct kmem_cache *cachep, gfp_t flags, size_t orig_size, unsigned lo
>  	unsigned long save_flags;
>  	void *objp;
>  	struct obj_cgroup *objcg = NULL;
> +	bool init = false;
>  
>  	flags &= gfp_allowed_mask;
>  	cachep = slab_pre_alloc_hook(cachep, &objcg, 1, flags);
> @@ -3317,12 +3317,10 @@ slab_alloc(struct kmem_cache *cachep, gfp_t flags, size_t orig_size, unsigned lo
>  	local_irq_restore(save_flags);
>  	objp = cache_alloc_debugcheck_after(cachep, flags, objp, caller);
>  	prefetchw(objp);
> -
> -	if (unlikely(slab_want_init_on_alloc(flags, cachep)) && objp)
> -		memset(objp, 0, cachep->object_size);
> +	init = slab_want_init_on_alloc(flags, cachep);
>  
>  out:
> -	slab_post_alloc_hook(cachep, objcg, flags, 1, &objp);
> +	slab_post_alloc_hook(cachep, objcg, flags, 1, &objp, init);
>  	return objp;
>  }
>  
> @@ -3542,18 +3540,18 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
>  
>  	cache_alloc_debugcheck_after_bulk(s, flags, size, p, _RET_IP_);
>  
> -	/* Clear memory outside IRQ disabled section */
> -	if (unlikely(slab_want_init_on_alloc(flags, s)))
> -		for (i = 0; i < size; i++)
> -			memset(p[i], 0, s->object_size);
> -
> -	slab_post_alloc_hook(s, objcg, flags, size, p);
> +	/*
> +	 * memcg and kmem_cache debug support and memory initialization.
> +	 * Done outside of the IRQ disabled section.
> +	 */
> +	slab_post_alloc_hook(s, objcg, flags, size, p,
> +				slab_want_init_on_alloc(flags, s));
>  	/* FIXME: Trace call missing. Christoph would like a bulk variant */
>  	return size;
>  error:
>  	local_irq_enable();
>  	cache_alloc_debugcheck_after_bulk(s, flags, i, p, _RET_IP_);
> -	slab_post_alloc_hook(s, objcg, flags, i, p);
> +	slab_post_alloc_hook(s, objcg, flags, i, p, false);
>  	__kmem_cache_free_bulk(s, i, p);
>  	return 0;
>  }
> diff --git a/mm/slab.h b/mm/slab.h
> index 076582f58f68..0116a314cd21 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -506,15 +506,24 @@ static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
>  }
>  
>  static inline void slab_post_alloc_hook(struct kmem_cache *s,
> -					struct obj_cgroup *objcg,
> -					gfp_t flags, size_t size, void **p)
> +					struct obj_cgroup *objcg, gfp_t flags,
> +					size_t size, void **p, bool init)
>  {
>  	size_t i;
>  
>  	flags &= gfp_allowed_mask;
> +
> +	/*
> +	 * As memory initialization is integrated with hardware tag-based
> +	 * KASAN, kasan_slab_alloc and initialization memset must be
> +	 * kept together to avoid discrepancies in behavior.
> +	 *
> +	 * As p[i] might get tagged, memset and kmemleak hook come after KASAN.
> +	 */
>  	for (i = 0; i < size; i++) {
> -		p[i] = kasan_slab_alloc(s, p[i], flags);
> -		/* As p[i] might get tagged, call kmemleak hook after KASAN. */
> +		p[i] = kasan_slab_alloc(s, p[i], flags, init);
> +		if (p[i] && init && !IS_ENABLED(CONFIG_KASAN_HW_TAGS))

Same as the other patch, it seems awkward to have
!IS_ENABLED(CONFIG_KASAN_HW_TAGS) rather than a kasan_*() helper that
returns this information.

> +			memset(p[i], 0, s->object_size);
>  		kmemleak_alloc_recursive(p[i], s->object_size, 1,
>  					 s->flags, flags);
>  	}
> diff --git a/mm/slub.c b/mm/slub.c
> index e26c274b4657..f53df23760e3 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2822,6 +2822,7 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
>  	struct page *page;
>  	unsigned long tid;
>  	struct obj_cgroup *objcg = NULL;
> +	bool init = false;
>  
>  	s = slab_pre_alloc_hook(s, &objcg, 1, gfpflags);
>  	if (!s)
> @@ -2899,12 +2900,10 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
>  	}
>  
>  	maybe_wipe_obj_freeptr(s, object);
> -
> -	if (unlikely(slab_want_init_on_alloc(gfpflags, s)) && object)
> -		memset(kasan_reset_tag(object), 0, s->object_size);
> +	init = slab_want_init_on_alloc(gfpflags, s);
>  
>  out:
> -	slab_post_alloc_hook(s, objcg, gfpflags, 1, &object);
> +	slab_post_alloc_hook(s, objcg, gfpflags, 1, &object, init);
>  
>  	return object;
>  }
> @@ -3356,20 +3355,16 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
>  	c->tid = next_tid(c->tid);
>  	local_irq_enable();
>  
> -	/* Clear memory outside IRQ disabled fastpath loop */
> -	if (unlikely(slab_want_init_on_alloc(flags, s))) {
> -		int j;
> -
> -		for (j = 0; j < i; j++)
> -			memset(kasan_reset_tag(p[j]), 0, s->object_size);
> -	}
> -
> -	/* memcg and kmem_cache debug support */
> -	slab_post_alloc_hook(s, objcg, flags, size, p);
> +	/*
> +	 * memcg and kmem_cache debug support and memory initialization.
> +	 * Done outside of the IRQ disabled fastpath loop.
> +	 */
> +	slab_post_alloc_hook(s, objcg, flags, size, p,
> +				slab_want_init_on_alloc(flags, s));
>  	return i;
>  error:
>  	local_irq_enable();
> -	slab_post_alloc_hook(s, objcg, flags, i, p);
> +	slab_post_alloc_hook(s, objcg, flags, i, p, false);
>  	__kmem_cache_free_bulk(s, i, p);
>  	return 0;
>  }
> @@ -3579,7 +3574,7 @@ static void early_kmem_cache_node_alloc(int node)
>  	init_object(kmem_cache_node, n, SLUB_RED_ACTIVE);
>  	init_tracking(kmem_cache_node, n);
>  #endif
> -	n = kasan_slab_alloc(kmem_cache_node, n, GFP_KERNEL);
> +	n = kasan_slab_alloc(kmem_cache_node, n, GFP_KERNEL, false);
>  	page->freelist = get_freepointer(kmem_cache_node, n);
>  	page->inuse = 1;
>  	page->frozen = 0;
> -- 
> 2.30.1.766.gb4fecdf3b7-goog
> 

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 5/5] kasan, mm: integrate slab init_on_free with HW_TAGS
  2021-03-06  0:15 ` [PATCH 5/5] kasan, mm: integrate slab init_on_free " Andrey Konovalov
@ 2021-03-08 11:45   ` Marco Elver
  2021-03-08 14:16     ` Andrey Konovalov
  0 siblings, 1 reply; 14+ messages in thread
From: Marco Elver @ 2021-03-08 11:45 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Alexander Potapenko, Andrew Morton, Catalin Marinas, Will Deacon,
	Vincenzo Frascino, Dmitry Vyukov, Andrey Ryabinin,
	Peter Collingbourne, Evgenii Stepanov, Branislav Rankov,
	Kevin Brodsky, kasan-dev, linux-arm-kernel, linux-mm,
	linux-kernel

On Sat, Mar 06, 2021 at 01:15AM +0100, Andrey Konovalov wrote:
> This change uses the previously added memory initialization feature
> of HW_TAGS KASAN routines for slab memory when init_on_free is enabled.
> 
> With this change, memory initialization memset() is no longer called
> when both HW_TAGS KASAN and init_on_free are enabled. Instead, memory
> is initialized in KASAN runtime.
> 
> For SLUB, the memory initialization memset() is moved into
> slab_free_hook() that currently directly follows the initialization loop.
> A new argument is added to slab_free_hook() that indicates whether to
> initialize the memory or not.
> 
> To avoid discrepancies in which memory gets initialized that can be
> caused by future changes, both KASAN hook and initialization memset()
> are put together and a warning comment is added.
> 
> Combining setting allocation tags with memory initialization improves
> HW_TAGS KASAN performance when init_on_free is enabled.
> 
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
> ---
>  include/linux/kasan.h | 10 ++++++----
>  mm/kasan/common.c     | 13 +++++++------
>  mm/slab.c             | 15 +++++++++++----
>  mm/slub.c             | 43 ++++++++++++++++++++++++-------------------
>  4 files changed, 48 insertions(+), 33 deletions(-)
> 
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index bb756f6c73b5..1df0f7f0b493 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -193,11 +193,13 @@ static __always_inline void * __must_check kasan_init_slab_obj(
>  	return (void *)object;
>  }
>  
> -bool __kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip);
> -static __always_inline bool kasan_slab_free(struct kmem_cache *s, void *object)
> +bool __kasan_slab_free(struct kmem_cache *s, void *object,
> +			unsigned long ip, bool init);
> +static __always_inline bool kasan_slab_free(struct kmem_cache *s,
> +						void *object, bool init)
>  {
>  	if (kasan_enabled())
> -		return __kasan_slab_free(s, object, _RET_IP_);
> +		return __kasan_slab_free(s, object, _RET_IP_, init);
>  	return false;
>  }
>  
> @@ -299,7 +301,7 @@ static inline void *kasan_init_slab_obj(struct kmem_cache *cache,
>  {
>  	return (void *)object;
>  }
> -static inline bool kasan_slab_free(struct kmem_cache *s, void *object)
> +static inline bool kasan_slab_free(struct kmem_cache *s, void *object, bool init)
>  {
>  	return false;
>  }
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 7ea747b18c26..623cf94288a2 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -322,8 +322,8 @@ void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
>  	return (void *)object;
>  }
>  
> -static inline bool ____kasan_slab_free(struct kmem_cache *cache,
> -				void *object, unsigned long ip, bool quarantine)
> +static inline bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
> +				unsigned long ip, bool quarantine, bool init)
>  {
>  	u8 tag;
>  	void *tagged_object;
> @@ -351,7 +351,7 @@ static inline bool ____kasan_slab_free(struct kmem_cache *cache,
>  	}
>  
>  	kasan_poison(object, round_up(cache->object_size, KASAN_GRANULE_SIZE),
> -			KASAN_KMALLOC_FREE, false);
> +			KASAN_KMALLOC_FREE, init);
>  
>  	if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine))
>  		return false;
> @@ -362,9 +362,10 @@ static inline bool ____kasan_slab_free(struct kmem_cache *cache,
>  	return kasan_quarantine_put(cache, object);
>  }
>  
> -bool __kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
> +bool __kasan_slab_free(struct kmem_cache *cache, void *object,
> +				unsigned long ip, bool init)
>  {
> -	return ____kasan_slab_free(cache, object, ip, true);
> +	return ____kasan_slab_free(cache, object, ip, true, init);
>  }
>  
>  static inline bool ____kasan_kfree_large(void *ptr, unsigned long ip)
> @@ -409,7 +410,7 @@ void __kasan_slab_free_mempool(void *ptr, unsigned long ip)
>  			return;
>  		kasan_poison(ptr, page_size(page), KASAN_FREE_PAGE, false);
>  	} else {
> -		____kasan_slab_free(page->slab_cache, ptr, ip, false);
> +		____kasan_slab_free(page->slab_cache, ptr, ip, false, false);
>  	}
>  }
>  
> diff --git a/mm/slab.c b/mm/slab.c
> index 936dd686dec9..d12ce9e5c3ed 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -3425,17 +3425,24 @@ static void cache_flusharray(struct kmem_cache *cachep, struct array_cache *ac)
>  static __always_inline void __cache_free(struct kmem_cache *cachep, void *objp,
>  					 unsigned long caller)
>  {
> +	bool init;
> +
>  	if (is_kfence_address(objp)) {
>  		kmemleak_free_recursive(objp, cachep->flags);
>  		__kfence_free(objp);
>  		return;
>  	}
>  
> -	if (unlikely(slab_want_init_on_free(cachep)))
> +	/*
> +	 * As memory initialization is integrated with hardware tag-based

This may no longer be true if an HW_TAGS architecture doesn't support
init (although currently it is certainly true).

Perhaps: "As memory initialization may be accelerated by some KASAN
implementations (such as some HW_TAGS architectures) ..."

or whatever else is appropriate.

> +	 * KASAN, kasan_slab_free and initialization memset must be
> +	 * kept together to avoid discrepancies in behavior.
> +	 */
> +	init = slab_want_init_on_free(cachep);
> +	if (init && !IS_ENABLED(CONFIG_KASAN_HW_TAGS))
>  		memset(objp, 0, cachep->object_size);

Same as the other patch, it seems awkward to have
!IS_ENABLED(CONFIG_KASAN_HW_TAGS) rather than a kasan_*() helper that
returns this information.

> -
> -	/* Put the object into the quarantine, don't touch it for now. */
> -	if (kasan_slab_free(cachep, objp))
> +	/* KASAN might put objp into memory quarantine, delaying its reuse. */
> +	if (kasan_slab_free(cachep, objp, init))
>  		return;
>  
>  	/* Use KCSAN to help debug racy use-after-free. */
> diff --git a/mm/slub.c b/mm/slub.c
> index f53df23760e3..c2755670d6bd 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1532,7 +1532,8 @@ static __always_inline void kfree_hook(void *x)
>  	kasan_kfree_large(x);
>  }
>  
> -static __always_inline bool slab_free_hook(struct kmem_cache *s, void *x)
> +static __always_inline bool slab_free_hook(struct kmem_cache *s,
> +						void *x, bool init)
>  {
>  	kmemleak_free_recursive(x, s->flags);
>  
> @@ -1558,8 +1559,25 @@ static __always_inline bool slab_free_hook(struct kmem_cache *s, void *x)
>  		__kcsan_check_access(x, s->object_size,
>  				     KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ASSERT);
>  
> -	/* KASAN might put x into memory quarantine, delaying its reuse */
> -	return kasan_slab_free(s, x);
> +	/*
> +	 * As memory initialization is integrated with hardware tag-based
> +	 * KASAN, kasan_slab_free and initialization memset's must be
> +	 * kept together to avoid discrepancies in behavior.
> +	 *
> +	 * The initialization memset's clear the object and the metadata,
> +	 * but don't touch the SLAB redzone.
> +	 */
> +	if (init) {
> +		int rsize;
> +
> +		if (!IS_ENABLED(CONFIG_KASAN_HW_TAGS))
> +			memset(kasan_reset_tag(x), 0, s->object_size);
> +		rsize = (s->flags & SLAB_RED_ZONE) ? s->red_left_pad : 0;
> +		memset((char *)kasan_reset_tag(x) + s->inuse, 0,
> +		       s->size - s->inuse - rsize);
> +	}
> +	/* KASAN might put x into memory quarantine, delaying its reuse. */
> +	return kasan_slab_free(s, x, init);
>  }
>  
>  static inline bool slab_free_freelist_hook(struct kmem_cache *s,
> @@ -1569,10 +1587,9 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
>  	void *object;
>  	void *next = *head;
>  	void *old_tail = *tail ? *tail : *head;
> -	int rsize;
>  
>  	if (is_kfence_address(next)) {
> -		slab_free_hook(s, next);
> +		slab_free_hook(s, next, false);
>  		return true;
>  	}
>  
> @@ -1584,20 +1601,8 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
>  		object = next;
>  		next = get_freepointer(s, object);
>  
> -		if (slab_want_init_on_free(s)) {
> -			/*
> -			 * Clear the object and the metadata, but don't touch
> -			 * the redzone.
> -			 */
> -			memset(kasan_reset_tag(object), 0, s->object_size);
> -			rsize = (s->flags & SLAB_RED_ZONE) ? s->red_left_pad
> -							   : 0;
> -			memset((char *)kasan_reset_tag(object) + s->inuse, 0,
> -			       s->size - s->inuse - rsize);
> -
> -		}
>  		/* If object's reuse doesn't have to be delayed */
> -		if (!slab_free_hook(s, object)) {
> +		if (!slab_free_hook(s, object, slab_want_init_on_free(s))) {
>  			/* Move object to the new freelist */
>  			set_freepointer(s, object, *head);
>  			*head = object;
> @@ -3235,7 +3240,7 @@ int build_detached_freelist(struct kmem_cache *s, size_t size,
>  	}
>  
>  	if (is_kfence_address(object)) {
> -		slab_free_hook(df->s, object);
> +		slab_free_hook(df->s, object, false);
>  		__kfence_free(object);
>  		p[size] = NULL; /* mark object processed */
>  		return size;
> -- 
> 2.30.1.766.gb4fecdf3b7-goog
> 

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 3/5] kasan, mm: integrate page_alloc init with HW_TAGS
  2021-03-08 11:35   ` Marco Elver
@ 2021-03-08 11:50     ` Marco Elver
  2021-03-08 14:14     ` Andrey Konovalov
  1 sibling, 0 replies; 14+ messages in thread
From: Marco Elver @ 2021-03-08 11:50 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Alexander Potapenko, Andrew Morton, Catalin Marinas, Will Deacon,
	Vincenzo Frascino, Dmitry Vyukov, Andrey Ryabinin,
	Peter Collingbourne, Evgenii Stepanov, Branislav Rankov,
	Kevin Brodsky, kasan-dev, Linux ARM,
	Linux Memory Management List, LKML

On Mon, 8 Mar 2021 at 12:35, Marco Elver <elver@google.com> wrote:
[...]
> Could we instead add a static inline helper to <linux/kasan.h>, e.g.
> kasan_supports_init() or so?

Hmm, KASAN certainly always "supports" memory initialization. So maybe
"kasan_has_accelerated_init()" is more accurate? I leave it to you to
decide which option is best.

Thanks,
-- Marco

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 3/5] kasan, mm: integrate page_alloc init with HW_TAGS
  2021-03-08 11:35   ` Marco Elver
  2021-03-08 11:50     ` Marco Elver
@ 2021-03-08 14:14     ` Andrey Konovalov
  1 sibling, 0 replies; 14+ messages in thread
From: Andrey Konovalov @ 2021-03-08 14:14 UTC (permalink / raw)
  To: Marco Elver
  Cc: Alexander Potapenko, Andrew Morton, Catalin Marinas, Will Deacon,
	Vincenzo Frascino, Dmitry Vyukov, Andrey Ryabinin,
	Peter Collingbourne, Evgenii Stepanov, Branislav Rankov,
	Kevin Brodsky, kasan-dev, Linux ARM,
	Linux Memory Management List, LKML

On Mon, Mar 8, 2021 at 12:35 PM Marco Elver <elver@google.com> wrote:
>
> > -     kasan_free_nondeferred_pages(page, order, fpi_flags);
> > +     init = want_init_on_free();
> > +     if (init && !IS_ENABLED(CONFIG_KASAN_HW_TAGS))
>
> Doing the !IS_ENABLED(CONFIG_KASAN_HW_TAGS) check is awkward, and
> assumes internal knowledge of the KASAN implementation and how all
> current and future architectures that support HW_TAGS work.
>
> Could we instead add a static inline helper to <linux/kasan.h>, e.g.
> kasan_supports_init() or so?
>
> That way, these checks won't grow uncontrollable if a future
> architecture implements HW_TAGS but not init.

Good idea, I'll add a helper in v2.

> Hmm, KASAN certainly "supports" memory initialization always. So maybe
> "kasan_has_accelerated_init()" is more accurate?  I leave it to you to
> decide what the best option is.

Let's call it kasan_has_integrated_init().
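
As a rough sketch, the page_alloc call site would then read something like
this (assuming the helper ends up in <linux/kasan.h> under that name;
details may still shift in v2):

	init = want_init_on_free();
	if (init && !kasan_has_integrated_init())
		kernel_init_free_pages(page, 1 << order);
	kasan_free_nondeferred_pages(page, order, init, fpi_flags);

and similarly for the slab_post_alloc_hook() and __cache_free() checks in
patches 4 and 5.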

Thanks!

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 5/5] kasan, mm: integrate slab init_on_free with HW_TAGS
  2021-03-08 11:45   ` Marco Elver
@ 2021-03-08 14:16     ` Andrey Konovalov
  0 siblings, 0 replies; 14+ messages in thread
From: Andrey Konovalov @ 2021-03-08 14:16 UTC (permalink / raw)
  To: Marco Elver
  Cc: Alexander Potapenko, Andrew Morton, Catalin Marinas, Will Deacon,
	Vincenzo Frascino, Dmitry Vyukov, Andrey Ryabinin,
	Peter Collingbourne, Evgenii Stepanov, Branislav Rankov,
	Kevin Brodsky, kasan-dev, Linux ARM,
	Linux Memory Management List, LKML

On Mon, Mar 8, 2021 at 12:45 PM Marco Elver <elver@google.com> wrote:
>
> >
> > -     if (unlikely(slab_want_init_on_free(cachep)))
> > +     /*
> > +      * As memory initialization is integrated with hardware tag-based
>
> This may no longer be true if the HW-tags architecture doesn't support
> init (although currently it is certainly true).
>
> Perhaps: "As memory initialization may be accelerated by some KASAN
> implementations (such as some HW_TAGS architectures) ..."
>
> or whatever else is appropriate.

Will change the comment here and in other patches, thanks!

^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2021-03-08 14:17 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-03-06  0:15 [PATCH 0/5] kasan: integrate with init_on_alloc/free Andrey Konovalov
2021-03-06  0:15 ` [PATCH 1/5] arm64: kasan: allow to init memory when setting tags Andrey Konovalov
2021-03-08 11:22   ` Marco Elver
2021-03-06  0:15 ` [PATCH 2/5] kasan: init memory in kasan_(un)poison for HW_TAGS Andrey Konovalov
2021-03-08 11:26   ` Marco Elver
2021-03-06  0:15 ` [PATCH 3/5] kasan, mm: integrate page_alloc init with HW_TAGS Andrey Konovalov
2021-03-08 11:35   ` Marco Elver
2021-03-08 11:50     ` Marco Elver
2021-03-08 14:14     ` Andrey Konovalov
2021-03-06  0:15 ` [PATCH 4/5] kasan, mm: integrate slab init_on_alloc " Andrey Konovalov
2021-03-08 11:36   ` Marco Elver
2021-03-06  0:15 ` [PATCH 5/5] kasan, mm: integrate slab init_on_free " Andrey Konovalov
2021-03-08 11:45   ` Marco Elver
2021-03-08 14:16     ` Andrey Konovalov
