* [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata
@ 2022-06-13 20:13 andrey.konovalov
  2022-06-13 20:13 ` [PATCH 01/32] kasan: check KASAN_NO_FREE_META in __kasan_metadata_size andrey.konovalov
                   ` (32 more replies)
  0 siblings, 33 replies; 54+ messages in thread
From: andrey.konovalov @ 2022-06-13 20:13 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Peter Collingbourne, Evgenii Stepanov, Florian Mayer,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

This series makes the tag-based KASAN modes store stack depot handles
for alloc/free stack traces of slab objects in a ring buffer instead of
in per-object metadata. This ring buffer is referred to as the stack
ring.

On each alloc/free of a slab object, the tagged address of the object and
the current stack trace are recorded in the stack ring.

On each bug report, if the accessed address belongs to a slab object, the
stack ring is scanned for matching entries. The newest entries are used to
print the alloc/free stack traces in the report: one entry for alloc and
one for free.

The ring buffer is lock-free.
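
To make this concrete, below is a minimal userspace model of the idea.
All names in it are invented for this sketch, it ignores the read/write
races that the kernel implementation has to handle, and the actual
implementation lands in "kasan: implement stack ring for tag-based
modes" later in the series:

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Power of two, so that unsigned index wrap-around stays consistent. */
#define STACK_RING_SIZE 4096

struct ring_entry {
	uintptr_t addr;		/* tagged address of the object */
	uint32_t stack;		/* stack depot handle */
	bool is_free;		/* free event (as opposed to alloc) */
};

static struct ring_entry stack_ring[STACK_RING_SIZE];
/* Single shared index: the contended atomic discussed below. */
static atomic_size_t ring_pos;

/* Record an entry on each alloc/free of a slab object. */
static void ring_save(uintptr_t addr, uint32_t stack, bool is_free)
{
	size_t pos = atomic_fetch_add(&ring_pos, 1) % STACK_RING_SIZE;

	stack_ring[pos].addr = addr;
	stack_ring[pos].stack = stack;
	stack_ring[pos].is_free = is_free;
}

/* On a bug report, scan backwards for the newest matching entries. */
static void ring_lookup(uintptr_t addr, uint32_t *alloc_stack,
			uint32_t *free_stack)
{
	size_t pos = atomic_load(&ring_pos);
	size_t i;

	*alloc_stack = *free_stack = 0;
	for (i = 0; i < STACK_RING_SIZE; i++) {
		struct ring_entry *e =
			&stack_ring[(pos - 1 - i) % STACK_RING_SIZE];

		if (e->addr != addr)
			continue;
		if (e->is_free && !*free_stack)
			*free_stack = e->stack;
		else if (!e->is_free && !*alloc_stack)
			*alloc_stack = e->stack;
		if (*alloc_stack && *free_stack)
			break;
	}
}

int main(void)
{
	uint32_t alloc_stack, free_stack;

	ring_save(0xdead0001, 101, false);	/* "alloc" of an object */
	ring_save(0xdead0001, 102, true);	/* "free" of the object */
	ring_lookup(0xdead0001, &alloc_stack, &free_stack);
	printf("alloc handle %u, free handle %u\n",
	       (unsigned)alloc_stack, (unsigned)free_stack);
	return 0;
}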

The advantages of this approach over storing stack trace handles in
per-object metadata for the tag-based KASAN modes:

- Allows finding relevant stack traces for use-after-free bugs without
  using quarantine for freed memory. (Currently, if the object was
  reallocated multiple times, the report contains the latest alloc/free
  stack traces, not necessarily the ones relevant to the buggy
  allocation.)
- Allows better identification and marking of use-after-free bugs,
  effectively making the CONFIG_KASAN_TAGS_IDENTIFY functionality
  always-on.
- Has fixed memory overhead.

The disadvantage:

- If the affected object was allocated/freed long before the bug happened
  and the stack trace events were purged from the stack ring, the report
  will have no stack traces.

Discussion
==========

The current implementation of the stack ring uses a single ring buffer for
the whole kernel. This might lead to contention due to atomic accesses to
the ring buffer index on multicore systems.

It is unclear to me whether the performance impact from this contention
is significant compared to the slowdown introduced by collecting stack
traces.

While these patches are being reviewed, I will do some tests on the arm64
hardware that I have. However, I do not have a large multicore arm64
system to do proper measurements.

An alternative under consideration is to keep a separate ring buffer for
each CPU and then iterate over all of them when printing a bug report.
This approach requires somehow determining which of the stack rings has
the freshest stack traces for an object when multiple stack rings
contain matching entries.
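
For illustration, a sketch of what the report side could look like with
per-CPU rings (nothing like this is implemented in this series; the
names and the use of a timestamp as the ordering key are assumptions of
the sketch). The complication is that entries would need a global
ordering key, since a shared sequence counter would reintroduce the very
contention that the per-CPU split is meant to avoid:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define NR_CPUS		8
#define PCPU_RING_SIZE	512

struct pcpu_entry {
	uintptr_t addr;
	uint32_t stack;
	bool is_free;
	uint64_t when;	/* global ordering key, e.g. a cheap timestamp */
};

static struct pcpu_entry pcpu_rings[NR_CPUS][PCPU_RING_SIZE];

/* On the report path, every ring has to be scanned, and matching
 * entries are compared via the ordering key to find the freshest. */
static const struct pcpu_entry *pcpu_lookup(uintptr_t addr, bool is_free)
{
	const struct pcpu_entry *best = NULL;
	size_t cpu, i;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		for (i = 0; i < PCPU_RING_SIZE; i++) {
			const struct pcpu_entry *e = &pcpu_rings[cpu][i];

			if (e->addr != addr || e->is_free != is_free)
				continue;
			if (!best || e->when > best->when)
				best = e;
		}
	}
	return best;
}

int main(void)
{
	/* Two frees of the same address recorded on different CPUs. */
	pcpu_rings[3][7] = (struct pcpu_entry){ 0xdead0001, 202, true, 42 };
	pcpu_rings[5][9] = (struct pcpu_entry){ 0xdead0001, 203, true, 99 };
	printf("freshest free handle: %u\n",
	       (unsigned)pcpu_lookup(0xdead0001, true)->stack);
	return 0;
}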

Further plans
=============

This series is a part of an effort to make KASAN stack trace collection
suitable for production. This requires stack trace collection to be fast
and memory-bounded.

The planned steps are:

1. Speed up stack trace collection (potentially, by using SCS;
   patches on hold until steps #2 and #3 are completed).
2. Keep stack trace handles in the stack ring (this series).
3. Add a memory-bounded mode to stack depot or provide an alternative
   memory-bounded stack storage.
4. Potentially, implement stack trace collection sampling to minimize
   the performance impact.

Thanks!

Andrey Konovalov (32):
  kasan: check KASAN_NO_FREE_META in __kasan_metadata_size
  kasan: rename kasan_set_*_info to kasan_save_*_info
  kasan: move is_kmalloc check out of save_alloc_info
  kasan: split save_alloc_info implementations
  kasan: drop CONFIG_KASAN_TAGS_IDENTIFY
  kasan: introduce kasan_print_aux_stacks
  kasan: introduce kasan_get_alloc_track
  kasan: introduce kasan_init_object_meta
  kasan: clear metadata functions for tag-based modes
  kasan: move kasan_get_*_meta to generic.c
  kasan: introduce kasan_requires_meta
  kasan: introduce kasan_init_cache_meta
  kasan: drop CONFIG_KASAN_GENERIC check from kasan_init_cache_meta
  kasan: only define kasan_metadata_size for Generic mode
  kasan: only define kasan_never_merge for Generic mode
  kasan: only define metadata offsets for Generic mode
  kasan: only define metadata structs for Generic mode
  kasan: only define kasan_cache_create for Generic mode
  kasan: pass tagged pointers to kasan_save_alloc/free_info
  kasan: move kasan_get_alloc/free_track definitions
  kasan: simplify invalid-free reporting
  kasan: cosmetic changes in report.c
  kasan: use kasan_addr_to_slab in print_address_description
  kasan: move kasan_addr_to_slab to common.c
  kasan: make kasan_addr_to_page static
  kasan: simplify print_report
  kasan: introduce complete_report_info
  kasan: fill in cache and object in complete_report_info
  kasan: rework function arguments in report.c
  kasan: introduce kasan_complete_mode_report_info
  kasan: implement stack ring for tag-based modes
  kasan: better identify bug types for tag-based modes

 include/linux/kasan.h     |  55 +++++-------
 include/linux/slab.h      |   2 +-
 lib/Kconfig.kasan         |   8 --
 mm/kasan/common.c         | 173 ++++----------------------------------
 mm/kasan/generic.c        | 154 ++++++++++++++++++++++++++++++---
 mm/kasan/kasan.h          | 138 ++++++++++++++++++++----------
 mm/kasan/report.c         | 130 +++++++++++++---------------
 mm/kasan/report_generic.c |  45 +++++++++-
 mm/kasan/report_tags.c    | 114 ++++++++++++++++++-------
 mm/kasan/tags.c           |  61 +++++++-------
 10 files changed, 491 insertions(+), 389 deletions(-)

-- 
2.25.1


* [PATCH 01/32] kasan: check KASAN_NO_FREE_META in __kasan_metadata_size
  2022-06-13 20:13 [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata andrey.konovalov
@ 2022-06-13 20:13 ` andrey.konovalov
  2022-06-20 13:39   ` Marco Elver
  2022-06-13 20:13 ` [PATCH 02/32] kasan: rename kasan_set_*_info to kasan_save_*_info andrey.konovalov
                   ` (31 subsequent siblings)
  32 siblings, 1 reply; 54+ messages in thread
From: andrey.konovalov @ 2022-06-13 20:13 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Peter Collingbourne, Evgenii Stepanov, Florian Mayer,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

__kasan_metadata_size() calculates the size of the redzone for objects
in a slab cache.

When accounting for the presence of kasan_free_meta in the redzone, this
function only compares free_meta_offset with 0. However, free_meta_offset
could also be equal to KASAN_NO_FREE_META, which indicates that
kasan_free_meta is not present at all. (In that case, the function
reports sizeof(struct kasan_free_meta) bytes of redzone that are not
actually there.)

Add a check for KASAN_NO_FREE_META to __kasan_metadata_size().

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>

---

This is a minor fix that only affects slub_debug runs, so it is probably
not worth backporting.
---
 mm/kasan/common.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index c40c0e7b3b5f..968d2365d8c1 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -223,8 +223,9 @@ size_t __kasan_metadata_size(struct kmem_cache *cache)
 		return 0;
 	return (cache->kasan_info.alloc_meta_offset ?
 		sizeof(struct kasan_alloc_meta) : 0) +
-		(cache->kasan_info.free_meta_offset ?
-		sizeof(struct kasan_free_meta) : 0);
+		((cache->kasan_info.free_meta_offset &&
+		  cache->kasan_info.free_meta_offset != KASAN_NO_FREE_META) ?
+		 sizeof(struct kasan_free_meta) : 0);
 }
 
 struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache,
-- 
2.25.1


* [PATCH 02/32] kasan: rename kasan_set_*_info to kasan_save_*_info
  2022-06-13 20:13 [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata andrey.konovalov
  2022-06-13 20:13 ` [PATCH 01/32] kasan: check KASAN_NO_FREE_META in __kasan_metadata_size andrey.konovalov
@ 2022-06-13 20:13 ` andrey.konovalov
  2022-06-20 13:39   ` Marco Elver
  2022-06-13 20:13 ` [PATCH 03/32] kasan: move is_kmalloc check out of save_alloc_info andrey.konovalov
                   ` (30 subsequent siblings)
  32 siblings, 1 reply; 54+ messages in thread
From: andrey.konovalov @ 2022-06-13 20:13 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Peter Collingbourne, Evgenii Stepanov, Florian Mayer,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Rename set_alloc_info() and kasan_set_free_info() to save_alloc_info()
and kasan_save_free_info(). The new names make more sense.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/common.c  | 8 ++++----
 mm/kasan/generic.c | 2 +-
 mm/kasan/kasan.h   | 2 +-
 mm/kasan/tags.c    | 2 +-
 4 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 968d2365d8c1..753775b894b6 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -364,7 +364,7 @@ static inline bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
 		return false;
 
 	if (kasan_stack_collection_enabled())
-		kasan_set_free_info(cache, object, tag);
+		kasan_save_free_info(cache, object, tag);
 
 	return kasan_quarantine_put(cache, object);
 }
@@ -423,7 +423,7 @@ void __kasan_slab_free_mempool(void *ptr, unsigned long ip)
 	}
 }
 
-static void set_alloc_info(struct kmem_cache *cache, void *object,
+static void save_alloc_info(struct kmem_cache *cache, void *object,
 				gfp_t flags, bool is_kmalloc)
 {
 	struct kasan_alloc_meta *alloc_meta;
@@ -467,7 +467,7 @@ void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
 
 	/* Save alloc info (if possible) for non-kmalloc() allocations. */
 	if (kasan_stack_collection_enabled())
-		set_alloc_info(cache, (void *)object, flags, false);
+		save_alloc_info(cache, (void *)object, flags, false);
 
 	return tagged_object;
 }
@@ -513,7 +513,7 @@ static inline void *____kasan_kmalloc(struct kmem_cache *cache,
 	 * This also rewrites the alloc info when called from kasan_krealloc().
 	 */
 	if (kasan_stack_collection_enabled())
-		set_alloc_info(cache, (void *)object, flags, true);
+		save_alloc_info(cache, (void *)object, flags, true);
 
 	/* Keep the tag that was set by kasan_slab_alloc(). */
 	return (void *)object;
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index 437fcc7e77cf..03a3770cfeae 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -358,7 +358,7 @@ void kasan_record_aux_stack_noalloc(void *addr)
 	return __kasan_record_aux_stack(addr, false);
 }
 
-void kasan_set_free_info(struct kmem_cache *cache,
+void kasan_save_free_info(struct kmem_cache *cache,
 				void *object, u8 tag)
 {
 	struct kasan_free_meta *free_meta;
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 610d60d6e5b8..6df8d7b01073 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -284,7 +284,7 @@ struct slab *kasan_addr_to_slab(const void *addr);
 
 depot_stack_handle_t kasan_save_stack(gfp_t flags, bool can_alloc);
 void kasan_set_track(struct kasan_track *track, gfp_t flags);
-void kasan_set_free_info(struct kmem_cache *cache, void *object, u8 tag);
+void kasan_save_free_info(struct kmem_cache *cache, void *object, u8 tag);
 struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
 				void *object, u8 tag);
 
diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
index 8f48b9502a17..b453a353bc86 100644
--- a/mm/kasan/tags.c
+++ b/mm/kasan/tags.c
@@ -17,7 +17,7 @@
 
 #include "kasan.h"
 
-void kasan_set_free_info(struct kmem_cache *cache,
+void kasan_save_free_info(struct kmem_cache *cache,
 				void *object, u8 tag)
 {
 	struct kasan_alloc_meta *alloc_meta;
-- 
2.25.1


* [PATCH 03/32] kasan: move is_kmalloc check out of save_alloc_info
  2022-06-13 20:13 [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata andrey.konovalov
  2022-06-13 20:13 ` [PATCH 01/32] kasan: check KASAN_NO_FREE_META in __kasan_metadata_size andrey.konovalov
  2022-06-13 20:13 ` [PATCH 02/32] kasan: rename kasan_set_*_info to kasan_save_*_info andrey.konovalov
@ 2022-06-13 20:13 ` andrey.konovalov
  2022-06-20 13:39   ` Marco Elver
  2022-06-13 20:13 ` [PATCH 04/32] kasan: split save_alloc_info implementations andrey.konovalov
                   ` (29 subsequent siblings)
  32 siblings, 1 reply; 54+ messages in thread
From: andrey.konovalov @ 2022-06-13 20:13 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Peter Collingbourne, Evgenii Stepanov, Florian Mayer,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Move kasan_info.is_kmalloc check out of save_alloc_info().

This is a preparatory change that simplifies the following patches
in this series.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/common.c | 15 +++++----------
 1 file changed, 5 insertions(+), 10 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 753775b894b6..a6107e8375e0 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -423,15 +423,10 @@ void __kasan_slab_free_mempool(void *ptr, unsigned long ip)
 	}
 }
 
-static void save_alloc_info(struct kmem_cache *cache, void *object,
-				gfp_t flags, bool is_kmalloc)
+static void save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
 {
 	struct kasan_alloc_meta *alloc_meta;
 
-	/* Don't save alloc info for kmalloc caches in kasan_slab_alloc(). */
-	if (cache->kasan_info.is_kmalloc && !is_kmalloc)
-		return;
-
 	alloc_meta = kasan_get_alloc_meta(cache, object);
 	if (alloc_meta)
 		kasan_set_track(&alloc_meta->alloc_track, flags);
@@ -466,8 +461,8 @@ void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
 	kasan_unpoison(tagged_object, cache->object_size, init);
 
 	/* Save alloc info (if possible) for non-kmalloc() allocations. */
-	if (kasan_stack_collection_enabled())
-		save_alloc_info(cache, (void *)object, flags, false);
+	if (kasan_stack_collection_enabled() && !cache->kasan_info.is_kmalloc)
+		save_alloc_info(cache, (void *)object, flags);
 
 	return tagged_object;
 }
@@ -512,8 +507,8 @@ static inline void *____kasan_kmalloc(struct kmem_cache *cache,
 	 * Save alloc info (if possible) for kmalloc() allocations.
 	 * This also rewrites the alloc info when called from kasan_krealloc().
 	 */
-	if (kasan_stack_collection_enabled())
-		save_alloc_info(cache, (void *)object, flags, true);
+	if (kasan_stack_collection_enabled() && cache->kasan_info.is_kmalloc)
+		save_alloc_info(cache, (void *)object, flags);
 
 	/* Keep the tag that was set by kasan_slab_alloc(). */
 	return (void *)object;
-- 
2.25.1


* [PATCH 04/32] kasan: split save_alloc_info implementations
  2022-06-13 20:13 [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata andrey.konovalov
                   ` (2 preceding siblings ...)
  2022-06-13 20:13 ` [PATCH 03/32] kasan: move is_kmalloc check out of save_alloc_info andrey.konovalov
@ 2022-06-13 20:13 ` andrey.konovalov
  2022-06-20 13:39   ` Marco Elver
  2022-06-13 20:13 ` [PATCH 05/32] kasan: drop CONFIG_KASAN_TAGS_IDENTIFY andrey.konovalov
                   ` (28 subsequent siblings)
  32 siblings, 1 reply; 54+ messages in thread
From: andrey.konovalov @ 2022-06-13 20:13 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Peter Collingbourne, Evgenii Stepanov, Florian Mayer,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Provide standalone implementations of save_alloc_info() for the Generic
and tag-based modes.

For now, the implementations are the same, but they will diverge later
in the series.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/common.c  | 13 ++-----------
 mm/kasan/generic.c |  9 +++++++++
 mm/kasan/kasan.h   |  1 +
 mm/kasan/tags.c    |  9 +++++++++
 4 files changed, 21 insertions(+), 11 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index a6107e8375e0..2848c7a2402a 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -423,15 +423,6 @@ void __kasan_slab_free_mempool(void *ptr, unsigned long ip)
 	}
 }
 
-static void save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
-{
-	struct kasan_alloc_meta *alloc_meta;
-
-	alloc_meta = kasan_get_alloc_meta(cache, object);
-	if (alloc_meta)
-		kasan_set_track(&alloc_meta->alloc_track, flags);
-}
-
 void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
 					void *object, gfp_t flags, bool init)
 {
@@ -462,7 +453,7 @@ void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
 
 	/* Save alloc info (if possible) for non-kmalloc() allocations. */
 	if (kasan_stack_collection_enabled() && !cache->kasan_info.is_kmalloc)
-		save_alloc_info(cache, (void *)object, flags);
+		kasan_save_alloc_info(cache, (void *)object, flags);
 
 	return tagged_object;
 }
@@ -508,7 +499,7 @@ static inline void *____kasan_kmalloc(struct kmem_cache *cache,
 	 * This also rewrites the alloc info when called from kasan_krealloc().
 	 */
 	if (kasan_stack_collection_enabled() && cache->kasan_info.is_kmalloc)
-		save_alloc_info(cache, (void *)object, flags);
+		kasan_save_alloc_info(cache, (void *)object, flags);
 
 	/* Keep the tag that was set by kasan_slab_alloc(). */
 	return (void *)object;
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index 03a3770cfeae..98c451a3b01f 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -358,6 +358,15 @@ void kasan_record_aux_stack_noalloc(void *addr)
 	return __kasan_record_aux_stack(addr, false);
 }
 
+void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
+{
+	struct kasan_alloc_meta *alloc_meta;
+
+	alloc_meta = kasan_get_alloc_meta(cache, object);
+	if (alloc_meta)
+		kasan_set_track(&alloc_meta->alloc_track, flags);
+}
+
 void kasan_save_free_info(struct kmem_cache *cache,
 				void *object, u8 tag)
 {
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 6df8d7b01073..610057e651d2 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -284,6 +284,7 @@ struct slab *kasan_addr_to_slab(const void *addr);
 
 depot_stack_handle_t kasan_save_stack(gfp_t flags, bool can_alloc);
 void kasan_set_track(struct kasan_track *track, gfp_t flags);
+void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags);
 void kasan_save_free_info(struct kmem_cache *cache, void *object, u8 tag);
 struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
 				void *object, u8 tag);
diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
index b453a353bc86..1ba3c8399f72 100644
--- a/mm/kasan/tags.c
+++ b/mm/kasan/tags.c
@@ -17,6 +17,15 @@
 
 #include "kasan.h"
 
+void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
+{
+	struct kasan_alloc_meta *alloc_meta;
+
+	alloc_meta = kasan_get_alloc_meta(cache, object);
+	if (alloc_meta)
+		kasan_set_track(&alloc_meta->alloc_track, flags);
+}
+
 void kasan_save_free_info(struct kmem_cache *cache,
 				void *object, u8 tag)
 {
-- 
2.25.1


* [PATCH 05/32] kasan: drop CONFIG_KASAN_TAGS_IDENTIFY
  2022-06-13 20:13 [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata andrey.konovalov
                   ` (3 preceding siblings ...)
  2022-06-13 20:13 ` [PATCH 04/32] kasan: split save_alloc_info implementations andrey.konovalov
@ 2022-06-13 20:13 ` andrey.konovalov
  2022-06-20 13:39   ` Marco Elver
  2022-06-13 20:13 ` [PATCH 06/32] kasan: introduce kasan_print_aux_stacks andrey.konovalov
                   ` (27 subsequent siblings)
  32 siblings, 1 reply; 54+ messages in thread
From: andrey.konovalov @ 2022-06-13 20:13 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Peter Collingbourne, Evgenii Stepanov, Florian Mayer,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Drop CONFIG_KASAN_TAGS_IDENTIFY and related code to simplify making
changes to the reporting code.

The dropped functionality will be restored in the following patches in
this series.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 lib/Kconfig.kasan      |  8 --------
 mm/kasan/kasan.h       | 12 +-----------
 mm/kasan/report_tags.c | 28 ----------------------------
 mm/kasan/tags.c        | 21 ++-------------------
 4 files changed, 3 insertions(+), 66 deletions(-)

diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index f0973da583e0..ca09b1cf8ee9 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -167,14 +167,6 @@ config KASAN_STACK
 	  as well, as it adds inline-style instrumentation that is run
 	  unconditionally.
 
-config KASAN_TAGS_IDENTIFY
-	bool "Memory corruption type identification"
-	depends on KASAN_SW_TAGS || KASAN_HW_TAGS
-	help
-	  Enables best-effort identification of the bug types (use-after-free
-	  or out-of-bounds) at the cost of increased memory consumption.
-	  Only applicable for the tag-based KASAN modes.
-
 config KASAN_VMALLOC
 	bool "Check accesses to vmalloc allocations"
 	depends on HAVE_ARCH_KASAN_VMALLOC
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 610057e651d2..aa6b43936f8d 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -168,23 +168,13 @@ struct kasan_track {
 	depot_stack_handle_t stack;
 };
 
-#if defined(CONFIG_KASAN_TAGS_IDENTIFY) && defined(CONFIG_KASAN_SW_TAGS)
-#define KASAN_NR_FREE_STACKS 5
-#else
-#define KASAN_NR_FREE_STACKS 1
-#endif
-
 struct kasan_alloc_meta {
 	struct kasan_track alloc_track;
 	/* Generic mode stores free track in kasan_free_meta. */
 #ifdef CONFIG_KASAN_GENERIC
 	depot_stack_handle_t aux_stack[2];
 #else
-	struct kasan_track free_track[KASAN_NR_FREE_STACKS];
-#endif
-#ifdef CONFIG_KASAN_TAGS_IDENTIFY
-	u8 free_pointer_tag[KASAN_NR_FREE_STACKS];
-	u8 free_track_idx;
+	struct kasan_track free_track;
 #endif
 };
 
diff --git a/mm/kasan/report_tags.c b/mm/kasan/report_tags.c
index e25d2166e813..35cf3cae4aa4 100644
--- a/mm/kasan/report_tags.c
+++ b/mm/kasan/report_tags.c
@@ -5,37 +5,9 @@
  */
 
 #include "kasan.h"
-#include "../slab.h"
 
 const char *kasan_get_bug_type(struct kasan_report_info *info)
 {
-#ifdef CONFIG_KASAN_TAGS_IDENTIFY
-	struct kasan_alloc_meta *alloc_meta;
-	struct kmem_cache *cache;
-	struct slab *slab;
-	const void *addr;
-	void *object;
-	u8 tag;
-	int i;
-
-	tag = get_tag(info->access_addr);
-	addr = kasan_reset_tag(info->access_addr);
-	slab = kasan_addr_to_slab(addr);
-	if (slab) {
-		cache = slab->slab_cache;
-		object = nearest_obj(cache, slab, (void *)addr);
-		alloc_meta = kasan_get_alloc_meta(cache, object);
-
-		if (alloc_meta) {
-			for (i = 0; i < KASAN_NR_FREE_STACKS; i++) {
-				if (alloc_meta->free_pointer_tag[i] == tag)
-					return "use-after-free";
-			}
-		}
-		return "out-of-bounds";
-	}
-#endif
-
 	/*
 	 * If access_size is a negative number, then it has reason to be
 	 * defined as out-of-bounds bug type.
diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
index 1ba3c8399f72..e0e5de8ce834 100644
--- a/mm/kasan/tags.c
+++ b/mm/kasan/tags.c
@@ -30,39 +30,22 @@ void kasan_save_free_info(struct kmem_cache *cache,
 				void *object, u8 tag)
 {
 	struct kasan_alloc_meta *alloc_meta;
-	u8 idx = 0;
 
 	alloc_meta = kasan_get_alloc_meta(cache, object);
 	if (!alloc_meta)
 		return;
 
-#ifdef CONFIG_KASAN_TAGS_IDENTIFY
-	idx = alloc_meta->free_track_idx;
-	alloc_meta->free_pointer_tag[idx] = tag;
-	alloc_meta->free_track_idx = (idx + 1) % KASAN_NR_FREE_STACKS;
-#endif
-
-	kasan_set_track(&alloc_meta->free_track[idx], GFP_NOWAIT);
+	kasan_set_track(&alloc_meta->free_track, GFP_NOWAIT);
 }
 
 struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
 				void *object, u8 tag)
 {
 	struct kasan_alloc_meta *alloc_meta;
-	int i = 0;
 
 	alloc_meta = kasan_get_alloc_meta(cache, object);
 	if (!alloc_meta)
 		return NULL;
 
-#ifdef CONFIG_KASAN_TAGS_IDENTIFY
-	for (i = 0; i < KASAN_NR_FREE_STACKS; i++) {
-		if (alloc_meta->free_pointer_tag[i] == tag)
-			break;
-	}
-	if (i == KASAN_NR_FREE_STACKS)
-		i = alloc_meta->free_track_idx;
-#endif
-
-	return &alloc_meta->free_track[i];
+	return &alloc_meta->free_track;
 }
-- 
2.25.1


* [PATCH 06/32] kasan: introduce kasan_print_aux_stacks
  2022-06-13 20:13 [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata andrey.konovalov
                   ` (4 preceding siblings ...)
  2022-06-13 20:13 ` [PATCH 05/32] kasan: drop CONFIG_KASAN_TAGS_IDENTIFY andrey.konovalov
@ 2022-06-13 20:13 ` andrey.konovalov
  2022-06-17 11:34   ` Marco Elver
  2022-06-13 20:13 ` [PATCH 07/32] kasan: introduce kasan_get_alloc_track andrey.konovalov
                   ` (26 subsequent siblings)
  32 siblings, 1 reply; 54+ messages in thread
From: andrey.konovalov @ 2022-06-13 20:13 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Peter Collingbourne, Evgenii Stepanov, Florian Mayer,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Add a kasan_print_aux_stacks() helper that prints the auxiliary stack
traces for the Generic mode.

This change hides references to alloc_meta from the common reporting code.
This is desired as only the Generic mode will be using per-object metadata
after this series.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/kasan.h          |  6 ++++++
 mm/kasan/report.c         | 15 +--------------
 mm/kasan/report_generic.c | 20 ++++++++++++++++++++
 3 files changed, 27 insertions(+), 14 deletions(-)

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index aa6b43936f8d..bcea5ed15631 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -265,6 +265,12 @@ void kasan_print_address_stack_frame(const void *addr);
 static inline void kasan_print_address_stack_frame(const void *addr) { }
 #endif
 
+#ifdef CONFIG_KASAN_GENERIC
+void kasan_print_aux_stacks(struct kmem_cache *cache, const void *object);
+#else
+static inline void kasan_print_aux_stacks(struct kmem_cache *cache, const void *object) { }
+#endif
+
 bool kasan_report(unsigned long addr, size_t size,
 		bool is_write, unsigned long ip);
 void kasan_report_invalid_free(void *object, unsigned long ip);
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index b341a191651d..35dd8aeb115c 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -266,20 +266,7 @@ static void describe_object_stacks(struct kmem_cache *cache, void *object,
 		pr_err("\n");
 	}
 
-#ifdef CONFIG_KASAN_GENERIC
-	if (!alloc_meta)
-		return;
-	if (alloc_meta->aux_stack[0]) {
-		pr_err("Last potentially related work creation:\n");
-		stack_depot_print(alloc_meta->aux_stack[0]);
-		pr_err("\n");
-	}
-	if (alloc_meta->aux_stack[1]) {
-		pr_err("Second to last potentially related work creation:\n");
-		stack_depot_print(alloc_meta->aux_stack[1]);
-		pr_err("\n");
-	}
-#endif
+	kasan_print_aux_stacks(cache, object);
 }
 
 static void describe_object(struct kmem_cache *cache, void *object,
diff --git a/mm/kasan/report_generic.c b/mm/kasan/report_generic.c
index 6689fb9a919b..348dc207d462 100644
--- a/mm/kasan/report_generic.c
+++ b/mm/kasan/report_generic.c
@@ -132,6 +132,26 @@ void kasan_metadata_fetch_row(char *buffer, void *row)
 	memcpy(buffer, kasan_mem_to_shadow(row), META_BYTES_PER_ROW);
 }
 
+void kasan_print_aux_stacks(struct kmem_cache *cache, const void *object)
+{
+	struct kasan_alloc_meta *alloc_meta;
+
+	alloc_meta = kasan_get_alloc_meta(cache, object);
+	if (!alloc_meta)
+		return;
+
+	if (alloc_meta->aux_stack[0]) {
+		pr_err("Last potentially related work creation:\n");
+		stack_depot_print(alloc_meta->aux_stack[0]);
+		pr_err("\n");
+	}
+	if (alloc_meta->aux_stack[1]) {
+		pr_err("Second to last potentially related work creation:\n");
+		stack_depot_print(alloc_meta->aux_stack[1]);
+		pr_err("\n");
+	}
+}
+
 #ifdef CONFIG_KASAN_STACK
 static bool __must_check tokenize_frame_descr(const char **frame_descr,
 					      char *token, size_t max_tok_len,
-- 
2.25.1


* [PATCH 07/32] kasan: introduce kasan_get_alloc_track
  2022-06-13 20:13 [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata andrey.konovalov
                   ` (5 preceding siblings ...)
  2022-06-13 20:13 ` [PATCH 06/32] kasan: introduce kasan_print_aux_stacks andrey.konovalov
@ 2022-06-13 20:13 ` andrey.konovalov
  2022-06-20 13:39   ` Marco Elver
  2022-06-13 20:13 ` [PATCH 08/32] kasan: introduce kasan_init_object_meta andrey.konovalov
                   ` (25 subsequent siblings)
  32 siblings, 1 reply; 54+ messages in thread
From: andrey.konovalov @ 2022-06-13 20:13 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Peter Collingbourne, Evgenii Stepanov, Florian Mayer,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Add a kasan_get_alloc_track() helper that fetches alloc_track for a slab
object and use this helper in the common reporting code.

For now, the implementations of this helper are the same for the Generic
and tag-based modes, but they will diverge later in the series.

This change hides references to alloc_meta from the common reporting code.
This is desired as only the Generic mode will be using per-object metadata
after this series.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/generic.c | 14 +++++++++++++-
 mm/kasan/kasan.h   |  4 +++-
 mm/kasan/report.c  |  8 ++++----
 mm/kasan/tags.c    | 14 +++++++++++++-
 4 files changed, 33 insertions(+), 7 deletions(-)

diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index 98c451a3b01f..f212b9ae57b5 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -381,8 +381,20 @@ void kasan_save_free_info(struct kmem_cache *cache,
 	*(u8 *)kasan_mem_to_shadow(object) = KASAN_SLAB_FREETRACK;
 }
 
+struct kasan_track *kasan_get_alloc_track(struct kmem_cache *cache,
+						void *object)
+{
+	struct kasan_alloc_meta *alloc_meta;
+
+	alloc_meta = kasan_get_alloc_meta(cache, object);
+	if (!alloc_meta)
+		return NULL;
+
+	return &alloc_meta->alloc_track;
+}
+
 struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
-				void *object, u8 tag)
+						void *object, u8 tag)
 {
 	if (*(u8 *)kasan_mem_to_shadow(object) != KASAN_SLAB_FREETRACK)
 		return NULL;
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index bcea5ed15631..4005da62a1e1 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -282,8 +282,10 @@ depot_stack_handle_t kasan_save_stack(gfp_t flags, bool can_alloc);
 void kasan_set_track(struct kasan_track *track, gfp_t flags);
 void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags);
 void kasan_save_free_info(struct kmem_cache *cache, void *object, u8 tag);
+struct kasan_track *kasan_get_alloc_track(struct kmem_cache *cache,
+						void *object);
 struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
-				void *object, u8 tag);
+						void *object, u8 tag);
 
 #if defined(CONFIG_KASAN_GENERIC) && \
 	(defined(CONFIG_SLAB) || defined(CONFIG_SLUB))
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 35dd8aeb115c..f951fd39db74 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -251,12 +251,12 @@ static void describe_object_addr(struct kmem_cache *cache, void *object,
 static void describe_object_stacks(struct kmem_cache *cache, void *object,
 					const void *addr, u8 tag)
 {
-	struct kasan_alloc_meta *alloc_meta;
+	struct kasan_track *alloc_track;
 	struct kasan_track *free_track;
 
-	alloc_meta = kasan_get_alloc_meta(cache, object);
-	if (alloc_meta) {
-		print_track(&alloc_meta->alloc_track, "Allocated");
+	alloc_track = kasan_get_alloc_track(cache, object);
+	if (alloc_track) {
+		print_track(alloc_track, "Allocated");
 		pr_err("\n");
 	}
 
diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
index e0e5de8ce834..7b1fc8e7c99c 100644
--- a/mm/kasan/tags.c
+++ b/mm/kasan/tags.c
@@ -38,8 +38,20 @@ void kasan_save_free_info(struct kmem_cache *cache,
 	kasan_set_track(&alloc_meta->free_track, GFP_NOWAIT);
 }
 
+struct kasan_track *kasan_get_alloc_track(struct kmem_cache *cache,
+						void *object)
+{
+	struct kasan_alloc_meta *alloc_meta;
+
+	alloc_meta = kasan_get_alloc_meta(cache, object);
+	if (!alloc_meta)
+		return NULL;
+
+	return &alloc_meta->alloc_track;
+}
+
 struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
-				void *object, u8 tag)
+						void *object, u8 tag)
 {
 	struct kasan_alloc_meta *alloc_meta;
 
-- 
2.25.1


* [PATCH 08/32] kasan: introduce kasan_init_object_meta
  2022-06-13 20:13 [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata andrey.konovalov
                   ` (6 preceding siblings ...)
  2022-06-13 20:13 ` [PATCH 07/32] kasan: introduce kasan_get_alloc_track andrey.konovalov
@ 2022-06-13 20:13 ` andrey.konovalov
  2022-06-20 13:39   ` Marco Elver
  2022-06-13 20:14 ` [PATCH 09/32] kasan: clear metadata functions for tag-based modes andrey.konovalov
                   ` (24 subsequent siblings)
  32 siblings, 1 reply; 54+ messages in thread
From: andrey.konovalov @ 2022-06-13 20:13 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Peter Collingbourne, Evgenii Stepanov, Florian Mayer,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Add a kasan_init_object_meta() helper that initializes metadata for a slab
object and use it in the common code.

For now, the implementations of this helper are the same for the Generic
and tag-based modes, but they will diverge later in the series.

This change hides references to alloc_meta from the common code. This is
desired as only the Generic mode will be using per-object metadata after
this series.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/common.c  | 10 +++-------
 mm/kasan/generic.c |  9 +++++++++
 mm/kasan/kasan.h   |  2 ++
 mm/kasan/tags.c    |  9 +++++++++
 4 files changed, 23 insertions(+), 7 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 2848c7a2402a..f0ee1c1b4b3c 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -312,13 +312,9 @@ static inline u8 assign_tag(struct kmem_cache *cache,
 void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
 						const void *object)
 {
-	struct kasan_alloc_meta *alloc_meta;
-
-	if (kasan_stack_collection_enabled()) {
-		alloc_meta = kasan_get_alloc_meta(cache, object);
-		if (alloc_meta)
-			__memset(alloc_meta, 0, sizeof(*alloc_meta));
-	}
+	/* Initialize per-object metadata if it is present. */
+	if (kasan_stack_collection_enabled())
+		kasan_init_object_meta(cache, object);
 
 	/* Tag is ignored in set_tag() without CONFIG_KASAN_SW/HW_TAGS */
 	object = set_tag(object, assign_tag(cache, object, true));
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index f212b9ae57b5..5462ddbc21e6 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -328,6 +328,15 @@ DEFINE_ASAN_SET_SHADOW(f3);
 DEFINE_ASAN_SET_SHADOW(f5);
 DEFINE_ASAN_SET_SHADOW(f8);
 
+void kasan_init_object_meta(struct kmem_cache *cache, const void *object)
+{
+	struct kasan_alloc_meta *alloc_meta;
+
+	alloc_meta = kasan_get_alloc_meta(cache, object);
+	if (alloc_meta)
+		__memset(alloc_meta, 0, sizeof(*alloc_meta));
+}
+
 static void __kasan_record_aux_stack(void *addr, bool can_alloc)
 {
 	struct slab *slab = kasan_addr_to_slab(addr);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 4005da62a1e1..751c3b17749a 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -278,6 +278,8 @@ void kasan_report_invalid_free(void *object, unsigned long ip);
 struct page *kasan_addr_to_page(const void *addr);
 struct slab *kasan_addr_to_slab(const void *addr);
 
+void kasan_init_object_meta(struct kmem_cache *cache, const void *object);
+
 depot_stack_handle_t kasan_save_stack(gfp_t flags, bool can_alloc);
 void kasan_set_track(struct kasan_track *track, gfp_t flags);
 void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags);
diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
index 7b1fc8e7c99c..2e200969a4b8 100644
--- a/mm/kasan/tags.c
+++ b/mm/kasan/tags.c
@@ -17,6 +17,15 @@
 
 #include "kasan.h"
 
+void kasan_init_object_meta(struct kmem_cache *cache, const void *object)
+{
+	struct kasan_alloc_meta *alloc_meta;
+
+	alloc_meta = kasan_get_alloc_meta(cache, object);
+	if (alloc_meta)
+		__memset(alloc_meta, 0, sizeof(*alloc_meta));
+}
+
 void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
 {
 	struct kasan_alloc_meta *alloc_meta;
-- 
2.25.1


* [PATCH 09/32] kasan: clear metadata functions for tag-based modes
  2022-06-13 20:13 [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata andrey.konovalov
                   ` (7 preceding siblings ...)
  2022-06-13 20:13 ` [PATCH 08/32] kasan: introduce kasan_init_object_meta andrey.konovalov
@ 2022-06-13 20:14 ` andrey.konovalov
  2022-06-20 13:39   ` Marco Elver
  2022-06-13 20:14 ` [PATCH 10/32] kasan: move kasan_get_*_meta to generic.c andrey.konovalov
                   ` (23 subsequent siblings)
  32 siblings, 1 reply; 54+ messages in thread
From: andrey.konovalov @ 2022-06-13 20:14 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Peter Collingbourne, Evgenii Stepanov, Florian Mayer,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Remove implementations of the metadata-related functions for the tag-based
modes.

The following patches in the series will provide alternative
implementations.

As of this patch, the tag-based modes no longer collect alloc and free
stack traces. This functionality will be restored later in the series.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/tags.c | 33 ++-------------------------------
 1 file changed, 2 insertions(+), 31 deletions(-)

diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
index 2e200969a4b8..f11c89505c77 100644
--- a/mm/kasan/tags.c
+++ b/mm/kasan/tags.c
@@ -19,54 +19,25 @@
 
 void kasan_init_object_meta(struct kmem_cache *cache, const void *object)
 {
-	struct kasan_alloc_meta *alloc_meta;
-
-	alloc_meta = kasan_get_alloc_meta(cache, object);
-	if (alloc_meta)
-		__memset(alloc_meta, 0, sizeof(*alloc_meta));
 }
 
 void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
 {
-	struct kasan_alloc_meta *alloc_meta;
-
-	alloc_meta = kasan_get_alloc_meta(cache, object);
-	if (alloc_meta)
-		kasan_set_track(&alloc_meta->alloc_track, flags);
 }
 
 void kasan_save_free_info(struct kmem_cache *cache,
 				void *object, u8 tag)
 {
-	struct kasan_alloc_meta *alloc_meta;
-
-	alloc_meta = kasan_get_alloc_meta(cache, object);
-	if (!alloc_meta)
-		return;
-
-	kasan_set_track(&alloc_meta->free_track, GFP_NOWAIT);
 }
 
 struct kasan_track *kasan_get_alloc_track(struct kmem_cache *cache,
 						void *object)
 {
-	struct kasan_alloc_meta *alloc_meta;
-
-	alloc_meta = kasan_get_alloc_meta(cache, object);
-	if (!alloc_meta)
-		return NULL;
-
-	return &alloc_meta->alloc_track;
+	return NULL;
 }
 
 struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
 						void *object, u8 tag)
 {
-	struct kasan_alloc_meta *alloc_meta;
-
-	alloc_meta = kasan_get_alloc_meta(cache, object);
-	if (!alloc_meta)
-		return NULL;
-
-	return &alloc_meta->free_track;
+	return NULL;
 }
-- 
2.25.1


* [PATCH 10/32] kasan: move kasan_get_*_meta to generic.c
  2022-06-13 20:13 [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata andrey.konovalov
                   ` (8 preceding siblings ...)
  2022-06-13 20:14 ` [PATCH 09/32] kasan: clear metadata functions for tag-based modes andrey.konovalov
@ 2022-06-13 20:14 ` andrey.konovalov
  2022-06-13 20:14 ` [PATCH 11/32] kasan: introduce kasan_requires_meta andrey.konovalov
                   ` (22 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: andrey.konovalov @ 2022-06-13 20:14 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Peter Collingbourne, Evgenii Stepanov, Florian Mayer,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Move the implementations of kasan_get_alloc/free_meta() to generic.c,
as the common KASAN code does not use these functions anymore.

Also drop kasan_reset_tag() from the implementation, as the Generic
mode does not tag pointers.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/common.c  | 19 -------------------
 mm/kasan/generic.c | 17 +++++++++++++++++
 mm/kasan/kasan.h   | 14 +++++++-------
 3 files changed, 24 insertions(+), 26 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index f0ee1c1b4b3c..226eaa714da2 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -228,25 +228,6 @@ size_t __kasan_metadata_size(struct kmem_cache *cache)
 		 sizeof(struct kasan_free_meta) : 0);
 }
 
-struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache,
-					      const void *object)
-{
-	if (!cache->kasan_info.alloc_meta_offset)
-		return NULL;
-	return kasan_reset_tag(object) + cache->kasan_info.alloc_meta_offset;
-}
-
-#ifdef CONFIG_KASAN_GENERIC
-struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache,
-					    const void *object)
-{
-	BUILD_BUG_ON(sizeof(struct kasan_free_meta) > 32);
-	if (cache->kasan_info.free_meta_offset == KASAN_NO_FREE_META)
-		return NULL;
-	return kasan_reset_tag(object) + cache->kasan_info.free_meta_offset;
-}
-#endif
-
 void __kasan_poison_slab(struct slab *slab)
 {
 	struct page *page = slab_page(slab);
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index 5462ddbc21e6..fa654cb96a0d 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -328,6 +328,23 @@ DEFINE_ASAN_SET_SHADOW(f3);
 DEFINE_ASAN_SET_SHADOW(f5);
 DEFINE_ASAN_SET_SHADOW(f8);
 
+struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache,
+					      const void *object)
+{
+	if (!cache->kasan_info.alloc_meta_offset)
+		return NULL;
+	return (void *)object + cache->kasan_info.alloc_meta_offset;
+}
+
+struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache,
+					    const void *object)
+{
+	BUILD_BUG_ON(sizeof(struct kasan_free_meta) > 32);
+	if (cache->kasan_info.free_meta_offset == KASAN_NO_FREE_META)
+		return NULL;
+	return (void *)object + cache->kasan_info.free_meta_offset;
+}
+
 void kasan_init_object_meta(struct kmem_cache *cache, const void *object)
 {
 	struct kasan_alloc_meta *alloc_meta;
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 751c3b17749a..ff7a1597aa51 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -208,13 +208,6 @@ struct kunit_kasan_status {
 };
 #endif
 
-struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache,
-						const void *object);
-#ifdef CONFIG_KASAN_GENERIC
-struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache,
-						const void *object);
-#endif
-
 #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
 
 static inline const void *kasan_shadow_to_mem(const void *shadow_addr)
@@ -280,6 +273,13 @@ struct slab *kasan_addr_to_slab(const void *addr);
 
 void kasan_init_object_meta(struct kmem_cache *cache, const void *object);
 
+#ifdef CONFIG_KASAN_GENERIC
+struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache,
+						const void *object);
+struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache,
+						const void *object);
+#endif
+
 depot_stack_handle_t kasan_save_stack(gfp_t flags, bool can_alloc);
 void kasan_set_track(struct kasan_track *track, gfp_t flags);
 void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags);
-- 
2.25.1


* [PATCH 11/32] kasan: introduce kasan_requires_meta
  2022-06-13 20:13 [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata andrey.konovalov
                   ` (9 preceding siblings ...)
  2022-06-13 20:14 ` [PATCH 10/32] kasan: move kasan_get_*_meta to generic.c andrey.konovalov
@ 2022-06-13 20:14 ` andrey.konovalov
  2022-06-13 20:14 ` [PATCH 12/32] kasan: introduce kasan_init_cache_meta andrey.konovalov
                   ` (21 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: andrey.konovalov @ 2022-06-13 20:14 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Peter Collingbourne, Evgenii Stepanov, Florian Mayer,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Add a kasan_requires_meta() helper that indicates whether the enabled
KASAN mode requires per-object metadata and use this helper in the common
code.

Also hide kasan_init_object_meta() under CONFIG_KASAN_GENERIC ifdef check,
as Generic is the only mode that uses per-object metadata.

To allow for a potential future change that makes Generic KASAN support
the kasan.stacktrace command-line parameter, let kasan_requires_meta()
return kasan_stack_collection_enabled() instead of simply returning true.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/common.c | 13 +++++--------
 mm/kasan/kasan.h  | 33 +++++++++++++++++++++++++++++----
 mm/kasan/tags.c   |  4 ----
 3 files changed, 34 insertions(+), 16 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 226eaa714da2..a3dee7cead89 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -88,13 +88,10 @@ asmlinkage void kasan_unpoison_task_stack_below(const void *watermark)
 }
 #endif /* CONFIG_KASAN_STACK */
 
-/*
- * Only allow cache merging when stack collection is disabled and no metadata
- * is present.
- */
+/* Only allow cache merging when no per-object metadata is present. */
 slab_flags_t __kasan_never_merge(void)
 {
-	if (kasan_stack_collection_enabled())
+	if (kasan_requires_meta())
 		return SLAB_KASAN;
 	return 0;
 }
@@ -151,7 +148,7 @@ void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
 	 */
 	*flags |= SLAB_KASAN;
 
-	if (!kasan_stack_collection_enabled())
+	if (!kasan_requires_meta())
 		return;
 
 	ok_size = *size;
@@ -219,7 +216,7 @@ void __kasan_cache_create_kmalloc(struct kmem_cache *cache)
 
 size_t __kasan_metadata_size(struct kmem_cache *cache)
 {
-	if (!kasan_stack_collection_enabled())
+	if (!kasan_requires_meta())
 		return 0;
 	return (cache->kasan_info.alloc_meta_offset ?
 		sizeof(struct kasan_alloc_meta) : 0) +
@@ -294,7 +291,7 @@ void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
 						const void *object)
 {
 	/* Initialize per-object metadata if it is present. */
-	if (kasan_stack_collection_enabled())
+	if (kasan_requires_meta())
 		kasan_init_object_meta(cache, object);
 
 	/* Tag is ignored in set_tag() without CONFIG_KASAN_SW/HW_TAGS */
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index ff7a1597aa51..cf123d99f2fe 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -43,7 +43,7 @@ static inline bool kasan_sync_fault_possible(void)
 	return kasan_mode == KASAN_MODE_SYNC || kasan_mode == KASAN_MODE_ASYMM;
 }
 
-#else
+#else /* CONFIG_KASAN_HW_TAGS */
 
 static inline bool kasan_stack_collection_enabled(void)
 {
@@ -60,7 +60,31 @@ static inline bool kasan_sync_fault_possible(void)
 	return true;
 }
 
-#endif
+#endif /* CONFIG_KASAN_HW_TAGS */
+
+#ifdef CONFIG_KASAN_GENERIC
+
+/* Generic KASAN uses per-object metadata to store stack traces. */
+static inline bool kasan_requires_meta(void)
+{
+	/*
+	 * Technically, Generic KASAN always collects stack traces right now.
+	 * However, let's use kasan_stack_collection_enabled() in case the
+	 * kasan.stacktrace command-line argument is changed to affect
+	 * Generic KASAN.
+	 */
+	return kasan_stack_collection_enabled();
+}
+
+#else /* CONFIG_KASAN_GENERIC */
+
+/* Tag-based KASAN modes do not use per-object metadata. */
+static inline bool kasan_requires_meta(void)
+{
+	return false;
+}
+
+#endif /* CONFIG_KASAN_GENERIC */
 
 #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
 #define KASAN_GRANULE_SIZE	(1UL << KASAN_SHADOW_SCALE_SHIFT)
@@ -271,13 +295,14 @@ void kasan_report_invalid_free(void *object, unsigned long ip);
 struct page *kasan_addr_to_page(const void *addr);
 struct slab *kasan_addr_to_slab(const void *addr);
 
-void kasan_init_object_meta(struct kmem_cache *cache, const void *object);
-
 #ifdef CONFIG_KASAN_GENERIC
+void kasan_init_object_meta(struct kmem_cache *cache, const void *object);
 struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache,
 						const void *object);
 struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache,
 						const void *object);
+#else
+static inline void kasan_init_object_meta(struct kmem_cache *cache, const void *object) { }
 #endif
 
 depot_stack_handle_t kasan_save_stack(gfp_t flags, bool can_alloc);
diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
index f11c89505c77..4f24669085e9 100644
--- a/mm/kasan/tags.c
+++ b/mm/kasan/tags.c
@@ -17,10 +17,6 @@
 
 #include "kasan.h"
 
-void kasan_init_object_meta(struct kmem_cache *cache, const void *object)
-{
-}
-
 void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
 {
 }
-- 
2.25.1


* [PATCH 12/32] kasan: introduce kasan_init_cache_meta
  2022-06-13 20:13 [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata andrey.konovalov
                   ` (10 preceding siblings ...)
  2022-06-13 20:14 ` [PATCH 11/32] kasan: introduce kasan_requires_meta andrey.konovalov
@ 2022-06-13 20:14 ` andrey.konovalov
  2022-06-13 20:14 ` [PATCH 13/32] kasan: drop CONFIG_KASAN_GENERIC check from kasan_init_cache_meta andrey.konovalov
                   ` (20 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: andrey.konovalov @ 2022-06-13 20:14 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Peter Collingbourne, Evgenii Stepanov, Florian Mayer,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Add a kasan_init_cache_meta() helper that initializes metadata-related
cache parameters and use this helper in the common KASAN code.

Put the implementation of this new helper into generic.c, as only the
Generic mode uses per-object metadata.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/common.c  | 80 ++--------------------------------------------
 mm/kasan/generic.c | 79 +++++++++++++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h   |  2 ++
 3 files changed, 83 insertions(+), 78 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index a3dee7cead89..8a83ca9ad738 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -117,28 +117,9 @@ void __kasan_poison_pages(struct page *page, unsigned int order, bool init)
 			     KASAN_PAGE_FREE, init);
 }
 
-/*
- * Adaptive redzone policy taken from the userspace AddressSanitizer runtime.
- * For larger allocations larger redzones are used.
- */
-static inline unsigned int optimal_redzone(unsigned int object_size)
-{
-	return
-		object_size <= 64        - 16   ? 16 :
-		object_size <= 128       - 32   ? 32 :
-		object_size <= 512       - 64   ? 64 :
-		object_size <= 4096      - 128  ? 128 :
-		object_size <= (1 << 14) - 256  ? 256 :
-		object_size <= (1 << 15) - 512  ? 512 :
-		object_size <= (1 << 16) - 1024 ? 1024 : 2048;
-}
-
 void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
 			  slab_flags_t *flags)
 {
-	unsigned int ok_size;
-	unsigned int optimal_size;
-
 	/*
 	 * SLAB_KASAN is used to mark caches as ones that are sanitized by
 	 * KASAN. Currently this flag is used in two places:
@@ -148,65 +129,8 @@ void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
 	 */
 	*flags |= SLAB_KASAN;
 
-	if (!kasan_requires_meta())
-		return;
-
-	ok_size = *size;
-
-	/* Add alloc meta into redzone. */
-	cache->kasan_info.alloc_meta_offset = *size;
-	*size += sizeof(struct kasan_alloc_meta);
-
-	/*
-	 * If alloc meta doesn't fit, don't add it.
-	 * This can only happen with SLAB, as it has KMALLOC_MAX_SIZE equal
-	 * to KMALLOC_MAX_CACHE_SIZE and doesn't fall back to page_alloc for
-	 * larger sizes.
-	 */
-	if (*size > KMALLOC_MAX_SIZE) {
-		cache->kasan_info.alloc_meta_offset = 0;
-		*size = ok_size;
-		/* Continue, since free meta might still fit. */
-	}
-
-	/* Only the generic mode uses free meta or flexible redzones. */
-	if (!IS_ENABLED(CONFIG_KASAN_GENERIC)) {
-		cache->kasan_info.free_meta_offset = KASAN_NO_FREE_META;
-		return;
-	}
-
-	/*
-	 * Add free meta into redzone when it's not possible to store
-	 * it in the object. This is the case when:
-	 * 1. Object is SLAB_TYPESAFE_BY_RCU, which means that it can
-	 *    be touched after it was freed, or
-	 * 2. Object has a constructor, which means it's expected to
-	 *    retain its content until the next allocation, or
-	 * 3. Object is too small.
-	 * Otherwise cache->kasan_info.free_meta_offset = 0 is implied.
-	 */
-	if ((cache->flags & SLAB_TYPESAFE_BY_RCU) || cache->ctor ||
-	    cache->object_size < sizeof(struct kasan_free_meta)) {
-		ok_size = *size;
-
-		cache->kasan_info.free_meta_offset = *size;
-		*size += sizeof(struct kasan_free_meta);
-
-		/* If free meta doesn't fit, don't add it. */
-		if (*size > KMALLOC_MAX_SIZE) {
-			cache->kasan_info.free_meta_offset = KASAN_NO_FREE_META;
-			*size = ok_size;
-		}
-	}
-
-	/* Calculate size with optimal redzone. */
-	optimal_size = cache->object_size + optimal_redzone(cache->object_size);
-	/* Limit it with KMALLOC_MAX_SIZE (relevant for SLAB only). */
-	if (optimal_size > KMALLOC_MAX_SIZE)
-		optimal_size = KMALLOC_MAX_SIZE;
-	/* Use optimal size if the size with added metas is not large enough. */
-	if (*size < optimal_size)
-		*size = optimal_size;
+	if (kasan_requires_meta())
+		kasan_init_cache_meta(cache, size);
 }
 
 void __kasan_cache_create_kmalloc(struct kmem_cache *cache)
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index fa654cb96a0d..73aea784040a 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -328,6 +328,85 @@ DEFINE_ASAN_SET_SHADOW(f3);
 DEFINE_ASAN_SET_SHADOW(f5);
 DEFINE_ASAN_SET_SHADOW(f8);
 
+/*
+ * Adaptive redzone policy taken from the userspace AddressSanitizer runtime.
+ * For larger allocations larger redzones are used.
+ */
+static inline unsigned int optimal_redzone(unsigned int object_size)
+{
+	return
+		object_size <= 64        - 16   ? 16 :
+		object_size <= 128       - 32   ? 32 :
+		object_size <= 512       - 64   ? 64 :
+		object_size <= 4096      - 128  ? 128 :
+		object_size <= (1 << 14) - 256  ? 256 :
+		object_size <= (1 << 15) - 512  ? 512 :
+		object_size <= (1 << 16) - 1024 ? 1024 : 2048;
+}
+
+void kasan_init_cache_meta(struct kmem_cache *cache, unsigned int *size)
+{
+	unsigned int ok_size;
+	unsigned int optimal_size;
+
+	ok_size = *size;
+
+	/* Add alloc meta into redzone. */
+	cache->kasan_info.alloc_meta_offset = *size;
+	*size += sizeof(struct kasan_alloc_meta);
+
+	/*
+	 * If alloc meta doesn't fit, don't add it.
+	 * This can only happen with SLAB, as it has KMALLOC_MAX_SIZE equal
+	 * to KMALLOC_MAX_CACHE_SIZE and doesn't fall back to page_alloc for
+	 * larger sizes.
+	 */
+	if (*size > KMALLOC_MAX_SIZE) {
+		cache->kasan_info.alloc_meta_offset = 0;
+		*size = ok_size;
+		/* Continue, since free meta might still fit. */
+	}
+
+	/* Only the generic mode uses free meta or flexible redzones. */
+	if (!IS_ENABLED(CONFIG_KASAN_GENERIC)) {
+		cache->kasan_info.free_meta_offset = KASAN_NO_FREE_META;
+		return;
+	}
+
+	/*
+	 * Add free meta into redzone when it's not possible to store
+	 * it in the object. This is the case when:
+	 * 1. Object is SLAB_TYPESAFE_BY_RCU, which means that it can
+	 *    be touched after it was freed, or
+	 * 2. Object has a constructor, which means it's expected to
+	 *    retain its content until the next allocation, or
+	 * 3. Object is too small.
+	 * Otherwise cache->kasan_info.free_meta_offset = 0 is implied.
+	 */
+	if ((cache->flags & SLAB_TYPESAFE_BY_RCU) || cache->ctor ||
+	    cache->object_size < sizeof(struct kasan_free_meta)) {
+		ok_size = *size;
+
+		cache->kasan_info.free_meta_offset = *size;
+		*size += sizeof(struct kasan_free_meta);
+
+		/* If free meta doesn't fit, don't add it. */
+		if (*size > KMALLOC_MAX_SIZE) {
+			cache->kasan_info.free_meta_offset = KASAN_NO_FREE_META;
+			*size = ok_size;
+		}
+	}
+
+	/* Calculate size with optimal redzone. */
+	optimal_size = cache->object_size + optimal_redzone(cache->object_size);
+	/* Limit it with KMALLOC_MAX_SIZE (relevant for SLAB only). */
+	if (optimal_size > KMALLOC_MAX_SIZE)
+		optimal_size = KMALLOC_MAX_SIZE;
+	/* Use optimal size if the size with added metas is not large enough. */
+	if (*size < optimal_size)
+		*size = optimal_size;
+}
+
 struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache,
 					      const void *object)
 {
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index cf123d99f2fe..ab2cd3ff10f3 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -296,12 +296,14 @@ struct page *kasan_addr_to_page(const void *addr);
 struct slab *kasan_addr_to_slab(const void *addr);
 
 #ifdef CONFIG_KASAN_GENERIC
+void kasan_init_cache_meta(struct kmem_cache *cache, unsigned int *size);
 void kasan_init_object_meta(struct kmem_cache *cache, const void *object);
 struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache,
 						const void *object);
 struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache,
 						const void *object);
 #else
+static inline void kasan_init_cache_meta(struct kmem_cache *cache, unsigned int *size) { }
 static inline void kasan_init_object_meta(struct kmem_cache *cache, const void *object) { }
 #endif
 
-- 
2.25.1
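
An illustrative walk-through of the adaptive redzone policy added above
(editor's arithmetic, derived directly from the thresholds in
optimal_redzone(); not part of the patch):

	object_size =    48 -> 16-byte redzone   (48   + 16   <= 64)
	object_size =    90 -> 32-byte redzone   (90   + 32   <= 128)
	object_size =  1000 -> 128-byte redzone  (1000 + 128  <= 4096)
	object_size = 70000 -> 2048-byte redzone (past the last threshold)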



* [PATCH 13/32] kasan: drop CONFIG_KASAN_GENERIC check from kasan_init_cache_meta
  2022-06-13 20:13 [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata andrey.konovalov
                   ` (11 preceding siblings ...)
  2022-06-13 20:14 ` [PATCH 12/32] kasan: introduce kasan_init_cache_meta andrey.konovalov
@ 2022-06-13 20:14 ` andrey.konovalov
  2022-06-13 20:14 ` [PATCH 14/32] kasan: only define kasan_metadata_size for Generic mode andrey.konovalov
                   ` (19 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: andrey.konovalov @ 2022-06-13 20:14 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Peter Collingbourne, Evgenii Stepanov, Florian Mayer,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

As kasan_init_cache_meta() is only defined for the Generic mode, it does
not require the CONFIG_KASAN_GENERIC check.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/generic.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index 73aea784040a..5125fad76f70 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -367,12 +367,6 @@ void kasan_init_cache_meta(struct kmem_cache *cache, unsigned int *size)
 		/* Continue, since free meta might still fit. */
 	}
 
-	/* Only the generic mode uses free meta or flexible redzones. */
-	if (!IS_ENABLED(CONFIG_KASAN_GENERIC)) {
-		cache->kasan_info.free_meta_offset = KASAN_NO_FREE_META;
-		return;
-	}
-
 	/*
 	 * Add free meta into redzone when it's not possible to store
 	 * it in the object. This is the case when:
-- 
2.25.1



* [PATCH 14/32] kasan: only define kasan_metadata_size for Generic mode
  2022-06-13 20:13 [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata andrey.konovalov
                   ` (12 preceding siblings ...)
  2022-06-13 20:14 ` [PATCH 13/32] kasan: drop CONFIG_KASAN_GENERIC check from kasan_init_cache_meta andrey.konovalov
@ 2022-06-13 20:14 ` andrey.konovalov
  2022-06-13 20:14 ` [PATCH 15/32] kasan: only define kasan_never_merge " andrey.konovalov
                   ` (18 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: andrey.konovalov @ 2022-06-13 20:14 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Peter Collingbourne, Evgenii Stepanov, Florian Mayer,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

KASAN provides a helper for calculating the size of per-object metadata
stored in the redzone.

As now only the Generic mode uses per-object metadata, only define
kasan_metadata_size() for this mode.
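
The helper's typical consumer is SLUB's debug code, which steps over this
metadata when locating the padding after an object; roughly (paraphrased
from mm/slub.c's padding checks, offsets simplified, not verbatim):

	unsigned long off = get_info_end(s);	/* end of object + debug info */

	if (s->flags & SLAB_STORE_USER)
		off += 2 * sizeof(struct track);

	/* Step over per-object KASAN metadata stored in the redzone. */
	off += kasan_metadata_size(s);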

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 include/linux/kasan.h | 17 ++++++++---------
 mm/kasan/common.c     | 11 -----------
 mm/kasan/generic.c    | 11 +++++++++++
 3 files changed, 19 insertions(+), 20 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index b092277bf48d..027df7599573 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -150,14 +150,6 @@ static __always_inline void kasan_cache_create_kmalloc(struct kmem_cache *cache)
 		__kasan_cache_create_kmalloc(cache);
 }
 
-size_t __kasan_metadata_size(struct kmem_cache *cache);
-static __always_inline size_t kasan_metadata_size(struct kmem_cache *cache)
-{
-	if (kasan_enabled())
-		return __kasan_metadata_size(cache);
-	return 0;
-}
-
 void __kasan_poison_slab(struct slab *slab);
 static __always_inline void kasan_poison_slab(struct slab *slab)
 {
@@ -282,7 +274,6 @@ static inline void kasan_cache_create(struct kmem_cache *cache,
 				      unsigned int *size,
 				      slab_flags_t *flags) {}
 static inline void kasan_cache_create_kmalloc(struct kmem_cache *cache) {}
-static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }
 static inline void kasan_poison_slab(struct slab *slab) {}
 static inline void kasan_unpoison_object_data(struct kmem_cache *cache,
 					void *object) {}
@@ -333,6 +324,8 @@ static inline void kasan_unpoison_task_stack(struct task_struct *task) {}
 
 #ifdef CONFIG_KASAN_GENERIC
 
+size_t kasan_metadata_size(struct kmem_cache *cache);
+
 void kasan_cache_shrink(struct kmem_cache *cache);
 void kasan_cache_shutdown(struct kmem_cache *cache);
 void kasan_record_aux_stack(void *ptr);
@@ -340,6 +333,12 @@ void kasan_record_aux_stack_noalloc(void *ptr);
 
 #else /* CONFIG_KASAN_GENERIC */
 
+/* Tag-based KASAN modes do not use per-object metadata. */
+static inline size_t kasan_metadata_size(struct kmem_cache *cache)
+{
+	return 0;
+}
+
 static inline void kasan_cache_shrink(struct kmem_cache *cache) {}
 static inline void kasan_cache_shutdown(struct kmem_cache *cache) {}
 static inline void kasan_record_aux_stack(void *ptr) {}
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 8a83ca9ad738..a0ddbf02aa6d 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -138,17 +138,6 @@ void __kasan_cache_create_kmalloc(struct kmem_cache *cache)
 	cache->kasan_info.is_kmalloc = true;
 }
 
-size_t __kasan_metadata_size(struct kmem_cache *cache)
-{
-	if (!kasan_requires_meta())
-		return 0;
-	return (cache->kasan_info.alloc_meta_offset ?
-		sizeof(struct kasan_alloc_meta) : 0) +
-		((cache->kasan_info.free_meta_offset &&
-		  cache->kasan_info.free_meta_offset != KASAN_NO_FREE_META) ?
-		 sizeof(struct kasan_free_meta) : 0);
-}
-
 void __kasan_poison_slab(struct slab *slab)
 {
 	struct page *page = slab_page(slab);
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index 5125fad76f70..806ab92032c3 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -427,6 +427,17 @@ void kasan_init_object_meta(struct kmem_cache *cache, const void *object)
 		__memset(alloc_meta, 0, sizeof(*alloc_meta));
 }
 
+size_t kasan_metadata_size(struct kmem_cache *cache)
+{
+	if (!kasan_requires_meta())
+		return 0;
+	return (cache->kasan_info.alloc_meta_offset ?
+		sizeof(struct kasan_alloc_meta) : 0) +
+		((cache->kasan_info.free_meta_offset &&
+		  cache->kasan_info.free_meta_offset != KASAN_NO_FREE_META) ?
+		 sizeof(struct kasan_free_meta) : 0);
+}
+
 static void __kasan_record_aux_stack(void *addr, bool can_alloc)
 {
 	struct slab *slab = kasan_addr_to_slab(addr);
-- 
2.25.1



* [PATCH 15/32] kasan: only define kasan_never_merge for Generic mode
  2022-06-13 20:13 [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata andrey.konovalov
                   ` (13 preceding siblings ...)
  2022-06-13 20:14 ` [PATCH 14/32] kasan: only define kasan_metadata_size for Generic mode andrey.konovalov
@ 2022-06-13 20:14 ` andrey.konovalov
  2022-06-13 20:14 ` [PATCH 16/32] kasan: only define metadata offsets " andrey.konovalov
                   ` (17 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: andrey.konovalov @ 2022-06-13 20:14 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Peter Collingbourne, Evgenii Stepanov, Florian Mayer,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

KASAN prevents merging of slab caches whose objects have per-object
metadata stored in redzones.

As now only the Generic mode uses per-object metadata, define
kasan_never_merge() only for this mode.
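
For reference, the consumer side in mm/slab_common.c folds the returned
flag into its never-merge mask, roughly (abridged; other flags elided):

	#define SLAB_NEVER_MERGE (SLAB_RED_ZONE | SLAB_TYPESAFE_BY_RCU | \
				  SLAB_NOLEAKTRACE | SLAB_FAILSLAB | \
				  kasan_never_merge())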

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 include/linux/kasan.h | 18 ++++++------------
 mm/kasan/common.c     |  8 --------
 mm/kasan/generic.c    |  8 ++++++++
 3 files changed, 14 insertions(+), 20 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 027df7599573..9743d4b3a918 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -103,14 +103,6 @@ struct kasan_cache {
 	bool is_kmalloc;
 };
 
-slab_flags_t __kasan_never_merge(void);
-static __always_inline slab_flags_t kasan_never_merge(void)
-{
-	if (kasan_enabled())
-		return __kasan_never_merge();
-	return 0;
-}
-
 void __kasan_unpoison_range(const void *addr, size_t size);
 static __always_inline void kasan_unpoison_range(const void *addr, size_t size)
 {
@@ -261,10 +253,6 @@ static __always_inline bool kasan_check_byte(const void *addr)
 
 #else /* CONFIG_KASAN */
 
-static inline slab_flags_t kasan_never_merge(void)
-{
-	return 0;
-}
 static inline void kasan_unpoison_range(const void *address, size_t size) {}
 static inline void kasan_poison_pages(struct page *page, unsigned int order,
 				      bool init) {}
@@ -325,6 +313,7 @@ static inline void kasan_unpoison_task_stack(struct task_struct *task) {}
 #ifdef CONFIG_KASAN_GENERIC
 
 size_t kasan_metadata_size(struct kmem_cache *cache);
+slab_flags_t kasan_never_merge(void);
 
 void kasan_cache_shrink(struct kmem_cache *cache);
 void kasan_cache_shutdown(struct kmem_cache *cache);
@@ -338,6 +327,11 @@ static inline size_t kasan_metadata_size(struct kmem_cache *cache)
 {
 	return 0;
 }
+/* And thus nothing prevents cache merging. */
+static inline slab_flags_t kasan_never_merge(void)
+{
+	return 0;
+}
 
 static inline void kasan_cache_shrink(struct kmem_cache *cache) {}
 static inline void kasan_cache_shutdown(struct kmem_cache *cache) {}
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index a0ddbf02aa6d..f8ef40fa31e3 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -88,14 +88,6 @@ asmlinkage void kasan_unpoison_task_stack_below(const void *watermark)
 }
 #endif /* CONFIG_KASAN_STACK */
 
-/* Only allow cache merging when no per-object metadata is present. */
-slab_flags_t __kasan_never_merge(void)
-{
-	if (kasan_requires_meta())
-		return SLAB_KASAN;
-	return 0;
-}
-
 void __kasan_unpoison_pages(struct page *page, unsigned int order, bool init)
 {
 	u8 tag;
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index 806ab92032c3..25333bf3c99f 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -328,6 +328,14 @@ DEFINE_ASAN_SET_SHADOW(f3);
 DEFINE_ASAN_SET_SHADOW(f5);
 DEFINE_ASAN_SET_SHADOW(f8);
 
+/* Only allow cache merging when no per-object metadata is present. */
+slab_flags_t kasan_never_merge(void)
+{
+	if (!kasan_requires_meta())
+		return 0;
+	return SLAB_KASAN;
+}
+
 /*
  * Adaptive redzone policy taken from the userspace AddressSanitizer runtime.
  * For larger allocations larger redzones are used.
-- 
2.25.1



* [PATCH 16/32] kasan: only define metadata offsets for Generic mode
  2022-06-13 20:13 [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata andrey.konovalov
                   ` (14 preceding siblings ...)
  2022-06-13 20:14 ` [PATCH 15/32] kasan: only define kasan_never_merge " andrey.konovalov
@ 2022-06-13 20:14 ` andrey.konovalov
  2022-06-13 20:14 ` [PATCH 17/32] kasan: only define metadata structs " andrey.konovalov
                   ` (16 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: andrey.konovalov @ 2022-06-13 20:14 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Peter Collingbourne, Evgenii Stepanov, Florian Mayer,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Hide the definitions of alloc_meta_offset and free_meta_offset under
an ifdef CONFIG_KASAN_GENERIC check, as these fields are now only used
when the Generic mode is enabled.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 include/linux/kasan.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 9743d4b3a918..a212c2e3f32d 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -98,8 +98,10 @@ static inline bool kasan_has_integrated_init(void)
 #ifdef CONFIG_KASAN
 
 struct kasan_cache {
+#ifdef CONFIG_KASAN_GENERIC
 	int alloc_meta_offset;
 	int free_meta_offset;
+#endif
 	bool is_kmalloc;
 };
 
-- 
2.25.1



* [PATCH 17/32] kasan: only define metadata structs for Generic mode
  2022-06-13 20:13 [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata andrey.konovalov
                   ` (15 preceding siblings ...)
  2022-06-13 20:14 ` [PATCH 16/32] kasan: only define metadata offsets " andrey.konovalov
@ 2022-06-13 20:14 ` andrey.konovalov
  2022-06-13 20:14 ` [PATCH 18/32] kasan: only define kasan_cache_create " andrey.konovalov
                   ` (15 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: andrey.konovalov @ 2022-06-13 20:14 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Peter Collingbourne, Evgenii Stepanov, Florian Mayer,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Hide the definitions of kasan_alloc_meta and kasan_free_meta under
an ifdef CONFIG_KASAN_GENERIC check, as these structures are now only
used when the Generic mode is enabled.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/kasan.h | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index ab2cd3ff10f3..30ec9ebf52c3 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -192,14 +192,12 @@ struct kasan_track {
 	depot_stack_handle_t stack;
 };
 
+#ifdef CONFIG_KASAN_GENERIC
+
 struct kasan_alloc_meta {
 	struct kasan_track alloc_track;
-	/* Generic mode stores free track in kasan_free_meta. */
-#ifdef CONFIG_KASAN_GENERIC
+	/* Free track is stored in kasan_free_meta. */
 	depot_stack_handle_t aux_stack[2];
-#else
-	struct kasan_track free_track;
-#endif
 };
 
 struct qlist_node {
@@ -218,12 +216,12 @@ struct qlist_node {
  * After that, slab allocator stores the freelist pointer in the object.
  */
 struct kasan_free_meta {
-#ifdef CONFIG_KASAN_GENERIC
 	struct qlist_node quarantine_link;
 	struct kasan_track free_track;
-#endif
 };
 
+#endif /* CONFIG_KASAN_GENERIC */
+
 #if IS_ENABLED(CONFIG_KASAN_KUNIT_TEST)
 /* Used in KUnit-compatible KASAN tests. */
 struct kunit_kasan_status {
-- 
2.25.1



* [PATCH 18/32] kasan: only define kasan_cache_create for Generic mode
  2022-06-13 20:13 [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata andrey.konovalov
                   ` (16 preceding siblings ...)
  2022-06-13 20:14 ` [PATCH 17/32] kasan: only define metadata structs " andrey.konovalov
@ 2022-06-13 20:14 ` andrey.konovalov
  2022-06-13 20:14 ` [PATCH 19/32] kasan: pass tagged pointers to kasan_save_alloc/free_info andrey.konovalov
                   ` (14 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: andrey.konovalov @ 2022-06-13 20:14 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Peter Collingbourne, Evgenii Stepanov, Florian Mayer,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Right now, kasan_cache_create() assigns SLAB_KASAN for all KASAN modes
and then sets up metadata-related cache parameters for the Generic mode.

SLAB_KASAN is used in two places:

1. In slab_ksize() to account for per-object metadata when
   calculating the size of the accessible memory within the object.
2. In slab_common.c via kasan_never_merge() to prevent merging of
   caches with per-object metadata.

Both cases are only relevant when per-object metadata is present, which
is only the case with the Generic mode.

Thus, assign SLAB_KASAN and define kasan_cache_create() only for the
Generic mode.

Also update the SLAB_KASAN-related comment.
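
A condensed sketch of use case 1 (paraphrasing slab_ksize() from
mm/slab.h; abridged, not the verbatim code):

	static inline size_t slab_ksize(const struct kmem_cache *s)
	{
		/* KASAN metadata in the redzone is not accessible memory. */
		if (s->flags & SLAB_KASAN)
			return s->object_size;
		/* ... other debug-flag cases elided ... */
		return s->size;
	}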

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 include/linux/kasan.h | 18 ++++++------------
 include/linux/slab.h  |  2 +-
 mm/kasan/common.c     | 16 ----------------
 mm/kasan/generic.c    | 17 ++++++++++++++++-
 4 files changed, 23 insertions(+), 30 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index a212c2e3f32d..d811b3d7d2a1 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -128,15 +128,6 @@ static __always_inline void kasan_unpoison_pages(struct page *page,
 		__kasan_unpoison_pages(page, order, init);
 }
 
-void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
-				slab_flags_t *flags);
-static __always_inline void kasan_cache_create(struct kmem_cache *cache,
-				unsigned int *size, slab_flags_t *flags)
-{
-	if (kasan_enabled())
-		__kasan_cache_create(cache, size, flags);
-}
-
 void __kasan_cache_create_kmalloc(struct kmem_cache *cache);
 static __always_inline void kasan_cache_create_kmalloc(struct kmem_cache *cache)
 {
@@ -260,9 +251,6 @@ static inline void kasan_poison_pages(struct page *page, unsigned int order,
 				      bool init) {}
 static inline void kasan_unpoison_pages(struct page *page, unsigned int order,
 					bool init) {}
-static inline void kasan_cache_create(struct kmem_cache *cache,
-				      unsigned int *size,
-				      slab_flags_t *flags) {}
 static inline void kasan_cache_create_kmalloc(struct kmem_cache *cache) {}
 static inline void kasan_poison_slab(struct slab *slab) {}
 static inline void kasan_unpoison_object_data(struct kmem_cache *cache,
@@ -316,6 +304,8 @@ static inline void kasan_unpoison_task_stack(struct task_struct *task) {}
 
 size_t kasan_metadata_size(struct kmem_cache *cache);
 slab_flags_t kasan_never_merge(void);
+void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
+			slab_flags_t *flags);
 
 void kasan_cache_shrink(struct kmem_cache *cache);
 void kasan_cache_shutdown(struct kmem_cache *cache);
@@ -334,6 +324,10 @@ static inline slab_flags_t kasan_never_merge(void)
 {
 	return 0;
 }
+/* And no cache-related metadata initialization is required. */
+static inline void kasan_cache_create(struct kmem_cache *cache,
+				      unsigned int *size,
+				      slab_flags_t *flags) {}
 
 static inline void kasan_cache_shrink(struct kmem_cache *cache) {}
 static inline void kasan_cache_shutdown(struct kmem_cache *cache) {}
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 0fefdf528e0d..1c6b7362e82b 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -106,7 +106,7 @@
 # define SLAB_ACCOUNT		0
 #endif
 
-#ifdef CONFIG_KASAN
+#ifdef CONFIG_KASAN_GENERIC
 #define SLAB_KASAN		((slab_flags_t __force)0x08000000U)
 #else
 #define SLAB_KASAN		0
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index f8ef40fa31e3..f937b6c9e86a 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -109,22 +109,6 @@ void __kasan_poison_pages(struct page *page, unsigned int order, bool init)
 			     KASAN_PAGE_FREE, init);
 }
 
-void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
-			  slab_flags_t *flags)
-{
-	/*
-	 * SLAB_KASAN is used to mark caches as ones that are sanitized by
-	 * KASAN. Currently this flag is used in two places:
-	 * 1. In slab_ksize() when calculating the size of the accessible
-	 *    memory within the object.
-	 * 2. In slab_common.c to prevent merging of sanitized caches.
-	 */
-	*flags |= SLAB_KASAN;
-
-	if (kasan_requires_meta())
-		kasan_init_cache_meta(cache, size);
-}
-
 void __kasan_cache_create_kmalloc(struct kmem_cache *cache)
 {
 	cache->kasan_info.is_kmalloc = true;
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index 25333bf3c99f..f6bef347de87 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -352,11 +352,26 @@ static inline unsigned int optimal_redzone(unsigned int object_size)
 		object_size <= (1 << 16) - 1024 ? 1024 : 2048;
 }
 
-void kasan_init_cache_meta(struct kmem_cache *cache, unsigned int *size)
+void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
+			  slab_flags_t *flags)
 {
 	unsigned int ok_size;
 	unsigned int optimal_size;
 
+	if (!kasan_requires_meta())
+		return;
+
+	/*
+	 * SLAB_KASAN is used to mark caches that are sanitized by KASAN
+	 * and that thus have per-object metadata.
+	 * Currently this flag is used in two places:
+	 * 1. In slab_ksize() to account for per-object metadata when
+	 *    calculating the size of the accessible memory within the object.
+	 * 2. In slab_common.c via kasan_never_merge() to prevent merging of
+	 *    caches with per-object metadata.
+	 */
+	*flags |= SLAB_KASAN;
+
 	ok_size = *size;
 
 	/* Add alloc meta into redzone. */
-- 
2.25.1



* [PATCH 19/32] kasan: pass tagged pointers to kasan_save_alloc/free_info
  2022-06-13 20:13 [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata andrey.konovalov
                   ` (17 preceding siblings ...)
  2022-06-13 20:14 ` [PATCH 18/32] kasan: only define kasan_cache_create " andrey.konovalov
@ 2022-06-13 20:14 ` andrey.konovalov
  2022-06-20  9:54   ` Marco Elver
  2022-06-13 20:14 ` [PATCH 20/32] kasan: move kasan_get_alloc/free_track definitions andrey.konovalov
                   ` (13 subsequent siblings)
  32 siblings, 1 reply; 54+ messages in thread
From: andrey.konovalov @ 2022-06-13 20:14 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Peter Collingbourne, Evgenii Stepanov, Florian Mayer,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Pass tagged pointers to kasan_save_alloc/free_info().

This is a preparatory patch to simplify other changes in the series.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/common.c  | 4 ++--
 mm/kasan/generic.c | 3 +--
 mm/kasan/kasan.h   | 2 +-
 mm/kasan/tags.c    | 3 +--
 4 files changed, 5 insertions(+), 7 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index f937b6c9e86a..519fd0b3040b 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -227,7 +227,7 @@ static inline bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
 		return false;
 
 	if (kasan_stack_collection_enabled())
-		kasan_save_free_info(cache, object, tag);
+		kasan_save_free_info(cache, tagged_object);
 
 	return kasan_quarantine_put(cache, object);
 }
@@ -316,7 +316,7 @@ void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
 
 	/* Save alloc info (if possible) for non-kmalloc() allocations. */
 	if (kasan_stack_collection_enabled() && !cache->kasan_info.is_kmalloc)
-		kasan_save_alloc_info(cache, (void *)object, flags);
+		kasan_save_alloc_info(cache, tagged_object, flags);
 
 	return tagged_object;
 }
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index f6bef347de87..aff39af3c532 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -500,8 +500,7 @@ void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
 		kasan_set_track(&alloc_meta->alloc_track, flags);
 }
 
-void kasan_save_free_info(struct kmem_cache *cache,
-				void *object, u8 tag)
+void kasan_save_free_info(struct kmem_cache *cache, void *object)
 {
 	struct kasan_free_meta *free_meta;
 
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 30ec9ebf52c3..e8329935fbfb 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -308,7 +308,7 @@ static inline void kasan_init_object_meta(struct kmem_cache *cache, const void *
 depot_stack_handle_t kasan_save_stack(gfp_t flags, bool can_alloc);
 void kasan_set_track(struct kasan_track *track, gfp_t flags);
 void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags);
-void kasan_save_free_info(struct kmem_cache *cache, void *object, u8 tag);
+void kasan_save_free_info(struct kmem_cache *cache, void *object);
 struct kasan_track *kasan_get_alloc_track(struct kmem_cache *cache,
 						void *object);
 struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
index 4f24669085e9..fd11d10a4ffc 100644
--- a/mm/kasan/tags.c
+++ b/mm/kasan/tags.c
@@ -21,8 +21,7 @@ void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
 {
 }
 
-void kasan_save_free_info(struct kmem_cache *cache,
-				void *object, u8 tag)
+void kasan_save_free_info(struct kmem_cache *cache, void *object)
 {
 }
 
-- 
2.25.1



* [PATCH 20/32] kasan: move kasan_get_alloc/free_track definitions
  2022-06-13 20:13 [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata andrey.konovalov
                   ` (18 preceding siblings ...)
  2022-06-13 20:14 ` [PATCH 19/32] kasan: pass tagged pointers to kasan_save_alloc/free_info andrey.konovalov
@ 2022-06-13 20:14 ` andrey.konovalov
  2022-06-13 20:14 ` [PATCH 21/32] kasan: simplify invalid-free reporting andrey.konovalov
                   ` (12 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: andrey.konovalov @ 2022-06-13 20:14 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Peter Collingbourne, Evgenii Stepanov, Florian Mayer,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Move the definitions of kasan_get_alloc/free_track() to report_*.c, as
they belong with the rest of the reporting code.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/generic.c        | 21 ---------------------
 mm/kasan/report_generic.c | 21 +++++++++++++++++++++
 mm/kasan/report_tags.c    | 12 ++++++++++++
 mm/kasan/tags.c           | 12 ------------
 4 files changed, 33 insertions(+), 33 deletions(-)

diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index aff39af3c532..d8b5590f9484 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -512,24 +512,3 @@ void kasan_save_free_info(struct kmem_cache *cache, void *object)
 	/* The object was freed and has free track set. */
 	*(u8 *)kasan_mem_to_shadow(object) = KASAN_SLAB_FREETRACK;
 }
-
-struct kasan_track *kasan_get_alloc_track(struct kmem_cache *cache,
-						void *object)
-{
-	struct kasan_alloc_meta *alloc_meta;
-
-	alloc_meta = kasan_get_alloc_meta(cache, object);
-	if (!alloc_meta)
-		return NULL;
-
-	return &alloc_meta->alloc_track;
-}
-
-struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
-						void *object, u8 tag)
-{
-	if (*(u8 *)kasan_mem_to_shadow(object) != KASAN_SLAB_FREETRACK)
-		return NULL;
-	/* Free meta must be present with KASAN_SLAB_FREETRACK. */
-	return &kasan_get_free_meta(cache, object)->free_track;
-}
diff --git a/mm/kasan/report_generic.c b/mm/kasan/report_generic.c
index 348dc207d462..74d21786ef09 100644
--- a/mm/kasan/report_generic.c
+++ b/mm/kasan/report_generic.c
@@ -127,6 +127,27 @@ const char *kasan_get_bug_type(struct kasan_report_info *info)
 	return get_wild_bug_type(info);
 }
 
+struct kasan_track *kasan_get_alloc_track(struct kmem_cache *cache,
+						void *object)
+{
+	struct kasan_alloc_meta *alloc_meta;
+
+	alloc_meta = kasan_get_alloc_meta(cache, object);
+	if (!alloc_meta)
+		return NULL;
+
+	return &alloc_meta->alloc_track;
+}
+
+struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
+						void *object, u8 tag)
+{
+	if (*(u8 *)kasan_mem_to_shadow(object) != KASAN_SLAB_FREETRACK)
+		return NULL;
+	/* Free meta must be present with KASAN_SLAB_FREETRACK. */
+	return &kasan_get_free_meta(cache, object)->free_track;
+}
+
 void kasan_metadata_fetch_row(char *buffer, void *row)
 {
 	memcpy(buffer, kasan_mem_to_shadow(row), META_BYTES_PER_ROW);
diff --git a/mm/kasan/report_tags.c b/mm/kasan/report_tags.c
index 35cf3cae4aa4..79b6497d8a81 100644
--- a/mm/kasan/report_tags.c
+++ b/mm/kasan/report_tags.c
@@ -21,3 +21,15 @@ const char *kasan_get_bug_type(struct kasan_report_info *info)
 
 	return "invalid-access";
 }
+
+struct kasan_track *kasan_get_alloc_track(struct kmem_cache *cache,
+						void *object)
+{
+	return NULL;
+}
+
+struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
+						void *object, u8 tag)
+{
+	return NULL;
+}
diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
index fd11d10a4ffc..39a0481e5228 100644
--- a/mm/kasan/tags.c
+++ b/mm/kasan/tags.c
@@ -24,15 +24,3 @@ void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
 void kasan_save_free_info(struct kmem_cache *cache, void *object)
 {
 }
-
-struct kasan_track *kasan_get_alloc_track(struct kmem_cache *cache,
-						void *object)
-{
-	return NULL;
-}
-
-struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
-						void *object, u8 tag)
-{
-	return NULL;
-}
-- 
2.25.1



* [PATCH 21/32] kasan: simplify invalid-free reporting
  2022-06-13 20:13 [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata andrey.konovalov
                   ` (19 preceding siblings ...)
  2022-06-13 20:14 ` [PATCH 20/32] kasan: move kasan_get_alloc/free_track definitions andrey.konovalov
@ 2022-06-13 20:14 ` andrey.konovalov
  2022-06-21  7:17   ` Kuan-Ying Lee
  2022-06-13 20:14 ` [PATCH 22/32] kasan: cosmetic changes in report.c andrey.konovalov
                   ` (11 subsequent siblings)
  32 siblings, 1 reply; 54+ messages in thread
From: andrey.konovalov @ 2022-06-13 20:14 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Peter Collingbourne, Evgenii Stepanov, Florian Mayer,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Right now, KASAN uses the kasan_report_type enum to describe report types.

As this enum only has two options, replace it with a bool variable.

Also, unify printing the report header for invalid-free and other bug types
in print_error_description().
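
With the unified header, both report paths now begin with the same line
format, e.g. (symbols and offsets illustrative):

	BUG: KASAN: double-free or invalid-free in kfree+0x8c/0x230
	BUG: KASAN: slab-out-of-bounds in test_read+0x44/0x1a0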

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/kasan.h  |  7 +------
 mm/kasan/report.c | 16 +++++++---------
 2 files changed, 8 insertions(+), 15 deletions(-)

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index e8329935fbfb..f696d50b09fb 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -146,16 +146,11 @@ static inline bool kasan_requires_meta(void)
 #define META_MEM_BYTES_PER_ROW (META_BYTES_PER_ROW * KASAN_GRANULE_SIZE)
 #define META_ROWS_AROUND_ADDR 2
 
-enum kasan_report_type {
-	KASAN_REPORT_ACCESS,
-	KASAN_REPORT_INVALID_FREE,
-};
-
 struct kasan_report_info {
-	enum kasan_report_type type;
 	void *access_addr;
 	void *first_bad_addr;
 	size_t access_size;
+	bool is_free;
 	bool is_write;
 	unsigned long ip;
 };
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index f951fd39db74..7269b6249488 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -175,14 +175,12 @@ static void end_report(unsigned long *flags, void *addr)
 
 static void print_error_description(struct kasan_report_info *info)
 {
-	if (info->type == KASAN_REPORT_INVALID_FREE) {
-		pr_err("BUG: KASAN: double-free or invalid-free in %pS\n",
-		       (void *)info->ip);
-		return;
-	}
+	const char *bug_type = info->is_free ?
+		"double-free or invalid-free" : kasan_get_bug_type(info);
 
-	pr_err("BUG: KASAN: %s in %pS\n",
-		kasan_get_bug_type(info), (void *)info->ip);
+	pr_err("BUG: KASAN: %s in %pS\n", bug_type, (void *)info->ip);
+	if (info->is_free)
+		return;
 	if (info->access_size)
 		pr_err("%s of size %zu at addr %px by task %s/%d\n",
 			info->is_write ? "Write" : "Read", info->access_size,
@@ -435,11 +433,11 @@ void kasan_report_invalid_free(void *ptr, unsigned long ip)
 
 	start_report(&flags, true);
 
-	info.type = KASAN_REPORT_INVALID_FREE;
 	info.access_addr = ptr;
 	info.first_bad_addr = kasan_reset_tag(ptr);
 	info.access_size = 0;
 	info.is_write = false;
+	info.is_free = true;
 	info.ip = ip;
 
 	print_report(&info);
@@ -468,11 +466,11 @@ bool kasan_report(unsigned long addr, size_t size, bool is_write,
 
 	start_report(&irq_flags, true);
 
-	info.type = KASAN_REPORT_ACCESS;
 	info.access_addr = ptr;
 	info.first_bad_addr = kasan_find_first_bad_addr(ptr, size);
 	info.access_size = size;
 	info.is_write = is_write;
+	info.is_free = false;
 	info.ip = ip;
 
 	print_report(&info);
-- 
2.25.1



* [PATCH 22/32] kasan: cosmetic changes in report.c
  2022-06-13 20:13 [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata andrey.konovalov
                   ` (20 preceding siblings ...)
  2022-06-13 20:14 ` [PATCH 21/32] kasan: simplify invalid-free reporting andrey.konovalov
@ 2022-06-13 20:14 ` andrey.konovalov
  2022-06-13 20:14 ` [PATCH 23/32] kasan: use kasan_addr_to_slab in print_address_description andrey.konovalov
                   ` (10 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: andrey.konovalov @ 2022-06-13 20:14 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Peter Collingbourne, Evgenii Stepanov, Florian Mayer,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Apply a few non-functional style fixes to the code in report.c.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/report.c | 11 ++++-------
 1 file changed, 4 insertions(+), 7 deletions(-)

diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 7269b6249488..879f949dc395 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -194,25 +194,22 @@ static void print_error_description(struct kasan_report_info *info)
 static void print_track(struct kasan_track *track, const char *prefix)
 {
 	pr_err("%s by task %u:\n", prefix, track->pid);
-	if (track->stack) {
+	if (track->stack)
 		stack_depot_print(track->stack);
-	} else {
+	else
 		pr_err("(stack is not available)\n");
-	}
 }
 
 struct page *kasan_addr_to_page(const void *addr)
 {
-	if ((addr >= (void *)PAGE_OFFSET) &&
-			(addr < high_memory))
+	if ((addr >= (void *)PAGE_OFFSET) && (addr < high_memory))
 		return virt_to_head_page(addr);
 	return NULL;
 }
 
 struct slab *kasan_addr_to_slab(const void *addr)
 {
-	if ((addr >= (void *)PAGE_OFFSET) &&
-			(addr < high_memory))
+	if ((addr >= (void *)PAGE_OFFSET) && (addr < high_memory))
 		return virt_to_slab(addr);
 	return NULL;
 }
-- 
2.25.1



* [PATCH 23/32] kasan: use kasan_addr_to_slab in print_address_description
  2022-06-13 20:13 [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata andrey.konovalov
                   ` (21 preceding siblings ...)
  2022-06-13 20:14 ` [PATCH 22/32] kasan: cosmetic changes in report.c andrey.konovalov
@ 2022-06-13 20:14 ` andrey.konovalov
  2022-06-13 20:14 ` [PATCH 24/32] kasan: move kasan_addr_to_slab to common.c andrey.konovalov
                   ` (9 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: andrey.konovalov @ 2022-06-13 20:14 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Peter Collingbourne, Evgenii Stepanov, Florian Mayer,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Use the kasan_addr_to_slab() helper in print_address_description()
instead of separately invoking PageSlab() and page_slab().

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/report.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 879f949dc395..1dd6fc8a678f 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -291,12 +291,12 @@ static inline bool init_task_stack_addr(const void *addr)
 static void print_address_description(void *addr, u8 tag)
 {
 	struct page *page = kasan_addr_to_page(addr);
+	struct slab *slab = kasan_addr_to_slab(addr);
 
 	dump_stack_lvl(KERN_ERR);
 	pr_err("\n");
 
-	if (page && PageSlab(page)) {
-		struct slab *slab = page_slab(page);
+	if (slab) {
 		struct kmem_cache *cache = slab->slab_cache;
 		void *object = nearest_obj(cache, slab,	addr);
 
-- 
2.25.1



* [PATCH 24/32] kasan: move kasan_addr_to_slab to common.c
  2022-06-13 20:13 [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata andrey.konovalov
                   ` (22 preceding siblings ...)
  2022-06-13 20:14 ` [PATCH 23/32] kasan: use kasan_addr_to_slab in print_address_description andrey.konovalov
@ 2022-06-13 20:14 ` andrey.konovalov
  2022-06-15 13:27   ` kernel test robot
  2022-06-13 20:14 ` [PATCH 25/32] kasan: make kasan_addr_to_page static andrey.konovalov
                   ` (8 subsequent siblings)
  32 siblings, 1 reply; 54+ messages in thread
From: andrey.konovalov @ 2022-06-13 20:14 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Peter Collingbourne, Evgenii Stepanov, Florian Mayer,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Move the definition of kasan_addr_to_slab() to the common KASAN code,
as this function is not only used by the reporting code.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/common.c | 7 +++++++
 mm/kasan/report.c | 7 -------
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 519fd0b3040b..5d5b4cfae503 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -30,6 +30,13 @@
 #include "kasan.h"
 #include "../slab.h"
 
+struct slab *kasan_addr_to_slab(const void *addr)
+{
+	if ((addr >= (void *)PAGE_OFFSET) && (addr < high_memory))
+		return virt_to_slab(addr);
+	return NULL;
+}
+
 depot_stack_handle_t kasan_save_stack(gfp_t flags, bool can_alloc)
 {
 	unsigned long entries[KASAN_STACK_DEPTH];
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 1dd6fc8a678f..ed8234516bab 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -207,13 +207,6 @@ struct page *kasan_addr_to_page(const void *addr)
 	return NULL;
 }
 
-struct slab *kasan_addr_to_slab(const void *addr)
-{
-	if ((addr >= (void *)PAGE_OFFSET) && (addr < high_memory))
-		return virt_to_slab(addr);
-	return NULL;
-}
-
 static void describe_object_addr(struct kmem_cache *cache, void *object,
 				const void *addr)
 {
-- 
2.25.1



* [PATCH 25/32] kasan: make kasan_addr_to_page static
  2022-06-13 20:13 [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata andrey.konovalov
                   ` (23 preceding siblings ...)
  2022-06-13 20:14 ` [PATCH 24/32] kasan: move kasan_addr_to_slab to common.c andrey.konovalov
@ 2022-06-13 20:14 ` andrey.konovalov
  2022-06-13 20:14 ` [PATCH 26/32] kasan: simplify print_report andrey.konovalov
                   ` (7 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: andrey.konovalov @ 2022-06-13 20:14 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Peter Collingbourne, Evgenii Stepanov, Florian Mayer,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

As kasan_addr_to_page() is only used in report.c, rename it to
addr_to_page() and make it static.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/kasan.h  | 1 -
 mm/kasan/report.c | 4 ++--
 2 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index f696d50b09fb..e3f100833154 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -285,7 +285,6 @@ bool kasan_report(unsigned long addr, size_t size,
 		bool is_write, unsigned long ip);
 void kasan_report_invalid_free(void *object, unsigned long ip);
 
-struct page *kasan_addr_to_page(const void *addr);
 struct slab *kasan_addr_to_slab(const void *addr);
 
 #ifdef CONFIG_KASAN_GENERIC
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index ed8234516bab..f3ec6f86b199 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -200,7 +200,7 @@ static void print_track(struct kasan_track *track, const char *prefix)
 		pr_err("(stack is not available)\n");
 }
 
-struct page *kasan_addr_to_page(const void *addr)
+static inline struct page *addr_to_page(const void *addr)
 {
 	if ((addr >= (void *)PAGE_OFFSET) && (addr < high_memory))
 		return virt_to_head_page(addr);
@@ -283,7 +283,7 @@ static inline bool init_task_stack_addr(const void *addr)
 
 static void print_address_description(void *addr, u8 tag)
 {
-	struct page *page = kasan_addr_to_page(addr);
+	struct page *page = addr_to_page(addr);
 	struct slab *slab = kasan_addr_to_slab(addr);
 
 	dump_stack_lvl(KERN_ERR);
-- 
2.25.1



* [PATCH 26/32] kasan: simplify print_report
  2022-06-13 20:13 [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata andrey.konovalov
                   ` (24 preceding siblings ...)
  2022-06-13 20:14 ` [PATCH 25/32] kasan: make kasan_addr_to_page static andrey.konovalov
@ 2022-06-13 20:14 ` andrey.konovalov
  2022-06-13 20:14 ` [PATCH 27/32] kasan: introduce complete_report_info andrey.konovalov
                   ` (6 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: andrey.konovalov @ 2022-06-13 20:14 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Peter Collingbourne, Evgenii Stepanov, Florian Mayer,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

To simplify reading the implementation of print_report(), remove the
tagged_addr variable and rename untagged_addr to addr.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/report.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index f3ec6f86b199..cc35c8c1a367 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -391,17 +391,16 @@ static void print_memory_metadata(const void *addr)
 
 static void print_report(struct kasan_report_info *info)
 {
-	void *tagged_addr = info->access_addr;
-	void *untagged_addr = kasan_reset_tag(tagged_addr);
-	u8 tag = get_tag(tagged_addr);
+	void *addr = kasan_reset_tag(info->access_addr);
+	u8 tag = get_tag(info->access_addr);
 
 	print_error_description(info);
-	if (addr_has_metadata(untagged_addr))
+	if (addr_has_metadata(addr))
 		kasan_print_tags(tag, info->first_bad_addr);
 	pr_err("\n");
 
-	if (addr_has_metadata(untagged_addr)) {
-		print_address_description(untagged_addr, tag);
+	if (addr_has_metadata(addr)) {
+		print_address_description(addr, tag);
 		print_memory_metadata(info->first_bad_addr);
 	} else {
 		dump_stack_lvl(KERN_ERR);
-- 
2.25.1



* [PATCH 27/32] kasan: introduce complete_report_info
  2022-06-13 20:13 [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata andrey.konovalov
                   ` (25 preceding siblings ...)
  2022-06-13 20:14 ` [PATCH 26/32] kasan: simplify print_report andrey.konovalov
@ 2022-06-13 20:14 ` andrey.konovalov
  2022-06-13 20:14 ` [PATCH 28/32] kasan: fill in cache and object in complete_report_info andrey.konovalov
                   ` (5 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: andrey.konovalov @ 2022-06-13 20:14 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Peter Collingbourne, Evgenii Stepanov, Florian Mayer,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Introduce a complete_report_info() function that fills in the
first_bad_addr field of kasan_report_info instead of doing it in
kasan_report_*().

This function will be extended in the next patch.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/kasan.h  |  5 ++++-
 mm/kasan/report.c | 17 +++++++++++++++--
 2 files changed, 19 insertions(+), 3 deletions(-)

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index e3f100833154..0261d1530055 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -147,12 +147,15 @@ static inline bool kasan_requires_meta(void)
 #define META_ROWS_AROUND_ADDR 2
 
 struct kasan_report_info {
+	/* Filled in by kasan_report_*(). */
 	void *access_addr;
-	void *first_bad_addr;
 	size_t access_size;
 	bool is_free;
 	bool is_write;
 	unsigned long ip;
+
+	/* Filled in by the common reporting code. */
+	void *first_bad_addr;
 };
 
 /* Do not change the struct layout: compiler ABI. */
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index cc35c8c1a367..214ba7cb654c 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -407,6 +407,17 @@ static void print_report(struct kasan_report_info *info)
 	}
 }
 
+static void complete_report_info(struct kasan_report_info *info)
+{
+	void *addr = kasan_reset_tag(info->access_addr);
+
+	if (info->is_free)
+		info->first_bad_addr = addr;
+	else
+		info->first_bad_addr = kasan_find_first_bad_addr(
+					info->access_addr, info->access_size);
+}
+
 void kasan_report_invalid_free(void *ptr, unsigned long ip)
 {
 	unsigned long flags;
@@ -423,12 +434,13 @@ void kasan_report_invalid_free(void *ptr, unsigned long ip)
 	start_report(&flags, true);
 
 	info.access_addr = ptr;
-	info.first_bad_addr = kasan_reset_tag(ptr);
 	info.access_size = 0;
 	info.is_write = false;
 	info.is_free = true;
 	info.ip = ip;
 
+	complete_report_info(&info);
+
 	print_report(&info);
 
 	end_report(&flags, ptr);
@@ -456,12 +468,13 @@ bool kasan_report(unsigned long addr, size_t size, bool is_write,
 	start_report(&irq_flags, true);
 
 	info.access_addr = ptr;
-	info.first_bad_addr = kasan_find_first_bad_addr(ptr, size);
 	info.access_size = size;
 	info.is_write = is_write;
 	info.is_free = false;
 	info.ip = ip;
 
+	complete_report_info(&info);
+
 	print_report(&info);
 
 	end_report(&irq_flags, ptr);
-- 
2.25.1



* [PATCH 28/32] kasan: fill in cache and object in complete_report_info
  2022-06-13 20:13 [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata andrey.konovalov
                   ` (26 preceding siblings ...)
  2022-06-13 20:14 ` [PATCH 27/32] kasan: introduce complete_report_info andrey.konovalov
@ 2022-06-13 20:14 ` andrey.konovalov
  2022-06-13 20:14 ` [PATCH 29/32] kasan: rework function arguments in report.c andrey.konovalov
                   ` (4 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: andrey.konovalov @ 2022-06-13 20:14 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Peter Collingbourne, Evgenii Stepanov, Florian Mayer,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Add cache and object fields to kasan_report_info and fill them in via
complete_report_info() instead of fetching them in the middle of the
report printing code.

This allows the reporting code to access the object information before
it starts printing the report. One of the following patches uses this
information to determine the bug type with the tag-based modes.
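
For context, a rough model of what nearest_obj() computes (editor's
sketch; the real helper lives in mm/slab.h and additionally clamps the
result to valid object slots):

	/* Map an address inside a slab to the start of its object slot. */
	static void *nearest_obj_sketch(struct kmem_cache *cache,
					void *slab_base, const void *addr)
	{
		unsigned int idx;

		idx = ((const char *)addr - (const char *)slab_base) / cache->size;
		return (char *)slab_base + idx * cache->size;
	}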

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/kasan.h  |  2 ++
 mm/kasan/report.c | 21 +++++++++++++--------
 2 files changed, 15 insertions(+), 8 deletions(-)

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 0261d1530055..b9bd9f1656bf 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -156,6 +156,8 @@ struct kasan_report_info {
 
 	/* Filled in by the common reporting code. */
 	void *first_bad_addr;
+	struct kmem_cache *cache;
+	void *object;
 };
 
 /* Do not change the struct layout: compiler ABI. */
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 214ba7cb654c..a6b36eb4c33b 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -281,19 +281,16 @@ static inline bool init_task_stack_addr(const void *addr)
 			sizeof(init_thread_union.stack));
 }
 
-static void print_address_description(void *addr, u8 tag)
+static void print_address_description(void *addr, u8 tag,
+				      struct kasan_report_info *info)
 {
 	struct page *page = addr_to_page(addr);
-	struct slab *slab = kasan_addr_to_slab(addr);
 
 	dump_stack_lvl(KERN_ERR);
 	pr_err("\n");
 
-	if (slab) {
-		struct kmem_cache *cache = slab->slab_cache;
-		void *object = nearest_obj(cache, slab,	addr);
-
-		describe_object(cache, object, addr, tag);
+	if (info->cache && info->object) {
+		describe_object(info->cache, info->object, addr, tag);
 		pr_err("\n");
 	}
 
@@ -400,7 +397,7 @@ static void print_report(struct kasan_report_info *info)
 	pr_err("\n");
 
 	if (addr_has_metadata(addr)) {
-		print_address_description(addr, tag);
+		print_address_description(addr, tag, info);
 		print_memory_metadata(info->first_bad_addr);
 	} else {
 		dump_stack_lvl(KERN_ERR);
@@ -410,12 +407,20 @@ static void print_report(struct kasan_report_info *info)
 static void complete_report_info(struct kasan_report_info *info)
 {
 	void *addr = kasan_reset_tag(info->access_addr);
+	struct slab *slab;
 
 	if (info->is_free)
 		info->first_bad_addr = addr;
 	else
 		info->first_bad_addr = kasan_find_first_bad_addr(
 					info->access_addr, info->access_size);
+
+	slab = kasan_addr_to_slab(addr);
+	if (slab) {
+		info->cache = slab->slab_cache;
+		info->object = nearest_obj(info->cache, slab, addr);
+	} else
+		info->cache = info->object = NULL;
 }
 
 void kasan_report_invalid_free(void *ptr, unsigned long ip)
-- 
2.25.1



* [PATCH 29/32] kasan: rework function arguments in report.c
  2022-06-13 20:13 [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata andrey.konovalov
                   ` (27 preceding siblings ...)
  2022-06-13 20:14 ` [PATCH 28/32] kasan: fill in cache and object in complete_report_info andrey.konovalov
@ 2022-06-13 20:14 ` andrey.konovalov
  2022-06-13 20:14 ` [PATCH 30/32] kasan: introduce kasan_complete_mode_report_info andrey.konovalov
                   ` (3 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: andrey.konovalov @ 2022-06-13 20:14 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Peter Collingbourne, Evgenii Stepanov, Florian Mayer,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Pass a pointer to kasan_report_info to describe_object() and
describe_object_stacks(), instead of passing the structure's fields.

The untagged pointer and the tag are still passed as separate arguments
to some of the functions to avoid duplicating the untagging logic.

This is a preparatory change for the next patch.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/report.c | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index a6b36eb4c33b..a2789d4a05dd 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -207,8 +207,8 @@ static inline struct page *addr_to_page(const void *addr)
 	return NULL;
 }
 
-static void describe_object_addr(struct kmem_cache *cache, void *object,
-				const void *addr)
+static void describe_object_addr(const void *addr, struct kmem_cache *cache,
+				 void *object)
 {
 	unsigned long access_addr = (unsigned long)addr;
 	unsigned long object_addr = (unsigned long)object;
@@ -236,33 +236,32 @@ static void describe_object_addr(struct kmem_cache *cache, void *object,
 		(void *)(object_addr + cache->object_size));
 }
 
-static void describe_object_stacks(struct kmem_cache *cache, void *object,
-					const void *addr, u8 tag)
+static void describe_object_stacks(u8 tag, struct kasan_report_info *info)
 {
 	struct kasan_track *alloc_track;
 	struct kasan_track *free_track;
 
-	alloc_track = kasan_get_alloc_track(cache, object);
+	alloc_track = kasan_get_alloc_track(info->cache, info->object);
 	if (alloc_track) {
 		print_track(alloc_track, "Allocated");
 		pr_err("\n");
 	}
 
-	free_track = kasan_get_free_track(cache, object, tag);
+	free_track = kasan_get_free_track(info->cache, info->object, tag);
 	if (free_track) {
 		print_track(free_track, "Freed");
 		pr_err("\n");
 	}
 
-	kasan_print_aux_stacks(cache, object);
+	kasan_print_aux_stacks(info->cache, info->object);
 }
 
-static void describe_object(struct kmem_cache *cache, void *object,
-				const void *addr, u8 tag)
+static void describe_object(const void *addr, u8 tag,
+			    struct kasan_report_info *info)
 {
 	if (kasan_stack_collection_enabled())
-		describe_object_stacks(cache, object, addr, tag);
-	describe_object_addr(cache, object, addr);
+		describe_object_stacks(tag, info);
+	describe_object_addr(addr, info->cache, info->object);
 }
 
 static inline bool kernel_or_module_addr(const void *addr)
@@ -290,7 +289,7 @@ static void print_address_description(void *addr, u8 tag,
 	pr_err("\n");
 
 	if (info->cache && info->object) {
-		describe_object(info->cache, info->object, addr, tag);
+		describe_object(addr, tag, info);
 		pr_err("\n");
 	}
 
-- 
2.25.1



* [PATCH 30/32] kasan: introduce kasan_complete_mode_report_info
  2022-06-13 20:13 [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata andrey.konovalov
                   ` (28 preceding siblings ...)
  2022-06-13 20:14 ` [PATCH 29/32] kasan: rework function arguments in report.c andrey.konovalov
@ 2022-06-13 20:14 ` andrey.konovalov
  2022-06-13 20:14 ` [PATCH 31/32] kasan: implement stack ring for tag-based modes andrey.konovalov
                   ` (2 subsequent siblings)
  32 siblings, 0 replies; 54+ messages in thread
From: andrey.konovalov @ 2022-06-13 20:14 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Peter Collingbourne, Evgenii Stepanov, Florian Mayer,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Add bug_type and alloc/free_track fields to kasan_report_info and add a
kasan_complete_mode_report_info() function that fills in these fields.
This function is implemented differently for each KASAN mode.

Change the reporting code to use the filled-in fields instead of
invoking kasan_get_bug_type() and kasan_get_alloc/free_track().

For the Generic mode, kasan_complete_mode_report_info() invokes these
functions instead. For the tag-based modes, only the bug_type field is
filled in; alloc/free_track are handled in the next patch.

Using a single function that fills in these fields is required for the
tag-based modes, as the values for all three fields are determined in a
single procedure implemented in the following patch.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/kasan.h          | 33 +++++++++++++++++----------------
 mm/kasan/report.c         | 29 ++++++++++++++---------------
 mm/kasan/report_generic.c | 32 +++++++++++++++++---------------
 mm/kasan/report_tags.c    | 13 +++----------
 4 files changed, 51 insertions(+), 56 deletions(-)

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index b9bd9f1656bf..c51cea31ced0 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -146,6 +146,13 @@ static inline bool kasan_requires_meta(void)
 #define META_MEM_BYTES_PER_ROW (META_BYTES_PER_ROW * KASAN_GRANULE_SIZE)
 #define META_ROWS_AROUND_ADDR 2
 
+#define KASAN_STACK_DEPTH 64
+
+struct kasan_track {
+	u32 pid;
+	depot_stack_handle_t stack;
+};
+
 struct kasan_report_info {
 	/* Filled in by kasan_report_*(). */
 	void *access_addr;
@@ -158,6 +165,11 @@ struct kasan_report_info {
 	void *first_bad_addr;
 	struct kmem_cache *cache;
 	void *object;
+
+	/* Filled in by the mode-specific reporting code. */
+	const char *bug_type;
+	struct kasan_track alloc_track;
+	struct kasan_track free_track;
 };
 
 /* Do not change the struct layout: compiler ABI. */
@@ -183,14 +195,7 @@ struct kasan_global {
 #endif
 };
 
-/* Structures for keeping alloc and free tracks. */
-
-#define KASAN_STACK_DEPTH 64
-
-struct kasan_track {
-	u32 pid;
-	depot_stack_handle_t stack;
-};
+/* Structures for keeping alloc and free meta. */
 
 #ifdef CONFIG_KASAN_GENERIC
 
@@ -264,16 +269,16 @@ static inline bool addr_has_metadata(const void *addr)
 
 #endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
 
+void *kasan_find_first_bad_addr(void *addr, size_t size);
+void kasan_complete_mode_report_info(struct kasan_report_info *info);
+void kasan_metadata_fetch_row(char *buffer, void *row);
+
 #if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
 void kasan_print_tags(u8 addr_tag, const void *addr);
 #else
 static inline void kasan_print_tags(u8 addr_tag, const void *addr) { }
 #endif
 
-void *kasan_find_first_bad_addr(void *addr, size_t size);
-const char *kasan_get_bug_type(struct kasan_report_info *info);
-void kasan_metadata_fetch_row(char *buffer, void *row);
-
 #if defined(CONFIG_KASAN_STACK)
 void kasan_print_address_stack_frame(const void *addr);
 #else
@@ -308,10 +313,6 @@ depot_stack_handle_t kasan_save_stack(gfp_t flags, bool can_alloc);
 void kasan_set_track(struct kasan_track *track, gfp_t flags);
 void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags);
 void kasan_save_free_info(struct kmem_cache *cache, void *object);
-struct kasan_track *kasan_get_alloc_track(struct kmem_cache *cache,
-						void *object);
-struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
-						void *object, u8 tag);
 
 #if defined(CONFIG_KASAN_GENERIC) && \
 	(defined(CONFIG_SLAB) || defined(CONFIG_SLUB))
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index a2789d4a05dd..206b7fe64e6b 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -176,7 +176,7 @@ static void end_report(unsigned long *flags, void *addr)
 static void print_error_description(struct kasan_report_info *info)
 {
 	const char *bug_type = info->is_free ?
-		"double-free or invalid-free" : kasan_get_bug_type(info);
+		"double-free or invalid-free" : info->bug_type;
 
 	pr_err("BUG: KASAN: %s in %pS\n", bug_type, (void *)info->ip);
 	if (info->is_free)
@@ -236,31 +236,25 @@ static void describe_object_addr(const void *addr, struct kmem_cache *cache,
 		(void *)(object_addr + cache->object_size));
 }
 
-static void describe_object_stacks(u8 tag, struct kasan_report_info *info)
+static void describe_object_stacks(struct kasan_report_info *info)
 {
-	struct kasan_track *alloc_track;
-	struct kasan_track *free_track;
-
-	alloc_track = kasan_get_alloc_track(info->cache, info->object);
-	if (alloc_track) {
-		print_track(alloc_track, "Allocated");
+	if (info->alloc_track.stack) {
+		print_track(&info->alloc_track, "Allocated");
 		pr_err("\n");
 	}
 
-	free_track = kasan_get_free_track(info->cache, info->object, tag);
-	if (free_track) {
-		print_track(free_track, "Freed");
+	if (info->free_track.stack) {
+		print_track(&info->free_track, "Freed");
 		pr_err("\n");
 	}
 
 	kasan_print_aux_stacks(info->cache, info->object);
 }
 
-static void describe_object(const void *addr, u8 tag,
-			    struct kasan_report_info *info)
+static void describe_object(const void *addr, struct kasan_report_info *info)
 {
 	if (kasan_stack_collection_enabled())
-		describe_object_stacks(tag, info);
+		describe_object_stacks(info);
 	describe_object_addr(addr, info->cache, info->object);
 }
 
@@ -289,7 +283,7 @@ static void print_address_description(void *addr, u8 tag,
 	pr_err("\n");
 
 	if (info->cache && info->object) {
-		describe_object(addr, tag, info);
+		describe_object(addr, info);
 		pr_err("\n");
 	}
 
@@ -420,6 +414,9 @@ static void complete_report_info(struct kasan_report_info *info)
 		info->object = nearest_obj(info->cache, slab, addr);
 	} else
 		info->cache = info->object = NULL;
+
+	/* Fill in mode-specific report info fields. */
+	kasan_complete_mode_report_info(info);
 }
 
 void kasan_report_invalid_free(void *ptr, unsigned long ip)
@@ -437,6 +434,7 @@ void kasan_report_invalid_free(void *ptr, unsigned long ip)
 
 	start_report(&flags, true);
 
+	memset(&info, 0, sizeof(info));
 	info.access_addr = ptr;
 	info.access_size = 0;
 	info.is_write = false;
@@ -471,6 +469,7 @@ bool kasan_report(unsigned long addr, size_t size, bool is_write,
 
 	start_report(&irq_flags, true);
 
+	memset(&info, 0, sizeof(info));
 	info.access_addr = ptr;
 	info.access_size = size;
 	info.is_write = is_write;
diff --git a/mm/kasan/report_generic.c b/mm/kasan/report_generic.c
index 74d21786ef09..087c1d8c8145 100644
--- a/mm/kasan/report_generic.c
+++ b/mm/kasan/report_generic.c
@@ -109,7 +109,7 @@ static const char *get_wild_bug_type(struct kasan_report_info *info)
 	return bug_type;
 }
 
-const char *kasan_get_bug_type(struct kasan_report_info *info)
+static const char *get_bug_type(struct kasan_report_info *info)
 {
 	/*
 	 * If access_size is a negative number, then it has reason to be
@@ -127,25 +127,27 @@ const char *kasan_get_bug_type(struct kasan_report_info *info)
 	return get_wild_bug_type(info);
 }
 
-struct kasan_track *kasan_get_alloc_track(struct kmem_cache *cache,
-						void *object)
+void kasan_complete_mode_report_info(struct kasan_report_info *info)
 {
 	struct kasan_alloc_meta *alloc_meta;
+	struct kasan_free_meta *free_meta;
 
-	alloc_meta = kasan_get_alloc_meta(cache, object);
-	if (!alloc_meta)
-		return NULL;
+	info->bug_type = get_bug_type(info);
 
-	return &alloc_meta->alloc_track;
-}
+	if (!info->cache || !info->object)
+		return;
 
-struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
-						void *object, u8 tag)
-{
-	if (*(u8 *)kasan_mem_to_shadow(object) != KASAN_SLAB_FREETRACK)
-		return NULL;
-	/* Free meta must be present with KASAN_SLAB_FREETRACK. */
-	return &kasan_get_free_meta(cache, object)->free_track;
+	alloc_meta = kasan_get_alloc_meta(info->cache, info->object);
+	if (alloc_meta)
+		memcpy(&info->alloc_track, &alloc_meta->alloc_track,
+		       sizeof(info->alloc_track));
+
+	if (*(u8 *)kasan_mem_to_shadow(info->object) == KASAN_SLAB_FREETRACK) {
+		/* Free meta must be present with KASAN_SLAB_FREETRACK. */
+		free_meta = kasan_get_free_meta(info->cache, info->object);
+		memcpy(&info->free_track, &free_meta->free_track,
+		       sizeof(info->free_track));
+	}
 }
 
 void kasan_metadata_fetch_row(char *buffer, void *row)
diff --git a/mm/kasan/report_tags.c b/mm/kasan/report_tags.c
index 79b6497d8a81..5cbac2cdb177 100644
--- a/mm/kasan/report_tags.c
+++ b/mm/kasan/report_tags.c
@@ -6,7 +6,7 @@
 
 #include "kasan.h"
 
-const char *kasan_get_bug_type(struct kasan_report_info *info)
+static const char *get_bug_type(struct kasan_report_info *info)
 {
 	/*
 	 * If access_size is a negative number, then it has reason to be
@@ -22,14 +22,7 @@ const char *kasan_get_bug_type(struct kasan_report_info *info)
 	return "invalid-access";
 }
 
-struct kasan_track *kasan_get_alloc_track(struct kmem_cache *cache,
-						void *object)
+void kasan_complete_mode_report_info(struct kasan_report_info *info)
 {
-	return NULL;
-}
-
-struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
-						void *object, u8 tag)
-{
-	return NULL;
+	info->bug_type = get_bug_type(info);
 }
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH 31/32] kasan: implement stack ring for tag-based modes
  2022-06-13 20:13 [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata andrey.konovalov
                   ` (29 preceding siblings ...)
  2022-06-13 20:14 ` [PATCH 30/32] kasan: introduce kasan_complete_mode_report_info andrey.konovalov
@ 2022-06-13 20:14 ` andrey.konovalov
  2022-06-20 13:35   ` Marco Elver
  2022-06-13 20:14 ` [PATCH 32/32] kasan: better identify bug types " andrey.konovalov
  2022-06-17  9:32 ` [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata Marco Elver
  32 siblings, 1 reply; 54+ messages in thread
From: andrey.konovalov @ 2022-06-13 20:14 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Peter Collingbourne, Evgenii Stepanov, Florian Mayer,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Implement storing stack depot handles for alloc/free stack traces for
slab objects for the tag-based KASAN modes in a ring buffer.

This ring buffer is referred to as the stack ring.

On each alloc/free of a slab object, the tagged address of the object and
the current stack trace are recorded in the stack ring.

On each bug report, if the accessed address belongs to a slab object, the
stack ring is scanned for matching entries. The newest entries are used to
print the alloc/free stack traces in the report: one entry for alloc and
one for free.

The ring buffer is lock-free.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>

---

The number of entries in the stack ring is fixed in this version of the
patch. We could either implement it as a config option or a command-line
argument. I lean towards the latter option and will implement it in v2
unless there are objections.
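
A minimal sketch of how the command-line variant could look (the parameter
name "kasan.stack_ring_size", the default, and the power-of-two rounding are
illustrative assumptions, not part of this patch):

	static unsigned long stack_ring_entries = 32 << 10;

	static int __init early_kasan_stack_ring_size(char *arg)
	{
		unsigned long entries;

		if (!arg || kstrtoul(arg, 0, &entries))
			return -EINVAL;

		/* Round to a power of two so the index can be masked. */
		stack_ring_entries = roundup_pow_of_two(entries);
		return 0;
	}
	early_param("kasan.stack_ring_size", early_kasan_stack_ring_size);

The ring would then be indexed with 'pos & (stack_ring_entries - 1)' instead
of the '% KASAN_STACK_RING_ENTRIES' used below.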
---
 mm/kasan/kasan.h       | 20 ++++++++++++++
 mm/kasan/report_tags.c | 61 ++++++++++++++++++++++++++++++++++++++++++
 mm/kasan/tags.c        | 30 +++++++++++++++++++++
 3 files changed, 111 insertions(+)

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index c51cea31ced0..da9a3c56ef4b 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -2,6 +2,7 @@
 #ifndef __MM_KASAN_KASAN_H
 #define __MM_KASAN_KASAN_H
 
+#include <linux/atomic.h>
 #include <linux/kasan.h>
 #include <linux/kasan-tags.h>
 #include <linux/kfence.h>
@@ -227,6 +228,25 @@ struct kasan_free_meta {
 
 #endif /* CONFIG_KASAN_GENERIC */
 
+#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
+
+struct kasan_stack_ring_entry {
+	atomic64_t ptr;		/* void * */
+	atomic64_t size;	/* size_t */
+	atomic_t pid;		/* u32 */
+	atomic_t stack;		/* depot_stack_handle_t */
+	atomic_t is_free;	/* bool */
+};
+
+#define KASAN_STACK_RING_ENTRIES (32 << 10)
+
+struct kasan_stack_ring {
+	atomic64_t pos;
+	struct kasan_stack_ring_entry entries[KASAN_STACK_RING_ENTRIES];
+};
+
+#endif /* CONFIG_KASAN_SW_TAGS || CONFIG_KASAN_HW_TAGS */
+
 #if IS_ENABLED(CONFIG_KASAN_KUNIT_TEST)
 /* Used in KUnit-compatible KASAN tests. */
 struct kunit_kasan_status {
diff --git a/mm/kasan/report_tags.c b/mm/kasan/report_tags.c
index 5cbac2cdb177..21911d1883d3 100644
--- a/mm/kasan/report_tags.c
+++ b/mm/kasan/report_tags.c
@@ -4,8 +4,12 @@
  * Copyright (c) 2020 Google, Inc.
  */
 
+#include <linux/atomic.h>
+
 #include "kasan.h"
 
+extern struct kasan_stack_ring stack_ring;
+
 static const char *get_bug_type(struct kasan_report_info *info)
 {
 	/*
@@ -24,5 +28,62 @@ static const char *get_bug_type(struct kasan_report_info *info)
 
 void kasan_complete_mode_report_info(struct kasan_report_info *info)
 {
+	u64 pos;
+	struct kasan_stack_ring_entry *entry;
+	void *object;
+	u32 pid;
+	depot_stack_handle_t stack;
+	bool is_free;
+	bool alloc_found = false, free_found = false;
+
 	info->bug_type = get_bug_type(info);
+
+	if (!info->cache || !info->object)
+		return;
+
+	pos = atomic64_read(&stack_ring.pos);
+
+	for (u64 i = pos - 1; i != pos - 1 - KASAN_STACK_RING_ENTRIES; i--) {
+		if (alloc_found && free_found)
+			break;
+
+		entry = &stack_ring.entries[i % KASAN_STACK_RING_ENTRIES];
+
+		/* Paired with atomic64_set_release() in save_stack_info(). */
+		object = (void *)atomic64_read_acquire(&entry->ptr);
+
+		if (kasan_reset_tag(object) != info->object ||
+		    get_tag(object) != get_tag(info->access_addr))
+			continue;
+
+		pid = atomic_read(&entry->pid);
+		stack = atomic_read(&entry->stack);
+		is_free = atomic_read(&entry->is_free);
+
+		/* Try detecting if the entry was changed while being read. */
+		smp_mb();
+		if (object != (void *)atomic64_read(&entry->ptr))
+			continue;
+
+		if (is_free) {
+			/*
+			 * Second free of the same object.
+			 * Give up on trying to find the alloc entry.
+			 */
+			if (free_found)
+				break;
+
+			info->free_track.pid = pid;
+			info->free_track.stack = stack;
+			free_found = true;
+		} else {
+			/* Second alloc of the same object. Give up. */
+			if (alloc_found)
+				break;
+
+			info->alloc_track.pid = pid;
+			info->alloc_track.stack = stack;
+			alloc_found = true;
+		}
+	}
 }
diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
index 39a0481e5228..286011307695 100644
--- a/mm/kasan/tags.c
+++ b/mm/kasan/tags.c
@@ -6,6 +6,7 @@
  * Copyright (c) 2020 Google, Inc.
  */
 
+#include <linux/atomic.h>
 #include <linux/init.h>
 #include <linux/kasan.h>
 #include <linux/kernel.h>
@@ -16,11 +17,40 @@
 #include <linux/types.h>
 
 #include "kasan.h"
+#include "../slab.h"
+
+struct kasan_stack_ring stack_ring;
+
+void save_stack_info(struct kmem_cache *cache, void *object,
+			gfp_t flags, bool is_free)
+{
+	u64 pos;
+	struct kasan_stack_ring_entry *entry;
+	depot_stack_handle_t stack;
+
+	stack = kasan_save_stack(flags, true);
+
+	pos = atomic64_fetch_add(1, &stack_ring.pos);
+	entry = &stack_ring.entries[pos % KASAN_STACK_RING_ENTRIES];
+
+	atomic64_set(&entry->size, cache->object_size);
+	atomic_set(&entry->pid, current->pid);
+	atomic_set(&entry->stack, stack);
+	atomic_set(&entry->is_free, is_free);
+
+	/*
+	 * Paired with atomic64_read_acquire() in
+	 * kasan_complete_mode_report_info().
+	 */
+	atomic64_set_release(&entry->ptr, (s64)object);
+}
 
 void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
 {
+	save_stack_info(cache, object, flags, false);
 }
 
 void kasan_save_free_info(struct kmem_cache *cache, void *object)
 {
+	save_stack_info(cache, object, GFP_NOWAIT, true);
 }
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH 32/32] kasan: better identify bug types for tag-based modes
  2022-06-13 20:13 [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata andrey.konovalov
                   ` (30 preceding siblings ...)
  2022-06-13 20:14 ` [PATCH 31/32] kasan: implement stack ring for tag-based modes andrey.konovalov
@ 2022-06-13 20:14 ` andrey.konovalov
  2022-06-17  9:32 ` [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata Marco Elver
  32 siblings, 0 replies; 54+ messages in thread
From: andrey.konovalov @ 2022-06-13 20:14 UTC (permalink / raw)
  To: Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Peter Collingbourne, Evgenii Stepanov, Florian Mayer,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Identify the bug type for the tag-based modes based on the stack trace
entries found in the stack ring.

If a free entry is found first (meaning that it was added last), mark the
bug as use-after-free. If an alloc entry is found first, mark the bug as
slab-out-of-bounds. Otherwise, assign the common bug type.

This change restores the functionality of the previously dropped
CONFIG_KASAN_TAGS_IDENTIFY.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/report_tags.c | 26 ++++++++++++++++++++++----
 1 file changed, 22 insertions(+), 4 deletions(-)

diff --git a/mm/kasan/report_tags.c b/mm/kasan/report_tags.c
index 21911d1883d3..dc1f8fc0327f 100644
--- a/mm/kasan/report_tags.c
+++ b/mm/kasan/report_tags.c
@@ -10,7 +10,7 @@
 
 extern struct kasan_stack_ring stack_ring;
 
-static const char *get_bug_type(struct kasan_report_info *info)
+static const char *get_common_bug_type(struct kasan_report_info *info)
 {
 	/*
 	 * If access_size is a negative number, then it has reason to be
@@ -36,10 +36,10 @@ void kasan_complete_mode_report_info(struct kasan_report_info *info)
 	bool is_free;
 	bool alloc_found = false, free_found = false;
 
-	info->bug_type = get_bug_type(info);
-
-	if (!info->cache || !info->object)
+	if (!info->cache || !info->object) {
+		info->bug_type = get_common_bug_type(info);
 		return;
+	}
 
 	pos = atomic64_read(&stack_ring.pos);
 
@@ -76,6 +76,13 @@ void kasan_complete_mode_report_info(struct kasan_report_info *info)
 			info->free_track.pid = pid;
 			info->free_track.stack = stack;
 			free_found = true;
+
+			/*
+			 * If a free entry is found first, the bug is likely
+			 * a use-after-free.
+			 */
+			if (!info->bug_type)
+				info->bug_type = "use-after-free";
 		} else {
 			/* Second alloc of the same object. Give up. */
 			if (alloc_found)
@@ -84,6 +91,17 @@ void kasan_complete_mode_report_info(struct kasan_report_info *info)
 			info->alloc_track.pid = pid;
 			info->alloc_track.stack = stack;
 			alloc_found = true;
+
+			/*
+			 * If an alloc entry is found first, the bug is likely
+			 * an out-of-bounds.
+			 */
+			if (!info->bug_type)
+				info->bug_type = "slab-out-of-bounds";
 		}
 	}
+
+	/* Assign the common bug type if no entries were found. */
+	if (!info->bug_type)
+		info->bug_type = get_common_bug_type(info);
 }
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* Re: [PATCH 24/32] kasan: move kasan_addr_to_slab to common.c
  2022-06-13 20:14 ` [PATCH 24/32] kasan: move kasan_addr_to_slab to common.c andrey.konovalov
@ 2022-06-15 13:27   ` kernel test robot
  2022-07-18 22:41       ` Andrey Konovalov
  0 siblings, 1 reply; 54+ messages in thread
From: kernel test robot @ 2022-06-15 13:27 UTC (permalink / raw)
  To: andrey.konovalov, Marco Elver, Alexander Potapenko
  Cc: kbuild-all, Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin,
	kasan-dev, Peter Collingbourne, Evgenii Stepanov, Florian Mayer,
	Andrew Morton, Linux Memory Management List, linux-kernel

Hi,

I love your patch! Perhaps something to improve:

[auto build test WARNING on akpm-mm/mm-everything]
[also build test WARNING on linus/master v5.19-rc2 next-20220615]
[cannot apply to vbabka-slab/for-next]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/intel-lab-lkp/linux/commits/andrey-konovalov-linux-dev/kasan-switch-tag-based-modes-to-stack-ring-from-per-object-metadata/20220614-042239
base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
config: s390-allyesconfig (https://download.01.org/0day-ci/archive/20220615/202206152134.sadCRvGk-lkp@intel.com/config)
compiler: s390-linux-gcc (GCC) 11.3.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/b0b10a57b2d9a5e5ae5d7ca62046b9774df1a88f
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review andrey-konovalov-linux-dev/kasan-switch-tag-based-modes-to-stack-ring-from-per-object-metadata/20220614-042239
        git checkout b0b10a57b2d9a5e5ae5d7ca62046b9774df1a88f
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.3.0 make.cross W=1 O=build_dir ARCH=s390 SHELL=/bin/bash mm/kasan/

If you fix the issue, kindly add following tag where applicable
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

   mm/kasan/common.c: In function 'kasan_addr_to_slab':
>> mm/kasan/common.c:35:19: warning: ordered comparison of pointer with null pointer [-Wextra]
      35 |         if ((addr >= (void *)PAGE_OFFSET) && (addr < high_memory))
         |                   ^~
   mm/kasan/common.c: In function '____kasan_slab_free':
   mm/kasan/common.c:202:12: warning: variable 'tag' set but not used [-Wunused-but-set-variable]
     202 |         u8 tag;
         |            ^~~


vim +35 mm/kasan/common.c

    32	
    33	struct slab *kasan_addr_to_slab(const void *addr)
    34	{
  > 35		if ((addr >= (void *)PAGE_OFFSET) && (addr < high_memory))
    36			return virt_to_slab(addr);
    37		return NULL;
    38	}
    39	
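
A sketch of one way the warning could be avoided (an illustrative assumption,
not necessarily the fix that was eventually applied): do the ordered
comparison on integers, since PAGE_OFFSET can be 0 on some configurations and
an ordered comparison against a null pointer is what -Wextra complains about:

	struct slab *kasan_addr_to_slab(const void *addr)
	{
		/* Compare as integers to avoid ordered NULL comparison. */
		if ((unsigned long)addr >= PAGE_OFFSET &&
		    (unsigned long)addr < (unsigned long)high_memory)
			return virt_to_slab(addr);
		return NULL;
	}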

-- 
0-DAY CI Kernel Test Service
https://01.org/lkp

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata
  2022-06-13 20:13 [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata andrey.konovalov
                   ` (31 preceding siblings ...)
  2022-06-13 20:14 ` [PATCH 32/32] kasan: better identify bug types " andrey.konovalov
@ 2022-06-17  9:32 ` Marco Elver
  2022-07-18 22:41   ` Andrey Konovalov
  32 siblings, 1 reply; 54+ messages in thread
From: Marco Elver @ 2022-06-17  9:32 UTC (permalink / raw)
  To: andrey.konovalov
  Cc: Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
	Andrey Ryabinin, kasan-dev, Peter Collingbourne,
	Evgenii Stepanov, Florian Mayer, Andrew Morton, linux-mm,
	linux-kernel, Andrey Konovalov

On Mon, Jun 13, 2022 at 10:13PM +0200, andrey.konovalov@linux.dev wrote:
> From: Andrey Konovalov <andreyknvl@google.com>
> 
> This series makes the tag-based KASAN modes use a ring buffer for storing
> stack depot handles for alloc/free stack traces for slab objects instead
> of per-object metadata. This ring buffer is referred to as the stack ring.
> 
> On each alloc/free of a slab object, the tagged address of the object and
> the current stack trace are recorded in the stack ring.
> 
> On each bug report, if the accessed address belongs to a slab object, the
> stack ring is scanned for matching entries. The newest entries are used to
> print the alloc/free stack traces in the report: one entry for alloc and
> one for free.
> 
> The ring buffer is lock-free.
> 
> The advantages of this approach over storing stack trace handles in
> per-object metadata with the tag-based KASAN modes:
> 
> - Allows to find relevant stack traces for use-after-free bugs without
>   using quarantine for freed memory. (Currently, if the object was
>   reallocated multiple times, the report contains the latest alloc/free
>   stack traces, not necessarily the ones relevant to the buggy allocation.)
> - Allows to better identify and mark use-after-free bugs, effectively
>   making the CONFIG_KASAN_TAGS_IDENTIFY functionality always-on.
> - Has fixed memory overhead.
> 
> The disadvantage:
> 
> - If the affected object was allocated/freed long before the bug happened
>   and the stack trace events were purged from the stack ring, the report
>   will have no stack traces.

Do you have statistics on how likely this is? Maybe through
identifying what the average lifetime of an entry in the stack ring is?

How bad is this for very long lived objects (e.g. pagecache)?

> Discussion
> ==========
> 
> The current implementation of the stack ring uses a single ring buffer for
> the whole kernel. This might lead to contention due to atomic accesses to
> the ring buffer index on multicore systems.
> 
> It is unclear to me whether the performance impact from this contention
> is significant compared to the slowdown introduced by collecting stack
> traces.

I agree, but once stack trace collection becomes faster (per your future
plans below), this might need to be revisited.

> While these patches are being reviewed, I will do some tests on the arm64
> hardware that I have. However, I do not have a large multicore arm64
> system to do proper measurements.
> 
> A considered alternative is to keep a separate ring buffer for each CPU
> and then iterate over all of them when printing a bug report. This approach
> requires somehow figuring out which of the stack rings has the freshest
> stack traces for an object if multiple stack rings have them.
> 
> Further plans
> =============
> 
> This series is a part of an effort to make KASAN stack trace collection
> suitable for production. This requires stack trace collection to be fast
> and memory-bounded.
> 
> The planned steps are:
> 
> 1. Speed up stack trace collection (potentially, by using SCS;
>    patches on-hold until steps #2 and #3 are completed).
> 2. Keep stack trace handles in the stack ring (this series).
> 3. Add a memory-bounded mode to stack depot or provide an alternative
>    memory-bounded stack storage.
> 4. Potentially, implement stack trace collection sampling to minimize
>    the performance impact.
> 
> Thanks!
> 
> Andrey Konovalov (32):
>   kasan: check KASAN_NO_FREE_META in __kasan_metadata_size
>   kasan: rename kasan_set_*_info to kasan_save_*_info
>   kasan: move is_kmalloc check out of save_alloc_info
>   kasan: split save_alloc_info implementations
>   kasan: drop CONFIG_KASAN_TAGS_IDENTIFY
>   kasan: introduce kasan_print_aux_stacks
>   kasan: introduce kasan_get_alloc_track
>   kasan: introduce kasan_init_object_meta
>   kasan: clear metadata functions for tag-based modes
>   kasan: move kasan_get_*_meta to generic.c
>   kasan: introduce kasan_requires_meta
>   kasan: introduce kasan_init_cache_meta
>   kasan: drop CONFIG_KASAN_GENERIC check from kasan_init_cache_meta
>   kasan: only define kasan_metadata_size for Generic mode
>   kasan: only define kasan_never_merge for Generic mode
>   kasan: only define metadata offsets for Generic mode
>   kasan: only define metadata structs for Generic mode
>   kasan: only define kasan_cache_create for Generic mode
>   kasan: pass tagged pointers to kasan_save_alloc/free_info
>   kasan: move kasan_get_alloc/free_track definitions
>   kasan: simplify invalid-free reporting
>   kasan: cosmetic changes in report.c
>   kasan: use kasan_addr_to_slab in print_address_description
>   kasan: move kasan_addr_to_slab to common.c
>   kasan: make kasan_addr_to_page static
>   kasan: simplify print_report
>   kasan: introduce complete_report_info
>   kasan: fill in cache and object in complete_report_info
>   kasan: rework function arguments in report.c
>   kasan: introduce kasan_complete_mode_report_info
>   kasan: implement stack ring for tag-based modes
>   kasan: better identify bug types for tag-based modes

Let me go and review the patches now.

Thanks,
-- Marco

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH 06/32] kasan: introduce kasan_print_aux_stacks
  2022-06-13 20:13 ` [PATCH 06/32] kasan: introduce kasan_print_aux_stacks andrey.konovalov
@ 2022-06-17 11:34   ` Marco Elver
  2022-07-18 22:41     ` Andrey Konovalov
  0 siblings, 1 reply; 54+ messages in thread
From: Marco Elver @ 2022-06-17 11:34 UTC (permalink / raw)
  To: andrey.konovalov
  Cc: Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
	Andrey Ryabinin, kasan-dev, Peter Collingbourne,
	Evgenii Stepanov, Florian Mayer, Andrew Morton, linux-mm,
	linux-kernel, Andrey Konovalov

On Mon, 13 Jun 2022 at 22:16, <andrey.konovalov@linux.dev> wrote:
>
> From: Andrey Konovalov <andreyknvl@google.com>
>
> Add a kasan_print_aux_stacks() helper that prints the auxiliary stack
> traces for the Generic mode.
>
> This change hides references to alloc_meta from the common reporting code.
> This is desired as only the Generic mode will be using per-object metadata
> after this series.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
> ---
>  mm/kasan/kasan.h          |  6 ++++++
>  mm/kasan/report.c         | 15 +--------------
>  mm/kasan/report_generic.c | 20 ++++++++++++++++++++
>  3 files changed, 27 insertions(+), 14 deletions(-)
>
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index aa6b43936f8d..bcea5ed15631 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -265,6 +265,12 @@ void kasan_print_address_stack_frame(const void *addr);
>  static inline void kasan_print_address_stack_frame(const void *addr) { }
>  #endif
>
> +#ifdef CONFIG_KASAN_GENERIC
> +void kasan_print_aux_stacks(struct kmem_cache *cache, const void *object);
> +#else
> +static inline void kasan_print_aux_stacks(struct kmem_cache *cache, const void *object) { }
> +#endif

Why not put this into one of the existing "#ifdef
CONFIG_KASAN_GENERIC" blocks? There are several; probably the one 10
lines down might be ok?

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH 19/32] kasan: pass tagged pointers to kasan_save_alloc/free_info
  2022-06-13 20:14 ` [PATCH 19/32] kasan: pass tagged pointers to kasan_save_alloc/free_info andrey.konovalov
@ 2022-06-20  9:54   ` Marco Elver
  2022-07-18 22:41     ` Andrey Konovalov
  0 siblings, 1 reply; 54+ messages in thread
From: Marco Elver @ 2022-06-20  9:54 UTC (permalink / raw)
  To: andrey.konovalov
  Cc: Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
	Andrey Ryabinin, kasan-dev, Peter Collingbourne,
	Evgenii Stepanov, Florian Mayer, Andrew Morton, linux-mm,
	linux-kernel, Andrey Konovalov

On Mon, Jun 13, 2022 at 10:14PM +0200, andrey.konovalov@linux.dev wrote:
> From: Andrey Konovalov <andreyknvl@google.com>
> 
> Pass tagged pointers to kasan_save_alloc/free_info().
> 
> This is a preparatory patch to simplify other changes in the series.
> 
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
> ---
>  mm/kasan/common.c  | 4 ++--
>  mm/kasan/generic.c | 3 +--
>  mm/kasan/kasan.h   | 2 +-
>  mm/kasan/tags.c    | 3 +--
>  4 files changed, 5 insertions(+), 7 deletions(-)
> 
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index f937b6c9e86a..519fd0b3040b 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -227,7 +227,7 @@ static inline bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
>  		return false;
>  
>  	if (kasan_stack_collection_enabled())
> -		kasan_save_free_info(cache, object, tag);
> +		kasan_save_free_info(cache, tagged_object);
>  

Variable 'tag' becomes unused in this function after this patch.

>  	return kasan_quarantine_put(cache, object);
>  }
> @@ -316,7 +316,7 @@ void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
>  
>  	/* Save alloc info (if possible) for non-kmalloc() allocations. */
>  	if (kasan_stack_collection_enabled() && !cache->kasan_info.is_kmalloc)
> -		kasan_save_alloc_info(cache, (void *)object, flags);
> +		kasan_save_alloc_info(cache, tagged_object, flags);
>  
>  	return tagged_object;
>  }
> diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
> index f6bef347de87..aff39af3c532 100644
> --- a/mm/kasan/generic.c
> +++ b/mm/kasan/generic.c
> @@ -500,8 +500,7 @@ void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
>  		kasan_set_track(&alloc_meta->alloc_track, flags);
>  }
>  
> -void kasan_save_free_info(struct kmem_cache *cache,
> -				void *object, u8 tag)
> +void kasan_save_free_info(struct kmem_cache *cache, void *object)
>  {
>  	struct kasan_free_meta *free_meta;
>  
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 30ec9ebf52c3..e8329935fbfb 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -308,7 +308,7 @@ static inline void kasan_init_object_meta(struct kmem_cache *cache, const void *
>  depot_stack_handle_t kasan_save_stack(gfp_t flags, bool can_alloc);
>  void kasan_set_track(struct kasan_track *track, gfp_t flags);
>  void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags);
> -void kasan_save_free_info(struct kmem_cache *cache, void *object, u8 tag);
> +void kasan_save_free_info(struct kmem_cache *cache, void *object);
>  struct kasan_track *kasan_get_alloc_track(struct kmem_cache *cache,
>  						void *object);
>  struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
> diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
> index 4f24669085e9..fd11d10a4ffc 100644
> --- a/mm/kasan/tags.c
> +++ b/mm/kasan/tags.c
> @@ -21,8 +21,7 @@ void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
>  {
>  }
>  
> -void kasan_save_free_info(struct kmem_cache *cache,
> -				void *object, u8 tag)
> +void kasan_save_free_info(struct kmem_cache *cache, void *object)
>  {
>  }
>  
> -- 
> 2.25.1
> 

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH 31/32] kasan: implement stack ring for tag-based modes
  2022-06-13 20:14 ` [PATCH 31/32] kasan: implement stack ring for tag-based modes andrey.konovalov
@ 2022-06-20 13:35   ` Marco Elver
  2022-07-18 22:42     ` Andrey Konovalov
  0 siblings, 1 reply; 54+ messages in thread
From: Marco Elver @ 2022-06-20 13:35 UTC (permalink / raw)
  To: andrey.konovalov
  Cc: Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
	Andrey Ryabinin, kasan-dev, Peter Collingbourne,
	Evgenii Stepanov, Florian Mayer, Andrew Morton, linux-mm,
	linux-kernel, Andrey Konovalov

On Mon, Jun 13, 2022 at 10:14PM +0200, andrey.konovalov@linux.dev wrote:
> From: Andrey Konovalov <andreyknvl@google.com>
> 
> Implement storing stack depot handles for alloc/free stack traces for
> slab objects for the tag-based KASAN modes in a ring buffer.
> 
> This ring buffer is referred to as the stack ring.
> 
> On each alloc/free of a slab object, the tagged address of the object and
> the current stack trace are recorded in the stack ring.
> 
> On each bug report, if the accessed address belongs to a slab object, the
> stack ring is scanned for matching entries. The newest entries are used to
> print the alloc/free stack traces in the report: one entry for alloc and
> one for free.
> 
> The ring buffer is lock-free.
> 
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
> 
> ---
> 
> The number of entries in the stack ring is fixed in this version of the
> patch. We could either implement it as a config option or a command-line
> argument. I lean towards the latter option and will implement it in v2
> unless there are objections.

Yes, that'd be good, along with just not allocating if no stacktraces
are requested per kasan.stacktrace=.

> ---
>  mm/kasan/kasan.h       | 20 ++++++++++++++
>  mm/kasan/report_tags.c | 61 ++++++++++++++++++++++++++++++++++++++++++
>  mm/kasan/tags.c        | 30 +++++++++++++++++++++
>  3 files changed, 111 insertions(+)
> 
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index c51cea31ced0..da9a3c56ef4b 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -2,6 +2,7 @@
>  #ifndef __MM_KASAN_KASAN_H
>  #define __MM_KASAN_KASAN_H
>  
> +#include <linux/atomic.h>
>  #include <linux/kasan.h>
>  #include <linux/kasan-tags.h>
>  #include <linux/kfence.h>
> @@ -227,6 +228,25 @@ struct kasan_free_meta {
>  
>  #endif /* CONFIG_KASAN_GENERIC */
>  
> +#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
> +
> +struct kasan_stack_ring_entry {
> +	atomic64_t ptr;		/* void * */
> +	atomic64_t size;	/* size_t */
> +	atomic_t pid;		/* u32 */
> +	atomic_t stack;		/* depot_stack_handle_t */
> +	atomic_t is_free;	/* bool */

Per comments below, consider making these non-atomic.

> +};
> +
> +#define KASAN_STACK_RING_ENTRIES (32 << 10)
> +
> +struct kasan_stack_ring {
> +	atomic64_t pos;
> +	struct kasan_stack_ring_entry entries[KASAN_STACK_RING_ENTRIES];
> +};
> +
> +#endif /* CONFIG_KASAN_SW_TAGS || CONFIG_KASAN_HW_TAGS */
> +
>  #if IS_ENABLED(CONFIG_KASAN_KUNIT_TEST)
>  /* Used in KUnit-compatible KASAN tests. */
>  struct kunit_kasan_status {
> diff --git a/mm/kasan/report_tags.c b/mm/kasan/report_tags.c
> index 5cbac2cdb177..21911d1883d3 100644
> --- a/mm/kasan/report_tags.c
> +++ b/mm/kasan/report_tags.c
> @@ -4,8 +4,12 @@
>   * Copyright (c) 2020 Google, Inc.
>   */
>  
> +#include <linux/atomic.h>
> +
>  #include "kasan.h"
>  
> +extern struct kasan_stack_ring stack_ring;
> +
>  static const char *get_bug_type(struct kasan_report_info *info)
>  {
>  	/*
> @@ -24,5 +28,62 @@ static const char *get_bug_type(struct kasan_report_info *info)
>  
>  void kasan_complete_mode_report_info(struct kasan_report_info *info)
>  {
> +	u64 pos;
> +	struct kasan_stack_ring_entry *entry;
> +	void *object;
> +	u32 pid;
> +	depot_stack_handle_t stack;
> +	bool is_free;

If you switch away from atomic for kasan_stack_ring_entry members, you
can just replace the above with a 'struct kasan_stack_ring_entry' and
READ_ONCE() each entry into it below.

> +	bool alloc_found = false, free_found = false;
> +
>  	info->bug_type = get_bug_type(info);
> +
> +	if (!info->cache || !info->object)
> +		return;
> +
> +	pos = atomic64_read(&stack_ring.pos);
> +
> +	for (u64 i = pos - 1; i != pos - 1 - KASAN_STACK_RING_ENTRIES; i--) {
> +		if (alloc_found && free_found)
> +			break;
> +
> +		entry = &stack_ring.entries[i % KASAN_STACK_RING_ENTRIES];
> +
> +		/* Paired with atomic64_set_release() in save_stack_info(). */
> +		object = (void *)atomic64_read_acquire(&entry->ptr);
> +
> +		if (kasan_reset_tag(object) != info->object ||
> +		    get_tag(object) != get_tag(info->access_addr))
> +			continue;
> +
> +		pid = atomic_read(&entry->pid);
> +		stack = atomic_read(&entry->stack);
> +		is_free = atomic_read(&entry->is_free);
> +
> +		/* Try detecting if the entry was changed while being read. */
> +		smp_mb();
> +		if (object != (void *)atomic64_read(&entry->ptr))
> +			continue;

What if the object was changed, but 'ptr' is the same? It might very
well be possible to then read half of the info of the previous object,
and half of the new object (e.g. pid is old, stack is new).

Is the assumption that it is extremely unlikely that this will happen
where 1) address is the same, and 2) tags are the same? And if it does
happen, it is unlikely that there'll be a bug on that address?

It might be worth stating this in comments.

Another thing is, if there's a bug, but concurrently you have tons of
allocations/frees that change the ring's entries at a very high rate,
how likely is it that the entire ring will have been wiped before the
entry of interest is found again?

One way to guard against this is to prevent modifications of the ring
while the ring is searched. This could be implemented with a
percpu-rwsem, which is almost free for read-lockers but very expensive
for write-lockers. Insertions only acquire a read-lock, but on a bug
when searching the ring, you have to acquire a write-lock. Although you
currently take the contention hit for incrementing 'pos', so a plain
rwlock might also be ok.

It would be good to understand the probabilities of these corner cases
with some average to worst case workloads, and optimize based on that.
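
A rough sketch of the percpu-rwsem variant (names and placement are
assumptions; real code would also have to cope with ring writers running in
contexts that cannot sleep):

	#include <linux/percpu-rwsem.h>

	static DEFINE_STATIC_PERCPU_RWSEM(stack_ring_sem);

	/* Ring writers take the cheap read side around the entry update. */
	void save_stack_info(struct kmem_cache *cache, void *object,
			     gfp_t flags, bool is_free)
	{
		percpu_down_read(&stack_ring_sem);
		/* ... fill the next entry as in the patch ... */
		percpu_up_read(&stack_ring_sem);
	}

	/* The report path freezes the ring with the expensive write side. */
	void kasan_complete_mode_report_info(struct kasan_report_info *info)
	{
		percpu_down_write(&stack_ring_sem);
		/* ... scan stack_ring.entries[] as in the patch ... */
		percpu_up_write(&stack_ring_sem);
	}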

> +
> +		if (is_free) {
> +			/*
> +			 * Second free of the same object.
> +			 * Give up on trying to find the alloc entry.
> +			 */
> +			if (free_found)
> +				break;
> +
> +			info->free_track.pid = pid;
> +			info->free_track.stack = stack;
> +			free_found = true;
> +		} else {
> +			/* Second alloc of the same object. Give up. */
> +			if (alloc_found)
> +				break;
> +
> +			info->alloc_track.pid = pid;
> +			info->alloc_track.stack = stack;
> +			alloc_found = true;
> +		}
> +	}
>  }
> diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
> index 39a0481e5228..286011307695 100644
> --- a/mm/kasan/tags.c
> +++ b/mm/kasan/tags.c
> @@ -6,6 +6,7 @@
>   * Copyright (c) 2020 Google, Inc.
>   */
>  
> +#include <linux/atomic.h>
>  #include <linux/init.h>
>  #include <linux/kasan.h>
>  #include <linux/kernel.h>
> @@ -16,11 +17,40 @@
>  #include <linux/types.h>
>  
>  #include "kasan.h"
> +#include "../slab.h"
> +
> +struct kasan_stack_ring stack_ring;

This is a very large struct. Can it be allocated by memblock_alloc()
very early on only if required (kasan.stacktrace= can still switch it
off, right?).
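
A sketch of the memblock variant (assumptions: 'entries' becomes a pointer in
struct kasan_stack_ring, and an init hook like the one below exists):

	#include <linux/memblock.h>

	void __init kasan_init_stack_ring(void)
	{
		/* Skip the allocation entirely if stack traces are off. */
		if (!kasan_stack_collection_enabled())
			return;

		stack_ring.entries = memblock_alloc(
				KASAN_STACK_RING_ENTRIES *
					sizeof(stack_ring.entries[0]),
				SMP_CACHE_BYTES);
		if (!stack_ring.entries)
			pr_err("kasan: failed to allocate the stack ring\n");
	}

memblock_alloc() returns zeroed memory, so the initial 'ptr == NULL' state of
the entries is preserved.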

> +void save_stack_info(struct kmem_cache *cache, void *object,
> +			gfp_t flags, bool is_free)

static void save_stack_info(...)

> +{
> +	u64 pos;
> +	struct kasan_stack_ring_entry *entry;
> +	depot_stack_handle_t stack;
> +
> +	stack = kasan_save_stack(flags, true);
> +
> +	pos = atomic64_fetch_add(1, &stack_ring.pos);
> +	entry = &stack_ring.entries[pos % KASAN_STACK_RING_ENTRIES];
> +
> +	atomic64_set(&entry->size, cache->object_size);
> +	atomic_set(&entry->pid, current->pid);
> +	atomic_set(&entry->stack, stack);
> +	atomic_set(&entry->is_free, is_free);
> +

I don't see the point of these being atomic. You can make them normal
variables with the proper types, and use READ_ONCE() / WRITE_ONCE().

The only one where you truly need the atomic type is 'pos'.
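
What the entry could look like with plain types (a sketch of this suggestion,
not the final code; 'ptr' would still need the release/acquire pairing
discussed below):

	struct kasan_stack_ring_entry {
		void *ptr;
		size_t size;
		u32 pid;
		depot_stack_handle_t stack;
		bool is_free;
	};

	/* Writer side: */
	WRITE_ONCE(entry->size, cache->object_size);
	WRITE_ONCE(entry->pid, current->pid);

	/* Report side: */
	u32 pid = READ_ONCE(entry->pid);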

> +	/*
> +	 * Paired with atomic64_read_acquire() in
> +	 * kasan_complete_mode_report_info().
> +	 */
> +	atomic64_set_release(&entry->ptr, (s64)object);

This could be smp_store_release() and 'ptr' can be just a normal pointer.

One thing that is not entirely impossible though (vs. re-reading same
pointer but inconsistent fields I mentioned above), is if something
wants to write to the ring, but stalls for a very long time before the
release of 'ptr', giving 'pos' the chance to wrap around and another
writer writing the same entry. Something like:

  T0					| T1
  --------------------------------------+--------------------------------
  WRITE_ONCE(entry->size, ..) 		| 
  WRITE_ONCE(entry->pid, ..)		| 
					| WRITE_ONCE(entry->size, ..)
					| WRITE_ONCE(entry->pid, ..)
  					| WRITE_ONCE(entry->stack, ..)
  					| WRITE_ONCE(entry->is_free, ..)
  					| smp_store_release(entry->ptr, ...)
  WRITE_ONCE(entry->stack, ..)		|
  WRITE_ONCE(entry->is_free, ..)	|
  smp_store_release(entry->ptr, ...)	|

Which results in some mix of T0's and T1's data.

The way to solve this is to implement a try-lock using 'ptr':

	#define BUSY_PTR ((void*)1)  // non-zero because initial values are 0
	old_ptr = READ_ONCE(entry->ptr);
	if (old_ptr == BUSY_PTR)
		goto next; /* Busy slot. */
	if (!try_cmpxchg(&entry->ptr, &old_ptr, BUSY_PTR))
		goto next; /* Busy slot. */
	... set fields as before ...
	smp_store_release(&entry->ptr, object);

> +}
>  
>  void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
>  {
> +	save_stack_info(cache, object, flags, false);
>  }
>  
>  void kasan_save_free_info(struct kmem_cache *cache, void *object)
>  {
> +	save_stack_info(cache, object, GFP_NOWAIT, true);
>  }
> -- 
> 2.25.1

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH 01/32] kasan: check KASAN_NO_FREE_META in __kasan_metadata_size
  2022-06-13 20:13 ` [PATCH 01/32] kasan: check KASAN_NO_FREE_META in __kasan_metadata_size andrey.konovalov
@ 2022-06-20 13:39   ` Marco Elver
  0 siblings, 0 replies; 54+ messages in thread
From: Marco Elver @ 2022-06-20 13:39 UTC (permalink / raw)
  To: andrey.konovalov
  Cc: Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
	Andrey Ryabinin, kasan-dev, Peter Collingbourne,
	Evgenii Stepanov, Florian Mayer, Andrew Morton, linux-mm,
	linux-kernel, Andrey Konovalov

On Mon, 13 Jun 2022 at 22:15, <andrey.konovalov@linux.dev> wrote:
>
> From: Andrey Konovalov <andreyknvl@google.com>
>
> __kasan_metadata_size() calculates the size of the redzone for objects
> in a slab cache.
>
> When accounting for presence of kasan_free_meta in the redzone, this
> function only compares free_meta_offset with 0. But free_meta_offset could
> also be equal to KASAN_NO_FREE_META, which indicates that kasan_free_meta
> is not present at all.
>
> Add a comparison with KASAN_NO_FREE_META into __kasan_metadata_size().
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>

Reviewed-by: Marco Elver <elver@google.com>


> ---
>
> This is a minor fix that only affects slub_debug runs, so it is probably
> not worth backporting.
> ---
>  mm/kasan/common.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index c40c0e7b3b5f..968d2365d8c1 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -223,8 +223,9 @@ size_t __kasan_metadata_size(struct kmem_cache *cache)
>                 return 0;
>         return (cache->kasan_info.alloc_meta_offset ?
>                 sizeof(struct kasan_alloc_meta) : 0) +
> -               (cache->kasan_info.free_meta_offset ?
> -               sizeof(struct kasan_free_meta) : 0);
> +               ((cache->kasan_info.free_meta_offset &&
> +                 cache->kasan_info.free_meta_offset != KASAN_NO_FREE_META) ?
> +                sizeof(struct kasan_free_meta) : 0);
>  }
>
>  struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache,
> --
> 2.25.1
>

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH 02/32] kasan: rename kasan_set_*_info to kasan_save_*_info
  2022-06-13 20:13 ` [PATCH 02/32] kasan: rename kasan_set_*_info to kasan_save_*_info andrey.konovalov
@ 2022-06-20 13:39   ` Marco Elver
  0 siblings, 0 replies; 54+ messages in thread
From: Marco Elver @ 2022-06-20 13:39 UTC (permalink / raw)
  To: andrey.konovalov
  Cc: Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
	Andrey Ryabinin, kasan-dev, Peter Collingbourne,
	Evgenii Stepanov, Florian Mayer, Andrew Morton, linux-mm,
	linux-kernel, Andrey Konovalov

On Mon, 13 Jun 2022 at 22:15, <andrey.konovalov@linux.dev> wrote:
>
> From: Andrey Konovalov <andreyknvl@google.com>
>
> Rename set_alloc_info() and kasan_set_free_info() to save_alloc_info()
> and kasan_save_free_info(). The new names make more sense.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>

Reviewed-by: Marco Elver <elver@google.com>


> ---
>  mm/kasan/common.c  | 8 ++++----
>  mm/kasan/generic.c | 2 +-
>  mm/kasan/kasan.h   | 2 +-
>  mm/kasan/tags.c    | 2 +-
>  4 files changed, 7 insertions(+), 7 deletions(-)
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 968d2365d8c1..753775b894b6 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -364,7 +364,7 @@ static inline bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
>                 return false;
>
>         if (kasan_stack_collection_enabled())
> -               kasan_set_free_info(cache, object, tag);
> +               kasan_save_free_info(cache, object, tag);
>
>         return kasan_quarantine_put(cache, object);
>  }
> @@ -423,7 +423,7 @@ void __kasan_slab_free_mempool(void *ptr, unsigned long ip)
>         }
>  }
>
> -static void set_alloc_info(struct kmem_cache *cache, void *object,
> +static void save_alloc_info(struct kmem_cache *cache, void *object,
>                                 gfp_t flags, bool is_kmalloc)
>  {
>         struct kasan_alloc_meta *alloc_meta;
> @@ -467,7 +467,7 @@ void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
>
>         /* Save alloc info (if possible) for non-kmalloc() allocations. */
>         if (kasan_stack_collection_enabled())
> -               set_alloc_info(cache, (void *)object, flags, false);
> +               save_alloc_info(cache, (void *)object, flags, false);
>
>         return tagged_object;
>  }
> @@ -513,7 +513,7 @@ static inline void *____kasan_kmalloc(struct kmem_cache *cache,
>          * This also rewrites the alloc info when called from kasan_krealloc().
>          */
>         if (kasan_stack_collection_enabled())
> -               set_alloc_info(cache, (void *)object, flags, true);
> +               save_alloc_info(cache, (void *)object, flags, true);
>
>         /* Keep the tag that was set by kasan_slab_alloc(). */
>         return (void *)object;
> diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
> index 437fcc7e77cf..03a3770cfeae 100644
> --- a/mm/kasan/generic.c
> +++ b/mm/kasan/generic.c
> @@ -358,7 +358,7 @@ void kasan_record_aux_stack_noalloc(void *addr)
>         return __kasan_record_aux_stack(addr, false);
>  }
>
> -void kasan_set_free_info(struct kmem_cache *cache,
> +void kasan_save_free_info(struct kmem_cache *cache,
>                                 void *object, u8 tag)
>  {
>         struct kasan_free_meta *free_meta;
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 610d60d6e5b8..6df8d7b01073 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -284,7 +284,7 @@ struct slab *kasan_addr_to_slab(const void *addr);
>
>  depot_stack_handle_t kasan_save_stack(gfp_t flags, bool can_alloc);
>  void kasan_set_track(struct kasan_track *track, gfp_t flags);
> -void kasan_set_free_info(struct kmem_cache *cache, void *object, u8 tag);
> +void kasan_save_free_info(struct kmem_cache *cache, void *object, u8 tag);
>  struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
>                                 void *object, u8 tag);
>
> diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
> index 8f48b9502a17..b453a353bc86 100644
> --- a/mm/kasan/tags.c
> +++ b/mm/kasan/tags.c
> @@ -17,7 +17,7 @@
>
>  #include "kasan.h"
>
> -void kasan_set_free_info(struct kmem_cache *cache,
> +void kasan_save_free_info(struct kmem_cache *cache,
>                                 void *object, u8 tag)
>  {
>         struct kasan_alloc_meta *alloc_meta;
> --
> 2.25.1
>

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH 03/32] kasan: move is_kmalloc check out of save_alloc_info
  2022-06-13 20:13 ` [PATCH 03/32] kasan: move is_kmalloc check out of save_alloc_info andrey.konovalov
@ 2022-06-20 13:39   ` Marco Elver
  0 siblings, 0 replies; 54+ messages in thread
From: Marco Elver @ 2022-06-20 13:39 UTC (permalink / raw)
  To: andrey.konovalov
  Cc: Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
	Andrey Ryabinin, kasan-dev, Peter Collingbourne,
	Evgenii Stepanov, Florian Mayer, Andrew Morton, linux-mm,
	linux-kernel, Andrey Konovalov

On Mon, 13 Jun 2022 at 22:15, <andrey.konovalov@linux.dev> wrote:
>
> From: Andrey Konovalov <andreyknvl@google.com>
>
> Move kasan_info.is_kmalloc check out of save_alloc_info().
>
> This is a preparatory change that simplifies the following patches
> in this series.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>

Reviewed-by: Marco Elver <elver@google.com>


> ---
>  mm/kasan/common.c | 15 +++++----------
>  1 file changed, 5 insertions(+), 10 deletions(-)
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 753775b894b6..a6107e8375e0 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -423,15 +423,10 @@ void __kasan_slab_free_mempool(void *ptr, unsigned long ip)
>         }
>  }
>
> -static void save_alloc_info(struct kmem_cache *cache, void *object,
> -                               gfp_t flags, bool is_kmalloc)
> +static void save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
>  {
>         struct kasan_alloc_meta *alloc_meta;
>
> -       /* Don't save alloc info for kmalloc caches in kasan_slab_alloc(). */
> -       if (cache->kasan_info.is_kmalloc && !is_kmalloc)
> -               return;
> -
>         alloc_meta = kasan_get_alloc_meta(cache, object);
>         if (alloc_meta)
>                 kasan_set_track(&alloc_meta->alloc_track, flags);
> @@ -466,8 +461,8 @@ void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
>         kasan_unpoison(tagged_object, cache->object_size, init);
>
>         /* Save alloc info (if possible) for non-kmalloc() allocations. */
> -       if (kasan_stack_collection_enabled())
> -               save_alloc_info(cache, (void *)object, flags, false);
> +       if (kasan_stack_collection_enabled() && !cache->kasan_info.is_kmalloc)
> +               save_alloc_info(cache, (void *)object, flags);
>
>         return tagged_object;
>  }
> @@ -512,8 +507,8 @@ static inline void *____kasan_kmalloc(struct kmem_cache *cache,
>          * Save alloc info (if possible) for kmalloc() allocations.
>          * This also rewrites the alloc info when called from kasan_krealloc().
>          */
> -       if (kasan_stack_collection_enabled())
> -               save_alloc_info(cache, (void *)object, flags, true);
> +       if (kasan_stack_collection_enabled() && cache->kasan_info.is_kmalloc)
> +               save_alloc_info(cache, (void *)object, flags);
>
>         /* Keep the tag that was set by kasan_slab_alloc(). */
>         return (void *)object;
> --
> 2.25.1
>

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH 04/32] kasan: split save_alloc_info implementations
  2022-06-13 20:13 ` [PATCH 04/32] kasan: split save_alloc_info implementations andrey.konovalov
@ 2022-06-20 13:39   ` Marco Elver
  0 siblings, 0 replies; 54+ messages in thread
From: Marco Elver @ 2022-06-20 13:39 UTC (permalink / raw)
  To: andrey.konovalov
  Cc: Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
	Andrey Ryabinin, kasan-dev, Peter Collingbourne,
	Evgenii Stepanov, Florian Mayer, Andrew Morton, linux-mm,
	linux-kernel, Andrey Konovalov

On Mon, 13 Jun 2022 at 22:15, <andrey.konovalov@linux.dev> wrote:
>
> From: Andrey Konovalov <andreyknvl@google.com>
>
> Provide standalone implementations of save_alloc_info() for the Generic
> and tag-based modes.
>
> For now, the implementations are the same, but they will diverge later
> in the series.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>

Reviewed-by: Marco Elver <elver@google.com>


> ---
>  mm/kasan/common.c  | 13 ++-----------
>  mm/kasan/generic.c |  9 +++++++++
>  mm/kasan/kasan.h   |  1 +
>  mm/kasan/tags.c    |  9 +++++++++
>  4 files changed, 21 insertions(+), 11 deletions(-)
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index a6107e8375e0..2848c7a2402a 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -423,15 +423,6 @@ void __kasan_slab_free_mempool(void *ptr, unsigned long ip)
>         }
>  }
>
> -static void save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
> -{
> -       struct kasan_alloc_meta *alloc_meta;
> -
> -       alloc_meta = kasan_get_alloc_meta(cache, object);
> -       if (alloc_meta)
> -               kasan_set_track(&alloc_meta->alloc_track, flags);
> -}
> -
>  void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
>                                         void *object, gfp_t flags, bool init)
>  {
> @@ -462,7 +453,7 @@ void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
>
>         /* Save alloc info (if possible) for non-kmalloc() allocations. */
>         if (kasan_stack_collection_enabled() && !cache->kasan_info.is_kmalloc)
> -               save_alloc_info(cache, (void *)object, flags);
> +               kasan_save_alloc_info(cache, (void *)object, flags);
>
>         return tagged_object;
>  }
> @@ -508,7 +499,7 @@ static inline void *____kasan_kmalloc(struct kmem_cache *cache,
>          * This also rewrites the alloc info when called from kasan_krealloc().
>          */
>         if (kasan_stack_collection_enabled() && cache->kasan_info.is_kmalloc)
> -               save_alloc_info(cache, (void *)object, flags);
> +               kasan_save_alloc_info(cache, (void *)object, flags);
>
>         /* Keep the tag that was set by kasan_slab_alloc(). */
>         return (void *)object;
> diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
> index 03a3770cfeae..98c451a3b01f 100644
> --- a/mm/kasan/generic.c
> +++ b/mm/kasan/generic.c
> @@ -358,6 +358,15 @@ void kasan_record_aux_stack_noalloc(void *addr)
>         return __kasan_record_aux_stack(addr, false);
>  }
>
> +void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
> +{
> +       struct kasan_alloc_meta *alloc_meta;
> +
> +       alloc_meta = kasan_get_alloc_meta(cache, object);
> +       if (alloc_meta)
> +               kasan_set_track(&alloc_meta->alloc_track, flags);
> +}
> +
>  void kasan_save_free_info(struct kmem_cache *cache,
>                                 void *object, u8 tag)
>  {
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 6df8d7b01073..610057e651d2 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -284,6 +284,7 @@ struct slab *kasan_addr_to_slab(const void *addr);
>
>  depot_stack_handle_t kasan_save_stack(gfp_t flags, bool can_alloc);
>  void kasan_set_track(struct kasan_track *track, gfp_t flags);
> +void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags);
>  void kasan_save_free_info(struct kmem_cache *cache, void *object, u8 tag);
>  struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
>                                 void *object, u8 tag);
> diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
> index b453a353bc86..1ba3c8399f72 100644
> --- a/mm/kasan/tags.c
> +++ b/mm/kasan/tags.c
> @@ -17,6 +17,15 @@
>
>  #include "kasan.h"
>
> +void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
> +{
> +       struct kasan_alloc_meta *alloc_meta;
> +
> +       alloc_meta = kasan_get_alloc_meta(cache, object);
> +       if (alloc_meta)
> +               kasan_set_track(&alloc_meta->alloc_track, flags);
> +}
> +
>  void kasan_save_free_info(struct kmem_cache *cache,
>                                 void *object, u8 tag)
>  {
> --
> 2.25.1
>

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH 05/32] kasan: drop CONFIG_KASAN_TAGS_IDENTIFY
  2022-06-13 20:13 ` [PATCH 05/32] kasan: drop CONFIG_KASAN_TAGS_IDENTIFY andrey.konovalov
@ 2022-06-20 13:39   ` Marco Elver
  0 siblings, 0 replies; 54+ messages in thread
From: Marco Elver @ 2022-06-20 13:39 UTC (permalink / raw)
  To: andrey.konovalov
  Cc: Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
	Andrey Ryabinin, kasan-dev, Peter Collingbourne,
	Evgenii Stepanov, Florian Mayer, Andrew Morton, linux-mm,
	linux-kernel, Andrey Konovalov

On Mon, 13 Jun 2022 at 22:15, <andrey.konovalov@linux.dev> wrote:
>
> From: Andrey Konovalov <andreyknvl@google.com>
>
> Drop CONFIG_KASAN_TAGS_IDENTIFY and related code to simplify making
> changes to the reporting code.
>
> The dropped functionality will be restored in the following patches in
> this series.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>

Reviewed-by: Marco Elver <elver@google.com>


> ---
>  lib/Kconfig.kasan      |  8 --------
>  mm/kasan/kasan.h       | 12 +-----------
>  mm/kasan/report_tags.c | 28 ----------------------------
>  mm/kasan/tags.c        | 21 ++-------------------
>  4 files changed, 3 insertions(+), 66 deletions(-)
>
> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
> index f0973da583e0..ca09b1cf8ee9 100644
> --- a/lib/Kconfig.kasan
> +++ b/lib/Kconfig.kasan
> @@ -167,14 +167,6 @@ config KASAN_STACK
>           as well, as it adds inline-style instrumentation that is run
>           unconditionally.
>
> -config KASAN_TAGS_IDENTIFY
> -       bool "Memory corruption type identification"
> -       depends on KASAN_SW_TAGS || KASAN_HW_TAGS
> -       help
> -         Enables best-effort identification of the bug types (use-after-free
> -         or out-of-bounds) at the cost of increased memory consumption.
> -         Only applicable for the tag-based KASAN modes.
> -
>  config KASAN_VMALLOC
>         bool "Check accesses to vmalloc allocations"
>         depends on HAVE_ARCH_KASAN_VMALLOC
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 610057e651d2..aa6b43936f8d 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -168,23 +168,13 @@ struct kasan_track {
>         depot_stack_handle_t stack;
>  };
>
> -#if defined(CONFIG_KASAN_TAGS_IDENTIFY) && defined(CONFIG_KASAN_SW_TAGS)
> -#define KASAN_NR_FREE_STACKS 5
> -#else
> -#define KASAN_NR_FREE_STACKS 1
> -#endif
> -
>  struct kasan_alloc_meta {
>         struct kasan_track alloc_track;
>         /* Generic mode stores free track in kasan_free_meta. */
>  #ifdef CONFIG_KASAN_GENERIC
>         depot_stack_handle_t aux_stack[2];
>  #else
> -       struct kasan_track free_track[KASAN_NR_FREE_STACKS];
> -#endif
> -#ifdef CONFIG_KASAN_TAGS_IDENTIFY
> -       u8 free_pointer_tag[KASAN_NR_FREE_STACKS];
> -       u8 free_track_idx;
> +       struct kasan_track free_track;
>  #endif
>  };
>
> diff --git a/mm/kasan/report_tags.c b/mm/kasan/report_tags.c
> index e25d2166e813..35cf3cae4aa4 100644
> --- a/mm/kasan/report_tags.c
> +++ b/mm/kasan/report_tags.c
> @@ -5,37 +5,9 @@
>   */
>
>  #include "kasan.h"
> -#include "../slab.h"
>
>  const char *kasan_get_bug_type(struct kasan_report_info *info)
>  {
> -#ifdef CONFIG_KASAN_TAGS_IDENTIFY
> -       struct kasan_alloc_meta *alloc_meta;
> -       struct kmem_cache *cache;
> -       struct slab *slab;
> -       const void *addr;
> -       void *object;
> -       u8 tag;
> -       int i;
> -
> -       tag = get_tag(info->access_addr);
> -       addr = kasan_reset_tag(info->access_addr);
> -       slab = kasan_addr_to_slab(addr);
> -       if (slab) {
> -               cache = slab->slab_cache;
> -               object = nearest_obj(cache, slab, (void *)addr);
> -               alloc_meta = kasan_get_alloc_meta(cache, object);
> -
> -               if (alloc_meta) {
> -                       for (i = 0; i < KASAN_NR_FREE_STACKS; i++) {
> -                               if (alloc_meta->free_pointer_tag[i] == tag)
> -                                       return "use-after-free";
> -                       }
> -               }
> -               return "out-of-bounds";
> -       }
> -#endif
> -
>         /*
>          * If access_size is a negative number, then it has reason to be
>          * defined as out-of-bounds bug type.
> diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
> index 1ba3c8399f72..e0e5de8ce834 100644
> --- a/mm/kasan/tags.c
> +++ b/mm/kasan/tags.c
> @@ -30,39 +30,22 @@ void kasan_save_free_info(struct kmem_cache *cache,
>                                 void *object, u8 tag)
>  {
>         struct kasan_alloc_meta *alloc_meta;
> -       u8 idx = 0;
>
>         alloc_meta = kasan_get_alloc_meta(cache, object);
>         if (!alloc_meta)
>                 return;
>
> -#ifdef CONFIG_KASAN_TAGS_IDENTIFY
> -       idx = alloc_meta->free_track_idx;
> -       alloc_meta->free_pointer_tag[idx] = tag;
> -       alloc_meta->free_track_idx = (idx + 1) % KASAN_NR_FREE_STACKS;
> -#endif
> -
> -       kasan_set_track(&alloc_meta->free_track[idx], GFP_NOWAIT);
> +       kasan_set_track(&alloc_meta->free_track, GFP_NOWAIT);
>  }
>
>  struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
>                                 void *object, u8 tag)
>  {
>         struct kasan_alloc_meta *alloc_meta;
> -       int i = 0;
>
>         alloc_meta = kasan_get_alloc_meta(cache, object);
>         if (!alloc_meta)
>                 return NULL;
>
> -#ifdef CONFIG_KASAN_TAGS_IDENTIFY
> -       for (i = 0; i < KASAN_NR_FREE_STACKS; i++) {
> -               if (alloc_meta->free_pointer_tag[i] == tag)
> -                       break;
> -       }
> -       if (i == KASAN_NR_FREE_STACKS)
> -               i = alloc_meta->free_track_idx;
> -#endif
> -
> -       return &alloc_meta->free_track[i];
> +       return &alloc_meta->free_track;
>  }
> --
> 2.25.1
>

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH 07/32] kasan: introduce kasan_get_alloc_track
  2022-06-13 20:13 ` [PATCH 07/32] kasan: introduce kasan_get_alloc_track andrey.konovalov
@ 2022-06-20 13:39   ` Marco Elver
  0 siblings, 0 replies; 54+ messages in thread
From: Marco Elver @ 2022-06-20 13:39 UTC (permalink / raw)
  To: andrey.konovalov
  Cc: Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
	Andrey Ryabinin, kasan-dev, Peter Collingbourne,
	Evgenii Stepanov, Florian Mayer, Andrew Morton, linux-mm,
	linux-kernel, Andrey Konovalov

On Mon, 13 Jun 2022 at 22:16, <andrey.konovalov@linux.dev> wrote:
>
> From: Andrey Konovalov <andreyknvl@google.com>
>
> Add a kasan_get_alloc_track() helper that fetches alloc_track for a slab
> object and use this helper in the common reporting code.
>
> For now, the implementations of this helper are the same for the Generic
> and tag-based modes, but they will diverge later in the series.
>
> This change hides references to alloc_meta from the common reporting code.
> This is desired as only the Generic mode will be using per-object metadata
> after this series.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>

Reviewed-by: Marco Elver <elver@google.com>


> ---
>  mm/kasan/generic.c | 14 +++++++++++++-
>  mm/kasan/kasan.h   |  4 +++-
>  mm/kasan/report.c  |  8 ++++----
>  mm/kasan/tags.c    | 14 +++++++++++++-
>  4 files changed, 33 insertions(+), 7 deletions(-)
>
> diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
> index 98c451a3b01f..f212b9ae57b5 100644
> --- a/mm/kasan/generic.c
> +++ b/mm/kasan/generic.c
> @@ -381,8 +381,20 @@ void kasan_save_free_info(struct kmem_cache *cache,
>         *(u8 *)kasan_mem_to_shadow(object) = KASAN_SLAB_FREETRACK;
>  }
>
> +struct kasan_track *kasan_get_alloc_track(struct kmem_cache *cache,
> +                                               void *object)
> +{
> +       struct kasan_alloc_meta *alloc_meta;
> +
> +       alloc_meta = kasan_get_alloc_meta(cache, object);
> +       if (!alloc_meta)
> +               return NULL;
> +
> +       return &alloc_meta->alloc_track;
> +}
> +
>  struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
> -                               void *object, u8 tag)
> +                                               void *object, u8 tag)
>  {
>         if (*(u8 *)kasan_mem_to_shadow(object) != KASAN_SLAB_FREETRACK)
>                 return NULL;
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index bcea5ed15631..4005da62a1e1 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -282,8 +282,10 @@ depot_stack_handle_t kasan_save_stack(gfp_t flags, bool can_alloc);
>  void kasan_set_track(struct kasan_track *track, gfp_t flags);
>  void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags);
>  void kasan_save_free_info(struct kmem_cache *cache, void *object, u8 tag);
> +struct kasan_track *kasan_get_alloc_track(struct kmem_cache *cache,
> +                                               void *object);
>  struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
> -                               void *object, u8 tag);
> +                                               void *object, u8 tag);
>
>  #if defined(CONFIG_KASAN_GENERIC) && \
>         (defined(CONFIG_SLAB) || defined(CONFIG_SLUB))
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> index 35dd8aeb115c..f951fd39db74 100644
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -251,12 +251,12 @@ static void describe_object_addr(struct kmem_cache *cache, void *object,
>  static void describe_object_stacks(struct kmem_cache *cache, void *object,
>                                         const void *addr, u8 tag)
>  {
> -       struct kasan_alloc_meta *alloc_meta;
> +       struct kasan_track *alloc_track;
>         struct kasan_track *free_track;
>
> -       alloc_meta = kasan_get_alloc_meta(cache, object);
> -       if (alloc_meta) {
> -               print_track(&alloc_meta->alloc_track, "Allocated");
> +       alloc_track = kasan_get_alloc_track(cache, object);
> +       if (alloc_track) {
> +               print_track(alloc_track, "Allocated");
>                 pr_err("\n");
>         }
>
> diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
> index e0e5de8ce834..7b1fc8e7c99c 100644
> --- a/mm/kasan/tags.c
> +++ b/mm/kasan/tags.c
> @@ -38,8 +38,20 @@ void kasan_save_free_info(struct kmem_cache *cache,
>         kasan_set_track(&alloc_meta->free_track, GFP_NOWAIT);
>  }
>
> +struct kasan_track *kasan_get_alloc_track(struct kmem_cache *cache,
> +                                               void *object)
> +{
> +       struct kasan_alloc_meta *alloc_meta;
> +
> +       alloc_meta = kasan_get_alloc_meta(cache, object);
> +       if (!alloc_meta)
> +               return NULL;
> +
> +       return &alloc_meta->alloc_track;
> +}
> +
>  struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
> -                               void *object, u8 tag)
> +                                               void *object, u8 tag)
>  {
>         struct kasan_alloc_meta *alloc_meta;
>
> --
> 2.25.1
>

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH 08/32] kasan: introduce kasan_init_object_meta
  2022-06-13 20:13 ` [PATCH 08/32] kasan: introduce kasan_init_object_meta andrey.konovalov
@ 2022-06-20 13:39   ` Marco Elver
  0 siblings, 0 replies; 54+ messages in thread
From: Marco Elver @ 2022-06-20 13:39 UTC (permalink / raw)
  To: andrey.konovalov
  Cc: Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
	Andrey Ryabinin, kasan-dev, Peter Collingbourne,
	Evgenii Stepanov, Florian Mayer, Andrew Morton, linux-mm,
	linux-kernel, Andrey Konovalov

On Mon, 13 Jun 2022 at 22:16, <andrey.konovalov@linux.dev> wrote:
>
> From: Andrey Konovalov <andreyknvl@google.com>
>
> Add a kasan_init_object_meta() helper that initializes metadata for a slab
> object and use it in the common code.
>
> For now, the implementations of this helper are the same for the Generic
> and tag-based modes, but they will diverge later in the series.
>
> This change hides references to alloc_meta from the common code. This is
> desired as only the Generic mode will be using per-object metadata after
> this series.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>

Reviewed-by: Marco Elver <elver@google.com>


> ---
>  mm/kasan/common.c  | 10 +++-------
>  mm/kasan/generic.c |  9 +++++++++
>  mm/kasan/kasan.h   |  2 ++
>  mm/kasan/tags.c    |  9 +++++++++
>  4 files changed, 23 insertions(+), 7 deletions(-)
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 2848c7a2402a..f0ee1c1b4b3c 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -312,13 +312,9 @@ static inline u8 assign_tag(struct kmem_cache *cache,
>  void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
>                                                 const void *object)
>  {
> -       struct kasan_alloc_meta *alloc_meta;
> -
> -       if (kasan_stack_collection_enabled()) {
> -               alloc_meta = kasan_get_alloc_meta(cache, object);
> -               if (alloc_meta)
> -                       __memset(alloc_meta, 0, sizeof(*alloc_meta));
> -       }
> +       /* Initialize per-object metadata if it is present. */
> +       if (kasan_stack_collection_enabled())
> +               kasan_init_object_meta(cache, object);
>
>         /* Tag is ignored in set_tag() without CONFIG_KASAN_SW/HW_TAGS */
>         object = set_tag(object, assign_tag(cache, object, true));
> diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
> index f212b9ae57b5..5462ddbc21e6 100644
> --- a/mm/kasan/generic.c
> +++ b/mm/kasan/generic.c
> @@ -328,6 +328,15 @@ DEFINE_ASAN_SET_SHADOW(f3);
>  DEFINE_ASAN_SET_SHADOW(f5);
>  DEFINE_ASAN_SET_SHADOW(f8);
>
> +void kasan_init_object_meta(struct kmem_cache *cache, const void *object)
> +{
> +       struct kasan_alloc_meta *alloc_meta;
> +
> +       alloc_meta = kasan_get_alloc_meta(cache, object);
> +       if (alloc_meta)
> +               __memset(alloc_meta, 0, sizeof(*alloc_meta));
> +}
> +
>  static void __kasan_record_aux_stack(void *addr, bool can_alloc)
>  {
>         struct slab *slab = kasan_addr_to_slab(addr);
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 4005da62a1e1..751c3b17749a 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -278,6 +278,8 @@ void kasan_report_invalid_free(void *object, unsigned long ip);
>  struct page *kasan_addr_to_page(const void *addr);
>  struct slab *kasan_addr_to_slab(const void *addr);
>
> +void kasan_init_object_meta(struct kmem_cache *cache, const void *object);
> +
>  depot_stack_handle_t kasan_save_stack(gfp_t flags, bool can_alloc);
>  void kasan_set_track(struct kasan_track *track, gfp_t flags);
>  void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags);
> diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
> index 7b1fc8e7c99c..2e200969a4b8 100644
> --- a/mm/kasan/tags.c
> +++ b/mm/kasan/tags.c
> @@ -17,6 +17,15 @@
>
>  #include "kasan.h"
>
> +void kasan_init_object_meta(struct kmem_cache *cache, const void *object)
> +{
> +       struct kasan_alloc_meta *alloc_meta;
> +
> +       alloc_meta = kasan_get_alloc_meta(cache, object);
> +       if (alloc_meta)
> +               __memset(alloc_meta, 0, sizeof(*alloc_meta));
> +}
> +
>  void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
>  {
>         struct kasan_alloc_meta *alloc_meta;
> --
> 2.25.1
>

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH 09/32] kasan: clear metadata functions for tag-based modes
  2022-06-13 20:14 ` [PATCH 09/32] kasan: clear metadata functions for tag-based modes andrey.konovalov
@ 2022-06-20 13:39   ` Marco Elver
  0 siblings, 0 replies; 54+ messages in thread
From: Marco Elver @ 2022-06-20 13:39 UTC (permalink / raw)
  To: andrey.konovalov
  Cc: Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
	Andrey Ryabinin, kasan-dev, Peter Collingbourne,
	Evgenii Stepanov, Florian Mayer, Andrew Morton, linux-mm,
	linux-kernel, Andrey Konovalov

On Mon, 13 Jun 2022 at 22:16, <andrey.konovalov@linux.dev> wrote:
>
> From: Andrey Konovalov <andreyknvl@google.com>
>
> Remove implementations of the metadata-related functions for the tag-based
> modes.
>
> The following patches in the series will provide alternative
> implementations.
>
> As of this patch, the tag-based modes no longer collect alloc and free
> stack traces. This functionality will be restored later in the series.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>

Reviewed-by: Marco Elver <elver@google.com>


> ---
>  mm/kasan/tags.c | 33 ++-------------------------------
>  1 file changed, 2 insertions(+), 31 deletions(-)
>
> diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
> index 2e200969a4b8..f11c89505c77 100644
> --- a/mm/kasan/tags.c
> +++ b/mm/kasan/tags.c
> @@ -19,54 +19,25 @@
>
>  void kasan_init_object_meta(struct kmem_cache *cache, const void *object)
>  {
> -       struct kasan_alloc_meta *alloc_meta;
> -
> -       alloc_meta = kasan_get_alloc_meta(cache, object);
> -       if (alloc_meta)
> -               __memset(alloc_meta, 0, sizeof(*alloc_meta));
>  }
>
>  void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
>  {
> -       struct kasan_alloc_meta *alloc_meta;
> -
> -       alloc_meta = kasan_get_alloc_meta(cache, object);
> -       if (alloc_meta)
> -               kasan_set_track(&alloc_meta->alloc_track, flags);
>  }
>
>  void kasan_save_free_info(struct kmem_cache *cache,
>                                 void *object, u8 tag)
>  {
> -       struct kasan_alloc_meta *alloc_meta;
> -
> -       alloc_meta = kasan_get_alloc_meta(cache, object);
> -       if (!alloc_meta)
> -               return;
> -
> -       kasan_set_track(&alloc_meta->free_track, GFP_NOWAIT);
>  }
>
>  struct kasan_track *kasan_get_alloc_track(struct kmem_cache *cache,
>                                                 void *object)
>  {
> -       struct kasan_alloc_meta *alloc_meta;
> -
> -       alloc_meta = kasan_get_alloc_meta(cache, object);
> -       if (!alloc_meta)
> -               return NULL;
> -
> -       return &alloc_meta->alloc_track;
> +       return NULL;
>  }
>
>  struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
>                                                 void *object, u8 tag)
>  {
> -       struct kasan_alloc_meta *alloc_meta;
> -
> -       alloc_meta = kasan_get_alloc_meta(cache, object);
> -       if (!alloc_meta)
> -               return NULL;
> -
> -       return &alloc_meta->free_track;
> +       return NULL;
>  }
> --
> 2.25.1
>

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH 21/32] kasan: simplify invalid-free reporting
  2022-06-13 20:14 ` [PATCH 21/32] kasan: simplify invalid-free reporting andrey.konovalov
@ 2022-06-21  7:17   ` Kuan-Ying Lee
  2022-07-12 20:38     ` Andrey Konovalov
  0 siblings, 1 reply; 54+ messages in thread
From: Kuan-Ying Lee @ 2022-06-21  7:17 UTC (permalink / raw)
  To: andrey.konovalov, Marco Elver, Alexander Potapenko
  Cc: Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Peter Collingbourne, Evgenii Stepanov, Florian Mayer,
	Andrew Morton, linux-mm, linux-kernel, Andrey Konovalov

On Tue, 2022-06-14 at 04:14 +0800, andrey.konovalov@linux.dev wrote:
> From: Andrey Konovalov <andreyknvl@google.com>
> 
> Right now, KASAN uses the kasan_report_type enum to describe report
> types.
> 
> As this enum only has two options, replace it with a bool variable.
> 
> Also, unify printing report header for invalid-free and other bug types
> in print_error_description().
> 
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
> ---
>  mm/kasan/kasan.h  |  7 +------
>  mm/kasan/report.c | 16 +++++++---------
>  2 files changed, 8 insertions(+), 15 deletions(-)
> 
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index e8329935fbfb..f696d50b09fb 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -146,16 +146,11 @@ static inline bool kasan_requires_meta(void)
>  #define META_MEM_BYTES_PER_ROW (META_BYTES_PER_ROW * KASAN_GRANULE_SIZE)
>  #define META_ROWS_AROUND_ADDR 2
> 
> -enum kasan_report_type {
> -       KASAN_REPORT_ACCESS,
> -       KASAN_REPORT_INVALID_FREE,
> -};
> -
>  struct kasan_report_info {
> -       enum kasan_report_type type;
>         void *access_addr;
>         void *first_bad_addr;
>         size_t access_size;
> +       bool is_free;
>         bool is_write;
>         unsigned long ip;
>  };
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> index f951fd39db74..7269b6249488 100644
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -175,14 +175,12 @@ static void end_report(unsigned long *flags, void *addr)
> 

Hi Andrey,

Do we need to distinguish the "double free" case from "invalid free",
or should we just print "double-free or invalid-free"?

I sent a patch[1] last week to separate the double-free case from
invalid-free, and I saw it has been merged into the akpm tree.

[1] 
https://lore.kernel.org/linux-mm/20220615062219.22618-1-Kuan-Ying.Lee@mediatek.com/

Thanks,
Kuan-Ying Lee

>  static void print_error_description(struct kasan_report_info *info)
>  {
> -       if (info->type == KASAN_REPORT_INVALID_FREE) {
> -               pr_err("BUG: KASAN: double-free or invalid-free in %pS\n",
> -                      (void *)info->ip);
> -               return;
> -       }
> +       const char *bug_type = info->is_free ?
> +               "double-free or invalid-free" : kasan_get_bug_type(info);
> 
> -       pr_err("BUG: KASAN: %s in %pS\n",
> -               kasan_get_bug_type(info), (void *)info->ip);
> +       pr_err("BUG: KASAN: %s in %pS\n", bug_type, (void *)info->ip);
> +       if (info->is_free)
> +               return;
>         if (info->access_size)
>                 pr_err("%s of size %zu at addr %px by task %s/%d\n",
>                         info->is_write ? "Write" : "Read", info->access_size,
> @@ -435,11 +433,11 @@ void kasan_report_invalid_free(void *ptr, unsigned long ip)
> 
>         start_report(&flags, true);
> 
> -       info.type = KASAN_REPORT_INVALID_FREE;
>         info.access_addr = ptr;
>         info.first_bad_addr = kasan_reset_tag(ptr);
>         info.access_size = 0;
>         info.is_write = false;
> +       info.is_free = true;
>         info.ip = ip;
> 
>         print_report(&info);
> @@ -468,11 +466,11 @@ bool kasan_report(unsigned long addr, size_t size, bool is_write,
> 
>         start_report(&irq_flags, true);
> 
> -       info.type = KASAN_REPORT_ACCESS;
>         info.access_addr = ptr;
>         info.first_bad_addr = kasan_find_first_bad_addr(ptr, size);
>         info.access_size = size;
>         info.is_write = is_write;
> +       info.is_free = false;
>         info.ip = ip;
> 
>         print_report(&info);
> --
> 2.25.1
> 
> 


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH 21/32] kasan: simplify invalid-free reporting
  2022-06-21  7:17   ` Kuan-Ying Lee
@ 2022-07-12 20:38     ` Andrey Konovalov
  0 siblings, 0 replies; 54+ messages in thread
From: Andrey Konovalov @ 2022-07-12 20:38 UTC (permalink / raw)
  To: Kuan-Ying Lee
  Cc: andrey.konovalov, Marco Elver, Alexander Potapenko,
	Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Peter Collingbourne,
	Evgenii Stepanov, Florian Mayer, Andrew Morton, linux-mm,
	linux-kernel, Andrey Konovalov

On Tue, Jun 21, 2022 at 9:17 AM Kuan-Ying Lee
<Kuan-Ying.Lee@mediatek.com> wrote:
>
> On Tue, 2022-06-14 at 04:14 +0800, andrey.konovalov@linux.dev wrote:
> > From: Andrey Konovalov <andreyknvl@google.com>
> >
> > Right now, KASAN uses the kasan_report_type enum to describe report
> > types.
> >
> > As this enum only has two options, replace it with a bool variable.
> >
> > Also, unify printing report header for invalid-free and other bug types
> > in print_error_description().
> >
> > Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
> > ---
> >  mm/kasan/kasan.h  |  7 +------
> >  mm/kasan/report.c | 16 +++++++---------
> >  2 files changed, 8 insertions(+), 15 deletions(-)
> >
> > diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> > index e8329935fbfb..f696d50b09fb 100644
> > --- a/mm/kasan/kasan.h
> > +++ b/mm/kasan/kasan.h
> > @@ -146,16 +146,11 @@ static inline bool kasan_requires_meta(void)
> >  #define META_MEM_BYTES_PER_ROW (META_BYTES_PER_ROW * KASAN_GRANULE_SIZE)
> >  #define META_ROWS_AROUND_ADDR 2
> >
> > -enum kasan_report_type {
> > -       KASAN_REPORT_ACCESS,
> > -       KASAN_REPORT_INVALID_FREE,
> > -};
> > -
> >  struct kasan_report_info {
> > -       enum kasan_report_type type;
> >         void *access_addr;
> >         void *first_bad_addr;
> >         size_t access_size;
> > +       bool is_free;
> >         bool is_write;
> >         unsigned long ip;
> >  };
> > diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> > index f951fd39db74..7269b6249488 100644
> > --- a/mm/kasan/report.c
> > +++ b/mm/kasan/report.c
> > @@ -175,14 +175,12 @@ static void end_report(unsigned long *flags, void *addr)
> >
>
> Hi Andrey,
>
> Do we need to distinguish the "double free" case from "invalid free",
> or should we just print "double-free or invalid-free"?
>
> I sent a patch[1] last week to separate the double-free case from
> invalid-free, and I saw it has been merged into the akpm tree.
>
> [1]
> https://lore.kernel.org/linux-mm/20220615062219.22618-1-Kuan-Ying.Lee@mediatek.com/

Hi Kuan-Ying,

Yes, thank you for the patch! I will rebase my series onto it.

Thanks!

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata
  2022-06-17  9:32 ` [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata Marco Elver
@ 2022-07-18 22:41   ` Andrey Konovalov
  0 siblings, 0 replies; 54+ messages in thread
From: Andrey Konovalov @ 2022-07-18 22:41 UTC (permalink / raw)
  To: Marco Elver
  Cc: andrey.konovalov, Alexander Potapenko, Dmitry Vyukov,
	Andrey Ryabinin, kasan-dev, Peter Collingbourne,
	Evgenii Stepanov, Florian Mayer, Andrew Morton,
	Linux Memory Management List, LKML, Andrey Konovalov

On Fri, Jun 17, 2022 at 11:32 AM Marco Elver <elver@google.com> wrote:
>
> > The disadvantage:
> >
> > - If the affected object was allocated/freed long before the bug happened
> >   and the stack trace events were purged from the stack ring, the report
> >   will have no stack traces.
>
> Do you have statistics on how likely this is? Maybe through
> identifying what the average lifetime of an entry in the stack ring is?
>
> How bad is this for very long-lived objects (e.g. pagecache)?

I ran a test on Pixel 6: the stack ring of size (32 << 10) gets fully
rewritten every ~2.7 seconds during boot. Any buggy object that is
allocated/freed and then accessed only after a longer time span than
that will have no stack traces in the report.
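
To put rough numbers on it (back-of-the-envelope, assuming the
boot-time rate is representative and ~32 bytes per entry):

  (32 << 10) entries / ~2.7 s    = ~12K alloc/free events per second
  keeping, say, 60 s of history  = ~12K * 60   = ~720K entries
  memory for such a ring         = ~720K * 32 B = ~23 MB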

This can be dealt with by increasing the stack ring size, but this
comes down to how much memory one is willing to allocate for the stack
ring. If we decide to use sampling (saving stack traces only for every
Nth object), that will affect this too.

But any object that is allocated once during boot will be purged out
of the stack ring sooner or later. One could argue that such objects
are usually allocated at a single known place, so having a stack trace
won't considerably improve the report.

I would say that we need to deploy some solution, study the reports,
and adjust the implementation based on that.

> > Discussion
> > ==========
> >
> > The current implementation of the stack ring uses a single ring buffer for
> > the whole kernel. This might lead to contention due to atomic accesses to
> > the ring buffer index on multicore systems.
> >
> > It is unclear to me whether the performance impact from this contention
> > is significant compared to the slowdown introduced by collecting stack
> > traces.
>
> I agree, but once stack trace collection becomes faster (per your future
> plans below), this might need to be revisited.

Ack.

Thanks!

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH 06/32] kasan: introduce kasan_print_aux_stacks
  2022-06-17 11:34   ` Marco Elver
@ 2022-07-18 22:41     ` Andrey Konovalov
  0 siblings, 0 replies; 54+ messages in thread
From: Andrey Konovalov @ 2022-07-18 22:41 UTC (permalink / raw)
  To: Marco Elver
  Cc: andrey.konovalov, Alexander Potapenko, Dmitry Vyukov,
	Andrey Ryabinin, kasan-dev, Peter Collingbourne,
	Evgenii Stepanov, Florian Mayer, Andrew Morton,
	Linux Memory Management List, LKML, Andrey Konovalov

On Fri, Jun 17, 2022 at 1:35 PM Marco Elver <elver@google.com> wrote:
>
> > diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> > index aa6b43936f8d..bcea5ed15631 100644
> > --- a/mm/kasan/kasan.h
> > +++ b/mm/kasan/kasan.h
> > @@ -265,6 +265,12 @@ void kasan_print_address_stack_frame(const void *addr);
> >  static inline void kasan_print_address_stack_frame(const void *addr) { }
> >  #endif
> >
> > +#ifdef CONFIG_KASAN_GENERIC
> > +void kasan_print_aux_stacks(struct kmem_cache *cache, const void *object);
> > +#else
> > +static inline void kasan_print_aux_stacks(struct kmem_cache *cache, const void *object) { }
> > +#endif
>
> Why not put this into one of the existing "#ifdef
> CONFIG_KASAN_GENERIC" blocks? There are several; probably the one 10
> lines down might be ok?

The idea was to group functions based on their purpose, not on which
mode uses them. Here, kasan_print_aux_stacks() is related to printing
reports, so it goes next to other such functions. We could rework the
order of functions in this file, but I'd rather keep it as is in this
change. Thanks!

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH 19/32] kasan: pass tagged pointers to kasan_save_alloc/free_info
  2022-06-20  9:54   ` Marco Elver
@ 2022-07-18 22:41     ` Andrey Konovalov
  0 siblings, 0 replies; 54+ messages in thread
From: Andrey Konovalov @ 2022-07-18 22:41 UTC (permalink / raw)
  To: Marco Elver
  Cc: andrey.konovalov, Alexander Potapenko, Dmitry Vyukov,
	Andrey Ryabinin, kasan-dev, Peter Collingbourne,
	Evgenii Stepanov, Florian Mayer, Andrew Morton,
	Linux Memory Management List, LKML, Andrey Konovalov

On Mon, Jun 20, 2022 at 11:54 AM Marco Elver <elver@google.com> wrote:
>
> On Mon, Jun 13, 2022 at 10:14PM +0200, andrey.konovalov@linux.dev wrote:
> > From: Andrey Konovalov <andreyknvl@google.com>
> >
> > Pass tagged pointers to kasan_save_alloc/free_info().
> >
> > This is a preparatory patch to simplify other changes in the series.
> >
> > Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
> > ---
> >  mm/kasan/common.c  | 4 ++--
> >  mm/kasan/generic.c | 3 +--
> >  mm/kasan/kasan.h   | 2 +-
> >  mm/kasan/tags.c    | 3 +--
> >  4 files changed, 5 insertions(+), 7 deletions(-)
> >
> > diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> > index f937b6c9e86a..519fd0b3040b 100644
> > --- a/mm/kasan/common.c
> > +++ b/mm/kasan/common.c
> > @@ -227,7 +227,7 @@ static inline bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
> >               return false;
> >
> >       if (kasan_stack_collection_enabled())
> > -             kasan_save_free_info(cache, object, tag);
> > +             kasan_save_free_info(cache, tagged_object);
> >
>
> Variable 'tag' becomes unused in this function after this patch.

Will fix in v2, thanks!

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH 24/32] kasan: move kasan_addr_to_slab to common.c
  2022-06-15 13:27   ` kernel test robot
@ 2022-07-18 22:41       ` Andrey Konovalov
  0 siblings, 0 replies; 54+ messages in thread
From: Andrey Konovalov @ 2022-07-18 22:41 UTC (permalink / raw)
  To: kernel test robot
  Cc: andrey.konovalov, Marco Elver, Alexander Potapenko, kbuild-all,
	Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Peter Collingbourne,
	Evgenii Stepanov, Florian Mayer, Andrew Morton,
	Linux Memory Management List, LKML

On Wed, Jun 15, 2022 at 3:28 PM kernel test robot <lkp@intel.com> wrote:
>
> Hi,
>
> I love your patch! Perhaps something to improve:
>
> [auto build test WARNING on akpm-mm/mm-everything]
> [also build test WARNING on linus/master v5.19-rc2 next-20220615]
> [cannot apply to vbabka-slab/for-next]
> [If your patch is applied to the wrong git tree, kindly drop us a note.
> And when submitting patch, we suggest to use '--base' as documented in
> https://git-scm.com/docs/git-format-patch]
>
> url:    https://github.com/intel-lab-lkp/linux/commits/andrey-konovalov-linux-dev/kasan-switch-tag-based-modes-to-stack-ring-from-per-object-metadata/20220614-042239
> base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
> config: s390-allyesconfig (https://download.01.org/0day-ci/archive/20220615/202206152134.sadCRvGk-lkp@intel.com/config)
> compiler: s390-linux-gcc (GCC) 11.3.0
> reproduce (this is a W=1 build):
>         wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
>         chmod +x ~/bin/make.cross
>         # https://github.com/intel-lab-lkp/linux/commit/b0b10a57b2d9a5e5ae5d7ca62046b9774df1a88f
>         git remote add linux-review https://github.com/intel-lab-lkp/linux
>         git fetch --no-tags linux-review andrey-konovalov-linux-dev/kasan-switch-tag-based-modes-to-stack-ring-from-per-object-metadata/20220614-042239
>         git checkout b0b10a57b2d9a5e5ae5d7ca62046b9774df1a88f
>         # save the config file
>         mkdir build_dir && cp config build_dir/.config
>         COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.3.0 make.cross W=1 O=build_dir ARCH=s390 SHELL=/bin/bash mm/kasan/
>
> If you fix the issue, kindly add following tag where applicable
> Reported-by: kernel test robot <lkp@intel.com>
>
> All warnings (new ones prefixed by >>):
>
>    mm/kasan/common.c: In function 'kasan_addr_to_slab':
> >> mm/kasan/common.c:35:19: warning: ordered comparison of pointer with null pointer [-Wextra]
>       35 |         if ((addr >= (void *)PAGE_OFFSET) && (addr < high_memory))
>          |                   ^~
>    mm/kasan/common.c: In function '____kasan_slab_free':
>    mm/kasan/common.c:202:12: warning: variable 'tag' set but not used [-Wunused-but-set-variable]
>      202 |         u8 tag;
>          |            ^~~

Will fix both in v2. Thanks!
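
For the pointer comparison, one option would be to switch to
virt_addr_valid() (a sketch; the actual v2 fix may end up different):

	struct slab *kasan_addr_to_slab(const void *addr)
	{
		if (virt_addr_valid(addr))
			return virt_to_slab(addr);
		return NULL;
	}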

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH 31/32] kasan: implement stack ring for tag-based modes
  2022-06-20 13:35   ` Marco Elver
@ 2022-07-18 22:42     ` Andrey Konovalov
  0 siblings, 0 replies; 54+ messages in thread
From: Andrey Konovalov @ 2022-07-18 22:42 UTC (permalink / raw)
  To: Marco Elver
  Cc: andrey.konovalov, Alexander Potapenko, Dmitry Vyukov,
	Andrey Ryabinin, kasan-dev, Peter Collingbourne,
	Evgenii Stepanov, Florian Mayer, Andrew Morton,
	Linux Memory Management List, LKML, Andrey Konovalov

On Mon, Jun 20, 2022 at 3:35 PM Marco Elver <elver@google.com> wrote:
>
> > The number of entries in the stack ring is fixed in this version of the
> > patch. We could either implement it as a config option or a command-line
> > argument. I tilt towards the latter option and will implement it in v2
> > unless there are objections.
>
> Yes, that'd be good, along with just not allocating if no stacktraces
> are requested per kasan.stacktrace=.

Sounds good, will do in v2.

> > +struct kasan_stack_ring_entry {
> > +     atomic64_t ptr;         /* void * */
> > +     atomic64_t size;        /* size_t */
> > +     atomic_t pid;           /* u32 */
> > +     atomic_t stack;         /* depot_stack_handle_t */
> > +     atomic_t is_free;       /* bool */
>
> Per comments below, consider making these non-atomic.

Will do in v2.

> >  void kasan_complete_mode_report_info(struct kasan_report_info *info)
> >  {
> > +     u64 pos;
> > +     struct kasan_stack_ring_entry *entry;
> > +     void *object;
> > +     u32 pid;
> > +     depot_stack_handle_t stack;
> > +     bool is_free;
>
> If you switch away from atomic for kasan_stack_ring_entry members, you
> can just replace the above with a 'struct kasan_stack_ring_entry' and
> READ_ONCE() each entry into it below.

It would be a bit confusing to have two kasan_stack_ring_entry-based
variables in the function. I'll keep the current code if you don't
mind.

> > +     bool alloc_found = false, free_found = false;
> > +
> >       info->bug_type = get_bug_type(info);
> > +
> > +     if (!info->cache || !info->object)
> > +             return;
> > +
> > +     pos = atomic64_read(&stack_ring.pos);
> > +
> > +     for (u64 i = pos - 1; i != pos - 1 - KASAN_STACK_RING_ENTRIES; i--) {
> > +             if (alloc_found && free_found)
> > +                     break;
> > +
> > +             entry = &stack_ring.entries[i % KASAN_STACK_RING_ENTRIES];
> > +
> > +             /* Paired with atomic64_set_release() in save_stack_info(). */
> > +             object = (void *)atomic64_read_acquire(&entry->ptr);
> > +
> > +             if (kasan_reset_tag(object) != info->object ||
> > +                 get_tag(object) != get_tag(info->access_addr))
> > +                     continue;
> > +
> > +             pid = atomic_read(&entry->pid);
> > +             stack = atomic_read(&entry->stack);
> > +             is_free = atomic_read(&entry->is_free);
> > +
> > +             /* Try detecting if the entry was changed while being read. */
> > +             smp_mb();
> > +             if (object != (void *)atomic64_read(&entry->ptr))
> > +                     continue;
>
> What if the object was changed, but 'ptr' is the same? It might very
> well be possible to then read half of the info of the previous object,
> and half of the new object (e.g. pid is old, stack is new).
>
> Is the assumption that it is extremely unlikely that this will happen
> where 1) address is the same, and 2) tags are the same? And if it does
> happen, it is unlikely that there'll be a bug on that address?
>
> It might be worth stating this in comments.

This part will be removed in v2 due to the addition of an rwlock, but
I'll add a comment about the stack ring being best-effort anyway.

> Another thing is, if there's a bug, but concurrently you have tons of
> allocations/frees that change the ring's entries at a very high rate,
> how likely is it that the entire ring will have been wiped before the
> entry of interest is found again?
>
> One way to guard against this is to prevent modifications of the ring
> while the ring is searched. This could be implemented with a
> percpu-rwsem, which is almost free for read-lockers but very expensive
> for write-lockers. Insertions only acquire a read-lock, but on a bug
> when searching the ring, you have to acquire a write-lock. Although you
> currently take the contention hit for incrementing 'pos', so a plain
> rwlock might also be ok.

Will add an rwlock in v2.
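
For concreteness, a rough sketch of the scheme (a plain rwlock as you
suggest; all names are illustrative, not final):

	/* Part of struct kasan_stack_ring: */
	rwlock_t lock;

	/* Insertion path (save_stack_info): entry writers only need to
	   exclude the report-side scan, not each other. */
	read_lock_irqsave(&stack_ring.lock, irq_flags);
	/* ... reserve a slot via atomic64_fetch_add(1, &stack_ring.pos)
	   and fill it in ... */
	read_unlock_irqrestore(&stack_ring.lock, irq_flags);

	/* Report path (kasan_complete_mode_report_info): exclusive scan. */
	write_lock_irqsave(&stack_ring.lock, irq_flags);
	/* ... walk the entries looking for the accessed object ... */
	write_unlock_irqrestore(&stack_ring.lock, irq_flags);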

> It would be good to understand the probabilities of these corner cases
> with some average to worst case workloads, and optimize based on that.

With the new synchronizations and checks added in v2, the only
problematic issue is when the stack ring overflows. Please see my
response to your cover letter comment wrt this.

> > +struct kasan_stack_ring stack_ring;
>
> This is a very large struct. Can it be allocated by memblock_alloc()
> very early on only if required (kasan.stacktrace= can still switch it
> off, right?).

Will do in v2.
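
Roughly the following, assuming the allocation moves to the early
tag-based init code (a sketch; the exact hook is not final):

	/* Early init, before the slab allocator is available. */
	if (kasan_stack_collection_enabled()) {
		stack_ring.entries = memblock_alloc(
				sizeof(stack_ring.entries[0]) *
					KASAN_STACK_RING_ENTRIES,
				SMP_CACHE_BYTES);
		/* On failure, disable stack collection instead of
		   failing the boot. */
	}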

> > +void save_stack_info(struct kmem_cache *cache, void *object,
> > +                     gfp_t flags, bool is_free)
>
> static void save_stack_info(...)

Right, will do in v2.

> > +{
> > +     u64 pos;
> > +     struct kasan_stack_ring_entry *entry;
> > +     depot_stack_handle_t stack;
> > +
> > +     stack = kasan_save_stack(flags, true);
> > +
> > +     pos = atomic64_fetch_add(1, &stack_ring.pos);
> > +     entry = &stack_ring.entries[pos % KASAN_STACK_RING_ENTRIES];
> > +
> > +     atomic64_set(&entry->size, cache->object_size);
> > +     atomic_set(&entry->pid, current->pid);
> > +     atomic_set(&entry->stack, stack);
> > +     atomic_set(&entry->is_free, is_free);
> > +
>
> I don't see the point of these being atomic. You can make them normal
> variables with the proper types, and use READ_ONCE() / WRITE_ONCE().
>
> The only one where you truly need the atomic type is 'pos'.

Will do in v2.
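
I.e., roughly (a sketch):

	/* Plain fields; 'ptr' doubles as the publish flag. */
	struct kasan_stack_ring_entry {
		void *ptr;	/* written last, with release semantics */
		size_t size;
		u32 pid;
		depot_stack_handle_t stack;
		bool is_free;
	};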

> > +     /*
> > +      * Paired with atomic64_read_acquire() in
> > +      * kasan_complete_mode_report_info().
> > +      */
> > +     atomic64_set_release(&entry->ptr, (s64)object);
>
> This could be smp_store_release() and 'ptr' can be just a normal pointer.

Will do in v2.

> One thing that is not entirely impossible though (vs. the re-reading of
> the same pointer with inconsistent fields I mentioned above) is if
> something wants to write to the ring but stalls for a very long time
> before the release of 'ptr', giving 'pos' the chance to wrap around and
> letting another writer write the same entry. Something like:
>
>   T0                                    | T1
>   --------------------------------------+--------------------------------
>   WRITE_ONCE(entry->size, ..)           |
>   WRITE_ONCE(entry->pid, ..)            |
>                                         | WRITE_ONCE(entry->size, ..)
>                                         | WRITE_ONCE(entry->pid, ..)
>                                         | WRITE_ONCE(entry->stack, ..)
>                                         | WRITE_ONCE(entry->is_free, ..)
>                                         | smp_store_release(entry->ptr, ...)
>   WRITE_ONCE(entry->stack, ..)          |
>   WRITE_ONCE(entry->is_free, ..)        |
>   smp_store_release(entry->ptr, ...)    |
>
> Which results in some mix of T0's and T1's data.
>
> The way to solve this is to implement a try-lock using 'ptr':
>
>         #define BUSY_PTR ((void*)1)  // non-zero because initial values are 0
>         old_ptr = READ_ONCE(entry->ptr);
>         if (old_ptr == BUSY_PTR)
>                 goto next; /* Busy slot. */
>         if (!try_cmpxchg(&entry->ptr, &old_ptr, BUSY_PTR))
>                 goto next; /* Busy slot. */
>         ... set fields as before ...
>         smp_store_release(&entry->ptr, object);

Sounds good, will do in v2.
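
Putting the try-lock together with the non-atomic fields, the insertion
path could then look roughly like this (a sketch, not final code):

	#define STACK_RING_BUSY_PTR ((void *)1)

next:
	pos = atomic64_fetch_add(1, &stack_ring.pos);
	entry = &stack_ring.entries[pos % KASAN_STACK_RING_ENTRIES];

	/* Skip a slot that another writer is currently filling in. */
	old_ptr = READ_ONCE(entry->ptr);
	if (old_ptr == STACK_RING_BUSY_PTR)
		goto next;
	if (!try_cmpxchg(&entry->ptr, &old_ptr, STACK_RING_BUSY_PTR))
		goto next;

	WRITE_ONCE(entry->size, cache->object_size);
	WRITE_ONCE(entry->pid, current->pid);
	WRITE_ONCE(entry->stack, stack);
	WRITE_ONCE(entry->is_free, is_free);

	/* Publish; pairs with smp_load_acquire() on the report side. */
	smp_store_release(&entry->ptr, (void *)object);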

Thank you, Marco!

^ permalink raw reply	[flat|nested] 54+ messages in thread

end of thread, other threads:[~2022-07-18 22:42 UTC | newest]

Thread overview: 54+ messages
2022-06-13 20:13 [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata andrey.konovalov
2022-06-13 20:13 ` [PATCH 01/32] kasan: check KASAN_NO_FREE_META in __kasan_metadata_size andrey.konovalov
2022-06-20 13:39   ` Marco Elver
2022-06-13 20:13 ` [PATCH 02/32] kasan: rename kasan_set_*_info to kasan_save_*_info andrey.konovalov
2022-06-20 13:39   ` Marco Elver
2022-06-13 20:13 ` [PATCH 03/32] kasan: move is_kmalloc check out of save_alloc_info andrey.konovalov
2022-06-20 13:39   ` Marco Elver
2022-06-13 20:13 ` [PATCH 04/32] kasan: split save_alloc_info implementations andrey.konovalov
2022-06-20 13:39   ` Marco Elver
2022-06-13 20:13 ` [PATCH 05/32] kasan: drop CONFIG_KASAN_TAGS_IDENTIFY andrey.konovalov
2022-06-20 13:39   ` Marco Elver
2022-06-13 20:13 ` [PATCH 06/32] kasan: introduce kasan_print_aux_stacks andrey.konovalov
2022-06-17 11:34   ` Marco Elver
2022-07-18 22:41     ` Andrey Konovalov
2022-06-13 20:13 ` [PATCH 07/32] kasan: introduce kasan_get_alloc_track andrey.konovalov
2022-06-20 13:39   ` Marco Elver
2022-06-13 20:13 ` [PATCH 08/32] kasan: introduce kasan_init_object_meta andrey.konovalov
2022-06-20 13:39   ` Marco Elver
2022-06-13 20:14 ` [PATCH 09/32] kasan: clear metadata functions for tag-based modes andrey.konovalov
2022-06-20 13:39   ` Marco Elver
2022-06-13 20:14 ` [PATCH 10/32] kasan: move kasan_get_*_meta to generic.c andrey.konovalov
2022-06-13 20:14 ` [PATCH 11/32] kasan: introduce kasan_requires_meta andrey.konovalov
2022-06-13 20:14 ` [PATCH 12/32] kasan: introduce kasan_init_cache_meta andrey.konovalov
2022-06-13 20:14 ` [PATCH 13/32] kasan: drop CONFIG_KASAN_GENERIC check from kasan_init_cache_meta andrey.konovalov
2022-06-13 20:14 ` [PATCH 14/32] kasan: only define kasan_metadata_size for Generic mode andrey.konovalov
2022-06-13 20:14 ` [PATCH 15/32] kasan: only define kasan_never_merge " andrey.konovalov
2022-06-13 20:14 ` [PATCH 16/32] kasan: only define metadata offsets " andrey.konovalov
2022-06-13 20:14 ` [PATCH 17/32] kasan: only define metadata structs " andrey.konovalov
2022-06-13 20:14 ` [PATCH 18/32] kasan: only define kasan_cache_create " andrey.konovalov
2022-06-13 20:14 ` [PATCH 19/32] kasan: pass tagged pointers to kasan_save_alloc/free_info andrey.konovalov
2022-06-20  9:54   ` Marco Elver
2022-07-18 22:41     ` Andrey Konovalov
2022-06-13 20:14 ` [PATCH 20/32] kasan: move kasan_get_alloc/free_track definitions andrey.konovalov
2022-06-13 20:14 ` [PATCH 21/32] kasan: simplify invalid-free reporting andrey.konovalov
2022-06-21  7:17   ` Kuan-Ying Lee
2022-07-12 20:38     ` Andrey Konovalov
2022-06-13 20:14 ` [PATCH 22/32] kasan: cosmetic changes in report.c andrey.konovalov
2022-06-13 20:14 ` [PATCH 23/32] kasan: use kasan_addr_to_slab in print_address_description andrey.konovalov
2022-06-13 20:14 ` [PATCH 24/32] kasan: move kasan_addr_to_slab to common.c andrey.konovalov
2022-06-15 13:27   ` kernel test robot
2022-07-18 22:41     ` Andrey Konovalov
2022-06-13 20:14 ` [PATCH 25/32] kasan: make kasan_addr_to_page static andrey.konovalov
2022-06-13 20:14 ` [PATCH 26/32] kasan: simplify print_report andrey.konovalov
2022-06-13 20:14 ` [PATCH 27/32] kasan: introduce complete_report_info andrey.konovalov
2022-06-13 20:14 ` [PATCH 28/32] kasan: fill in cache and object in complete_report_info andrey.konovalov
2022-06-13 20:14 ` [PATCH 29/32] kasan: rework function arguments in report.c andrey.konovalov
2022-06-13 20:14 ` [PATCH 30/32] kasan: introduce kasan_complete_mode_report_info andrey.konovalov
2022-06-13 20:14 ` [PATCH 31/32] kasan: implement stack ring for tag-based modes andrey.konovalov
2022-06-20 13:35   ` Marco Elver
2022-07-18 22:42     ` Andrey Konovalov
2022-06-13 20:14 ` [PATCH 32/32] kasan: better identify bug types " andrey.konovalov
2022-06-17  9:32 ` [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata Marco Elver
2022-07-18 22:41   ` Andrey Konovalov
