* [PATCH 00/12] kasan: optimizations and fixes for HW_TAGS
@ 2021-02-01 19:43 Andrey Konovalov
2021-02-01 19:43 ` [PATCH 01/12] kasan, mm: don't save alloc stacks twice Andrey Konovalov
` (11 more replies)
0 siblings, 12 replies; 35+ messages in thread
From: Andrey Konovalov @ 2021-02-01 19:43 UTC (permalink / raw)
To: Catalin Marinas, Vincenzo Frascino, Dmitry Vyukov,
Alexander Potapenko, Marco Elver
Cc: Branislav Rankov, Andrey Konovalov, Kevin Brodsky, Will Deacon,
linux-kernel, kasan-dev, linux-mm, linux-arm-kernel,
Andrey Ryabinin, Andrew Morton, Peter Collingbourne,
Evgenii Stepanov
This patchset goes on top of:
1. Vincenzo's async support patches, and
2. "kasan: untag addresses for KFENCE" fix.
This patchset makes the HW_TAGS mode more efficient, mostly by reworking
poisoning approaches and simplifying/inlining some internal helpers.
With this change, the overhead of HW_TAGS annotations, excluding the cost
of setting and checking memory tags, is ~3%. The performance impact of the
tag operations themselves will remain unknown until hardware that supports
MTE is available.
As a side-effect, this patchset speeds up generic KASAN by ~15%.
Andrey Konovalov (12):
kasan, mm: don't save alloc stacks twice
kasan, mm: optimize kmalloc poisoning
kasan: optimize large kmalloc poisoning
kasan: clean up setting free info in kasan_slab_free
kasan: unify large kfree checks
kasan: rework krealloc tests
kasan, mm: remove krealloc side-effect
kasan, mm: optimize krealloc poisoning
kasan: ensure poisoning size alignment
arm64: kasan: simplify and inline MTE functions
kasan: always inline HW_TAGS helper functions
arm64: kasan: export MTE symbols for KASAN tests
arch/arm64/include/asm/cache.h | 1 -
arch/arm64/include/asm/kasan.h | 1 +
arch/arm64/include/asm/mte-def.h | 2 +
arch/arm64/include/asm/mte-kasan.h | 64 ++++++++--
arch/arm64/include/asm/mte.h | 2 -
arch/arm64/kernel/mte.c | 48 +-------
arch/arm64/lib/mte.S | 16 ---
include/linux/kasan.h | 25 ++--
lib/test_kasan.c | 111 +++++++++++++++--
mm/kasan/common.c | 187 ++++++++++++++++++++---------
mm/kasan/kasan.h | 74 +++++++++---
mm/kasan/shadow.c | 53 ++++----
mm/slab_common.c | 18 ++-
mm/slub.c | 3 +-
14 files changed, 419 insertions(+), 186 deletions(-)
--
2.30.0.365.g02bc693789-goog
* [PATCH 01/12] kasan, mm: don't save alloc stacks twice
2021-02-01 19:43 [PATCH 00/12] kasan: optimizations and fixes for HW_TAGS Andrey Konovalov
@ 2021-02-01 19:43 ` Andrey Konovalov
2021-02-02 16:06 ` Marco Elver
2021-02-01 19:43 ` [PATCH 02/12] kasan, mm: optimize kmalloc poisoning Andrey Konovalov
` (10 subsequent siblings)
11 siblings, 1 reply; 35+ messages in thread
From: Andrey Konovalov @ 2021-02-01 19:43 UTC (permalink / raw)
To: Catalin Marinas, Vincenzo Frascino, Dmitry Vyukov,
Alexander Potapenko, Marco Elver
Cc: Branislav Rankov, Andrey Konovalov, Kevin Brodsky, Will Deacon,
linux-kernel, kasan-dev, linux-mm, linux-arm-kernel,
Andrey Ryabinin, Andrew Morton, Peter Collingbourne,
Evgenii Stepanov
Currently KASAN saves allocation stacks in both kasan_slab_alloc() and
kasan_kmalloc() annotations. This patch changes KASAN to save allocation
stacks for slab objects from kmalloc caches in kasan_kmalloc() only,
and stacks for other slab objects in kasan_slab_alloc() only.
This change requires ____kasan_kmalloc() to know whether the object
belongs to a kmalloc cache. This is implemented by adding a flag field
to the kasan_info structure. That flag is only set for kmalloc caches
via a new kasan_cache_create_kmalloc() annotation.
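For reference, a simplified sketch of the two allocation paths involved
(not the actual slab internals; slab_alloc_object() is an illustrative
placeholder):

/*
 * Sketch: for kmalloc caches both KASAN hooks run, so before this patch
 * the allocation stack was recorded twice per allocation.
 */
void *cache_alloc_sketch(struct kmem_cache *cache, gfp_t flags)
{
	void *object = slab_alloc_object(cache, flags);	/* illustrative */

	/* Non-kmalloc caches: only this hook runs and saves the stack. */
	return kasan_slab_alloc(cache, object, flags);
}

void *kmalloc_sketch(struct kmem_cache *cache, size_t size, gfp_t flags)
{
	void *object = slab_alloc_object(cache, flags);	/* illustrative */

	object = kasan_slab_alloc(cache, object, flags);
	/*
	 * kmalloc caches: kasan_kmalloc() runs as well; with
	 * kasan_info.is_kmalloc set, only this call saves the stack.
	 */
	return kasan_kmalloc(cache, object, size, flags);
}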
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
include/linux/kasan.h | 9 +++++++++
mm/kasan/common.c | 18 ++++++++++++++----
mm/slab_common.c | 1 +
3 files changed, 24 insertions(+), 4 deletions(-)
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 6d8f3227c264..2d5de4092185 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -83,6 +83,7 @@ static inline void kasan_disable_current(void) {}
struct kasan_cache {
int alloc_meta_offset;
int free_meta_offset;
+ bool is_kmalloc;
};
#ifdef CONFIG_KASAN_HW_TAGS
@@ -143,6 +144,13 @@ static __always_inline void kasan_cache_create(struct kmem_cache *cache,
__kasan_cache_create(cache, size, flags);
}
+void __kasan_cache_create_kmalloc(struct kmem_cache *cache);
+static __always_inline void kasan_cache_create_kmalloc(struct kmem_cache *cache)
+{
+ if (kasan_enabled())
+ __kasan_cache_create_kmalloc(cache);
+}
+
size_t __kasan_metadata_size(struct kmem_cache *cache);
static __always_inline size_t kasan_metadata_size(struct kmem_cache *cache)
{
@@ -278,6 +286,7 @@ static inline void kasan_free_pages(struct page *page, unsigned int order) {}
static inline void kasan_cache_create(struct kmem_cache *cache,
unsigned int *size,
slab_flags_t *flags) {}
+static inline void kasan_cache_create_kmalloc(struct kmem_cache *cache) {}
static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }
static inline void kasan_poison_slab(struct page *page) {}
static inline void kasan_unpoison_object_data(struct kmem_cache *cache,
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index fe852f3cfa42..374049564ea3 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -210,6 +210,11 @@ void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
*size = optimal_size;
}
+void __kasan_cache_create_kmalloc(struct kmem_cache *cache)
+{
+ cache->kasan_info.is_kmalloc = true;
+}
+
size_t __kasan_metadata_size(struct kmem_cache *cache)
{
if (!kasan_stack_collection_enabled())
@@ -394,17 +399,22 @@ void __kasan_slab_free_mempool(void *ptr, unsigned long ip)
}
}
-static void set_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
+static void set_alloc_info(struct kmem_cache *cache, void *object,
+ gfp_t flags, bool kmalloc)
{
struct kasan_alloc_meta *alloc_meta;
+ /* Don't save alloc info for kmalloc caches in kasan_slab_alloc(). */
+ if (cache->kasan_info.is_kmalloc && !kmalloc)
+ return;
+
alloc_meta = kasan_get_alloc_meta(cache, object);
if (alloc_meta)
kasan_set_track(&alloc_meta->alloc_track, flags);
}
static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
- size_t size, gfp_t flags, bool keep_tag)
+ size_t size, gfp_t flags, bool kmalloc)
{
unsigned long redzone_start;
unsigned long redzone_end;
@@ -423,7 +433,7 @@ static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
KASAN_GRANULE_SIZE);
redzone_end = round_up((unsigned long)object + cache->object_size,
KASAN_GRANULE_SIZE);
- tag = assign_tag(cache, object, false, keep_tag);
+ tag = assign_tag(cache, object, false, kmalloc);
/* Tag is ignored in set_tag without CONFIG_KASAN_SW/HW_TAGS */
kasan_unpoison(set_tag(object, tag), size);
@@ -431,7 +441,7 @@ static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
KASAN_KMALLOC_REDZONE);
if (kasan_stack_collection_enabled())
- set_alloc_info(cache, (void *)object, flags);
+ set_alloc_info(cache, (void *)object, flags, kmalloc);
return set_tag(object, tag);
}
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 9aa3d2fe4c55..39d1a8ff9bb8 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -647,6 +647,7 @@ struct kmem_cache *__init create_kmalloc_cache(const char *name,
panic("Out of memory when creating slab %s\n", name);
create_boot_cache(s, name, size, flags, useroffset, usersize);
+ kasan_cache_create_kmalloc(s);
list_add(&s->list, &slab_caches);
s->refcount = 1;
return s;
--
2.30.0.365.g02bc693789-goog
* [PATCH 02/12] kasan, mm: optimize kmalloc poisoning
2021-02-01 19:43 [PATCH 00/12] kasan: optimizations and fixes for HW_TAGS Andrey Konovalov
2021-02-01 19:43 ` [PATCH 01/12] kasan, mm: don't save alloc stacks twice Andrey Konovalov
@ 2021-02-01 19:43 ` Andrey Konovalov
2021-02-02 16:25 ` Marco Elver
2021-02-01 19:43 ` [PATCH 03/12] kasan: optimize large " Andrey Konovalov
` (9 subsequent siblings)
11 siblings, 1 reply; 35+ messages in thread
From: Andrey Konovalov @ 2021-02-01 19:43 UTC (permalink / raw)
To: Catalin Marinas, Vincenzo Frascino, Dmitry Vyukov,
Alexander Potapenko, Marco Elver
Cc: Branislav Rankov, Andrey Konovalov, Kevin Brodsky, Will Deacon,
linux-kernel, kasan-dev, linux-mm, linux-arm-kernel,
Andrey Ryabinin, Andrew Morton, Peter Collingbourne,
Evgenii Stepanov
For allocations from kmalloc caches, kasan_kmalloc() always follows
kasan_slab_alloc(). Currently, both of them unpoison the whole object,
which is unnecessary.
This patch provides separate implementations for both annotations:
kasan_slab_alloc() unpoisons the whole object, and kasan_kmalloc()
only poisons the redzone.
For generic KASAN, the redzone start might not be aligned to
KASAN_GRANULE_SIZE. Therefore, the poisoning is split into two parts:
kasan_poison_last_granule() poisons the unaligned part, and then
kasan_poison() poisons the rest.
This patch also clarifies alignment guarantees of each of the poisoning
functions and drops the unnecessary round_up() call for redzone_end.
With this change, the early SLUB cache annotation needs to be changed to
kasan_slab_alloc(), as kasan_kmalloc() no longer unpoisons objects.
The number of poisoned bytes for objects in this cache stays the same, as
kmem_cache_node->object_size is equal to sizeof(struct kmem_cache_node).
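As an illustration of the split (a sketch with example numbers, assuming
KASAN_GRANULE_SIZE == 8; this is not code from the patch):

/*
 * Poisoning the kmalloc redzone for a 19-byte allocation in a cache
 * with a 32-byte object_size.
 */
static void poison_kmalloc_redzone_sketch(void *object)
{
	size_t size = 19;		/* requested kmalloc() size */
	size_t object_size = 32;	/* cache->object_size */

	/* Generic mode: mark bytes 19..23 of the last granule inaccessible. */
	if (IS_ENABLED(CONFIG_KASAN_GENERIC))
		kasan_poison_last_granule(object, size);

	/* Poison the granule-aligned part of the redzone: bytes 24..31. */
	kasan_poison(object + round_up(size, KASAN_GRANULE_SIZE),
		     object_size - round_up(size, KASAN_GRANULE_SIZE),
		     KASAN_KMALLOC_REDZONE);
}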
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
mm/kasan/common.c | 93 +++++++++++++++++++++++++++++++----------------
mm/kasan/kasan.h | 43 +++++++++++++++++++++-
mm/kasan/shadow.c | 28 +++++++-------
mm/slub.c | 3 +-
4 files changed, 119 insertions(+), 48 deletions(-)
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 374049564ea3..128cb330ca73 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -278,21 +278,11 @@ void __kasan_poison_object_data(struct kmem_cache *cache, void *object)
* based on objects indexes, so that objects that are next to each other
* get different tags.
*/
-static u8 assign_tag(struct kmem_cache *cache, const void *object,
- bool init, bool keep_tag)
+static u8 assign_tag(struct kmem_cache *cache, const void *object, bool init)
{
if (IS_ENABLED(CONFIG_KASAN_GENERIC))
return 0xff;
- /*
- * 1. When an object is kmalloc()'ed, two hooks are called:
- * kasan_slab_alloc() and kasan_kmalloc(). We assign the
- * tag only in the first one.
- * 2. We reuse the same tag for krealloc'ed objects.
- */
- if (keep_tag)
- return get_tag(object);
-
/*
* If the cache neither has a constructor nor has SLAB_TYPESAFE_BY_RCU
* set, assign a tag when the object is being allocated (init == false).
@@ -325,7 +315,7 @@ void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
}
/* Tag is ignored in set_tag() without CONFIG_KASAN_SW/HW_TAGS */
- object = set_tag(object, assign_tag(cache, object, true, false));
+ object = set_tag(object, assign_tag(cache, object, true));
return (void *)object;
}
@@ -413,12 +403,46 @@ static void set_alloc_info(struct kmem_cache *cache, void *object,
kasan_set_track(&alloc_meta->alloc_track, flags);
}
+void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
+ void *object, gfp_t flags)
+{
+ u8 tag;
+ void *tagged_object;
+
+ if (gfpflags_allow_blocking(flags))
+ kasan_quarantine_reduce();
+
+ if (unlikely(object == NULL))
+ return NULL;
+
+ if (is_kfence_address(object))
+ return (void *)object;
+
+ /*
+ * Generate and assign random tag for tag-based modes.
+ * Tag is ignored in set_tag() for the generic mode.
+ */
+ tag = assign_tag(cache, object, false);
+ tagged_object = set_tag(object, tag);
+
+ /*
+ * Unpoison the whole object.
+ * For kmalloc() allocations, kasan_kmalloc() will do precise poisoning.
+ */
+ kasan_unpoison(tagged_object, cache->object_size);
+
+ /* Save alloc info (if possible) for non-kmalloc() allocations. */
+ if (kasan_stack_collection_enabled())
+ set_alloc_info(cache, (void *)object, flags, false);
+
+ return tagged_object;
+}
+
static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
- size_t size, gfp_t flags, bool kmalloc)
+ size_t size, gfp_t flags)
{
unsigned long redzone_start;
unsigned long redzone_end;
- u8 tag;
if (gfpflags_allow_blocking(flags))
kasan_quarantine_reduce();
@@ -429,33 +453,41 @@ static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
if (is_kfence_address(kasan_reset_tag(object)))
return (void *)object;
+ /*
+ * The object has already been unpoisoned by kasan_slab_alloc() for
+ * kmalloc() or by ksize() for krealloc().
+ */
+
+ /*
+ * The redzone has byte-level precision for the generic mode.
+ * Partially poison the last object granule to cover the unaligned
+ * part of the redzone.
+ */
+ if (IS_ENABLED(CONFIG_KASAN_GENERIC))
+ kasan_poison_last_granule((void *)object, size);
+
+ /* Poison the aligned part of the redzone. */
redzone_start = round_up((unsigned long)(object + size),
KASAN_GRANULE_SIZE);
- redzone_end = round_up((unsigned long)object + cache->object_size,
- KASAN_GRANULE_SIZE);
- tag = assign_tag(cache, object, false, kmalloc);
-
- /* Tag is ignored in set_tag without CONFIG_KASAN_SW/HW_TAGS */
- kasan_unpoison(set_tag(object, tag), size);
+ redzone_end = (unsigned long)object + cache->object_size;
kasan_poison((void *)redzone_start, redzone_end - redzone_start,
KASAN_KMALLOC_REDZONE);
+ /*
+ * Save alloc info (if possible) for kmalloc() allocations.
+ * This also rewrites the alloc info when called from kasan_krealloc().
+ */
if (kasan_stack_collection_enabled())
- set_alloc_info(cache, (void *)object, flags, kmalloc);
+ set_alloc_info(cache, (void *)object, flags, true);
- return set_tag(object, tag);
-}
-
-void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
- void *object, gfp_t flags)
-{
- return ____kasan_kmalloc(cache, object, cache->object_size, flags, false);
+ /* Keep the tag that was set by kasan_slab_alloc(). */
+ return (void *)object;
}
void * __must_check __kasan_kmalloc(struct kmem_cache *cache, const void *object,
size_t size, gfp_t flags)
{
- return ____kasan_kmalloc(cache, object, size, flags, true);
+ return ____kasan_kmalloc(cache, object, size, flags);
}
EXPORT_SYMBOL(__kasan_kmalloc);
@@ -496,8 +528,7 @@ void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flag
if (unlikely(!PageSlab(page)))
return __kasan_kmalloc_large(object, size, flags);
else
- return ____kasan_kmalloc(page->slab_cache, object, size,
- flags, true);
+ return ____kasan_kmalloc(page->slab_cache, object, size, flags);
}
void __kasan_kfree_large(void *ptr, unsigned long ip)
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index dd14e8870023..6a2882997f23 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -358,12 +358,51 @@ static inline bool kasan_byte_accessible(const void *addr)
#else /* CONFIG_KASAN_HW_TAGS */
-void kasan_poison(const void *address, size_t size, u8 value);
-void kasan_unpoison(const void *address, size_t size);
+/**
+ * kasan_poison - mark the memory range as unaccessible
+ * @addr - range start address, must be aligned to KASAN_GRANULE_SIZE
+ * @size - range size
+ * @value - value that's written to metadata for the range
+ *
+ * The size gets aligned to KASAN_GRANULE_SIZE before marking the range.
+ */
+void kasan_poison(const void *addr, size_t size, u8 value);
+
+/**
+ * kasan_unpoison - mark the memory range as accessible
+ * @addr - range start address, must be aligned to KASAN_GRANULE_SIZE
+ * @size - range size
+ *
+ * For the tag-based modes, the @size gets aligned to KASAN_GRANULE_SIZE before
+ * marking the range.
+ * For the generic mode, the last granule of the memory range gets partially
+ * unpoisoned based on the @size.
+ */
+void kasan_unpoison(const void *addr, size_t size);
+
bool kasan_byte_accessible(const void *addr);
#endif /* CONFIG_KASAN_HW_TAGS */
+#ifdef CONFIG_KASAN_GENERIC
+
+/**
+ * kasan_poison_last_granule - mark the last granule of the memory range as
+ * unaccessible
+ * @addr - range start address, must be aligned to KASAN_GRANULE_SIZE
+ * @size - range size
+ *
+ * This function is only available for the generic mode, as it's the only mode
+ * that has partially poisoned memory granules.
+ */
+void kasan_poison_last_granule(const void *address, size_t size);
+
+#else /* CONFIG_KASAN_GENERIC */
+
+static inline void kasan_poison_last_granule(const void *address, size_t size) { }
+
+#endif /* CONFIG_KASAN_GENERIC */
+
/*
* Exported functions for interfaces called from assembly or from generated
* code. Declarations here to avoid warning about missing declarations.
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index 1372a2fc0ca9..1ed7817e4ee6 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -69,10 +69,6 @@ void *memcpy(void *dest, const void *src, size_t len)
return __memcpy(dest, src, len);
}
-/*
- * Poisons the shadow memory for 'size' bytes starting from 'addr'.
- * Memory addresses should be aligned to KASAN_GRANULE_SIZE.
- */
void kasan_poison(const void *address, size_t size, u8 value)
{
void *shadow_start, *shadow_end;
@@ -83,12 +79,12 @@ void kasan_poison(const void *address, size_t size, u8 value)
* addresses to this function.
*/
address = kasan_reset_tag(address);
- size = round_up(size, KASAN_GRANULE_SIZE);
/* Skip KFENCE memory if called explicitly outside of sl*b. */
if (is_kfence_address(address))
return;
+ size = round_up(size, KASAN_GRANULE_SIZE);
shadow_start = kasan_mem_to_shadow(address);
shadow_end = kasan_mem_to_shadow(address + size);
@@ -96,6 +92,16 @@ void kasan_poison(const void *address, size_t size, u8 value)
}
EXPORT_SYMBOL(kasan_poison);
+#ifdef CONFIG_KASAN_GENERIC
+void kasan_poison_last_granule(const void *address, size_t size)
+{
+ if (size & KASAN_GRANULE_MASK) {
+ u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size);
+ *shadow = size & KASAN_GRANULE_MASK;
+ }
+}
+#endif
+
void kasan_unpoison(const void *address, size_t size)
{
u8 tag = get_tag(address);
@@ -115,16 +121,12 @@ void kasan_unpoison(const void *address, size_t size)
if (is_kfence_address(address))
return;
+ /* Unpoison round_up(size, KASAN_GRANULE_SIZE) bytes. */
kasan_poison(address, size, tag);
- if (size & KASAN_GRANULE_MASK) {
- u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size);
-
- if (IS_ENABLED(CONFIG_KASAN_SW_TAGS))
- *shadow = tag;
- else /* CONFIG_KASAN_GENERIC */
- *shadow = size & KASAN_GRANULE_MASK;
- }
+ /* Partially poison the last granule for the generic mode. */
+ if (IS_ENABLED(CONFIG_KASAN_GENERIC))
+ kasan_poison_last_granule(address, size);
}
#ifdef CONFIG_MEMORY_HOTPLUG
diff --git a/mm/slub.c b/mm/slub.c
index 176b1cb0d006..e564008c2329 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3565,8 +3565,7 @@ static void early_kmem_cache_node_alloc(int node)
init_object(kmem_cache_node, n, SLUB_RED_ACTIVE);
init_tracking(kmem_cache_node, n);
#endif
- n = kasan_kmalloc(kmem_cache_node, n, sizeof(struct kmem_cache_node),
- GFP_KERNEL);
+ n = kasan_slab_alloc(kmem_cache_node, n, GFP_KERNEL);
page->freelist = get_freepointer(kmem_cache_node, n);
page->inuse = 1;
page->frozen = 0;
--
2.30.0.365.g02bc693789-goog
* [PATCH 03/12] kasan: optimize large kmalloc poisoning
2021-02-01 19:43 [PATCH 00/12] kasan: optimizations and fixes for HW_TAGS Andrey Konovalov
2021-02-01 19:43 ` [PATCH 01/12] kasan, mm: don't save alloc stacks twice Andrey Konovalov
2021-02-01 19:43 ` [PATCH 02/12] kasan, mm: optimize kmalloc poisoning Andrey Konovalov
@ 2021-02-01 19:43 ` Andrey Konovalov
2021-02-02 16:57 ` Marco Elver
2021-02-01 19:43 ` [PATCH 04/12] kasan: clean up setting free info in kasan_slab_free Andrey Konovalov
` (8 subsequent siblings)
11 siblings, 1 reply; 35+ messages in thread
From: Andrey Konovalov @ 2021-02-01 19:43 UTC (permalink / raw)
To: Catalin Marinas, Vincenzo Frascino, Dmitry Vyukov,
Alexander Potapenko, Marco Elver
Cc: Branislav Rankov, Andrey Konovalov, Kevin Brodsky, Will Deacon,
linux-kernel, kasan-dev, linux-mm, linux-arm-kernel,
Andrey Ryabinin, Andrew Morton, Peter Collingbourne,
Evgenii Stepanov
Similarly to kasan_kmalloc(), kasan_kmalloc_large() doesn't need
to unpoison the object, as it was already unpoisoned by alloc_pages()
(or by ksize() for krealloc()).
This patch changes kasan_kmalloc_large() to only poison the redzone.
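A minimal sketch of the resulting poisoning for a page_alloc-backed
allocation (a simplified restatement of the hunk below, not extra code):

/*
 * Only the area between the granule-aligned end of the requested size
 * and the end of the underlying pages is poisoned; the object itself
 * was already unpoisoned at allocation time.
 */
static void poison_large_kmalloc_sketch(const void *ptr, size_t size)
{
	unsigned long start = round_up((unsigned long)ptr + size,
				       KASAN_GRANULE_SIZE);
	unsigned long end = (unsigned long)ptr + page_size(virt_to_page(ptr));

	if (IS_ENABLED(CONFIG_KASAN_GENERIC))
		kasan_poison_last_granule(ptr, size);
	kasan_poison((void *)start, end - start, KASAN_PAGE_REDZONE);
}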
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
mm/kasan/common.c | 20 +++++++++++++++-----
1 file changed, 15 insertions(+), 5 deletions(-)
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 128cb330ca73..a7eb553c8e91 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -494,7 +494,6 @@ EXPORT_SYMBOL(__kasan_kmalloc);
void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
gfp_t flags)
{
- struct page *page;
unsigned long redzone_start;
unsigned long redzone_end;
@@ -504,12 +503,23 @@ void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
if (unlikely(ptr == NULL))
return NULL;
- page = virt_to_page(ptr);
+ /*
+ * The object has already been unpoisoned by kasan_alloc_pages() for
+ * alloc_pages() or by ksize() for krealloc().
+ */
+
+ /*
+ * The redzone has byte-level precision for the generic mode.
+ * Partially poison the last object granule to cover the unaligned
+ * part of the redzone.
+ */
+ if (IS_ENABLED(CONFIG_KASAN_GENERIC))
+ kasan_poison_last_granule(ptr, size);
+
+ /* Poison the aligned part of the redzone. */
redzone_start = round_up((unsigned long)(ptr + size),
KASAN_GRANULE_SIZE);
- redzone_end = (unsigned long)ptr + page_size(page);
-
- kasan_unpoison(ptr, size);
+ redzone_end = (unsigned long)ptr + page_size(virt_to_page(ptr));
kasan_poison((void *)redzone_start, redzone_end - redzone_start,
KASAN_PAGE_REDZONE);
--
2.30.0.365.g02bc693789-goog
* [PATCH 04/12] kasan: clean up setting free info in kasan_slab_free
2021-02-01 19:43 [PATCH 00/12] kasan: optimizations and fixes for HW_TAGS Andrey Konovalov
` (2 preceding siblings ...)
2021-02-01 19:43 ` [PATCH 03/12] kasan: optimize large " Andrey Konovalov
@ 2021-02-01 19:43 ` Andrey Konovalov
2021-02-02 17:03 ` Marco Elver
2021-02-01 19:43 ` [PATCH 05/12] kasan: unify large kfree checks Andrey Konovalov
` (7 subsequent siblings)
11 siblings, 1 reply; 35+ messages in thread
From: Andrey Konovalov @ 2021-02-01 19:43 UTC (permalink / raw)
To: Catalin Marinas, Vincenzo Frascino, Dmitry Vyukov,
Alexander Potapenko, Marco Elver
Cc: Branislav Rankov, Andrey Konovalov, Kevin Brodsky, Will Deacon,
linux-kernel, kasan-dev, linux-mm, linux-arm-kernel,
Andrey Ryabinin, Andrew Morton, Peter Collingbourne,
Evgenii Stepanov
Put kasan_stack_collection_enabled() check and kasan_set_free_info()
calls next to each other.
The way this was previously implemented was a minor optimization that
relied on the fact that kasan_stack_collection_enabled() is always
true for generic KASAN. The confusion that this brings outweighs saving
a few instructions.
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
mm/kasan/common.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index a7eb553c8e91..086bb77292b6 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -350,13 +350,11 @@ static bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
kasan_poison(object, cache->object_size, KASAN_KMALLOC_FREE);
- if (!kasan_stack_collection_enabled())
- return false;
-
if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine))
return false;
- kasan_set_free_info(cache, object, tag);
+ if (kasan_stack_collection_enabled())
+ kasan_set_free_info(cache, object, tag);
return kasan_quarantine_put(cache, object);
}
--
2.30.0.365.g02bc693789-goog
* [PATCH 05/12] kasan: unify large kfree checks
2021-02-01 19:43 [PATCH 00/12] kasan: optimizations and fixes for HW_TAGS Andrey Konovalov
` (3 preceding siblings ...)
2021-02-01 19:43 ` [PATCH 04/12] kasan: clean up setting free info in kasan_slab_free Andrey Konovalov
@ 2021-02-01 19:43 ` Andrey Konovalov
2021-02-03 12:13 ` Marco Elver
2021-02-01 19:43 ` [PATCH 06/12] kasan: rework krealloc tests Andrey Konovalov
` (6 subsequent siblings)
11 siblings, 1 reply; 35+ messages in thread
From: Andrey Konovalov @ 2021-02-01 19:43 UTC (permalink / raw)
To: Catalin Marinas, Vincenzo Frascino, Dmitry Vyukov,
Alexander Potapenko, Marco Elver
Cc: Branislav Rankov, Andrey Konovalov, Kevin Brodsky, Will Deacon,
linux-kernel, kasan-dev, linux-mm, linux-arm-kernel,
Andrey Ryabinin, Andrew Morton, Peter Collingbourne,
Evgenii Stepanov
Unify checks in kasan_kfree_large() and in kasan_slab_free_mempool()
for large allocations as it's done for small kfree() allocations.
With this change, kasan_slab_free_mempool() starts checking that the
first byte of the memory that's being freed is accessible.
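The shared check can be sketched as follows (a simplified restatement of
____kasan_kfree_large() from the hunk below):

/*
 * A large free is invalid if the pointer doesn't match the start of the
 * page-backed allocation, or if its first byte is no longer accessible
 * (e.g. the object was already freed).
 */
static bool large_free_is_invalid_sketch(void *ptr)
{
	if (ptr != page_address(virt_to_head_page(ptr)))
		return true;
	return !kasan_byte_accessible(ptr);
}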
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
include/linux/kasan.h | 16 ++++++++--------
mm/kasan/common.c | 36 ++++++++++++++++++++++++++----------
2 files changed, 34 insertions(+), 18 deletions(-)
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 2d5de4092185..d53ea3c047bc 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -200,6 +200,13 @@ static __always_inline bool kasan_slab_free(struct kmem_cache *s, void *object)
return false;
}
+void __kasan_kfree_large(void *ptr, unsigned long ip);
+static __always_inline void kasan_kfree_large(void *ptr)
+{
+ if (kasan_enabled())
+ __kasan_kfree_large(ptr, _RET_IP_);
+}
+
void __kasan_slab_free_mempool(void *ptr, unsigned long ip);
static __always_inline void kasan_slab_free_mempool(void *ptr)
{
@@ -247,13 +254,6 @@ static __always_inline void * __must_check kasan_krealloc(const void *object,
return (void *)object;
}
-void __kasan_kfree_large(void *ptr, unsigned long ip);
-static __always_inline void kasan_kfree_large(void *ptr)
-{
- if (kasan_enabled())
- __kasan_kfree_large(ptr, _RET_IP_);
-}
-
/*
* Unlike kasan_check_read/write(), kasan_check_byte() is performed even for
* the hardware tag-based mode that doesn't rely on compiler instrumentation.
@@ -302,6 +302,7 @@ static inline bool kasan_slab_free(struct kmem_cache *s, void *object)
{
return false;
}
+static inline void kasan_kfree_large(void *ptr) {}
static inline void kasan_slab_free_mempool(void *ptr) {}
static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
gfp_t flags)
@@ -322,7 +323,6 @@ static inline void *kasan_krealloc(const void *object, size_t new_size,
{
return (void *)object;
}
-static inline void kasan_kfree_large(void *ptr) {}
static inline bool kasan_check_byte(const void *address)
{
return true;
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 086bb77292b6..9c64a00bbf9c 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -364,6 +364,31 @@ bool __kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
return ____kasan_slab_free(cache, object, ip, true);
}
+static bool ____kasan_kfree_large(void *ptr, unsigned long ip)
+{
+ if (ptr != page_address(virt_to_head_page(ptr))) {
+ kasan_report_invalid_free(ptr, ip);
+ return true;
+ }
+
+ if (!kasan_byte_accessible(ptr)) {
+ kasan_report_invalid_free(ptr, ip);
+ return true;
+ }
+
+ /*
+ * The object will be poisoned by kasan_free_pages() or
+ * kasan_slab_free_mempool().
+ */
+
+ return false;
+}
+
+void __kasan_kfree_large(void *ptr, unsigned long ip)
+{
+ ____kasan_kfree_large(ptr, ip);
+}
+
void __kasan_slab_free_mempool(void *ptr, unsigned long ip)
{
struct page *page;
@@ -377,10 +402,8 @@ void __kasan_slab_free_mempool(void *ptr, unsigned long ip)
* KMALLOC_MAX_SIZE, and kmalloc falls back onto page_alloc.
*/
if (unlikely(!PageSlab(page))) {
- if (ptr != page_address(page)) {
- kasan_report_invalid_free(ptr, ip);
+ if (____kasan_kfree_large(ptr, ip))
return;
- }
kasan_poison(ptr, page_size(page), KASAN_FREE_PAGE);
} else {
____kasan_slab_free(page->slab_cache, ptr, ip, false);
@@ -539,13 +562,6 @@ void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flag
return ____kasan_kmalloc(page->slab_cache, object, size, flags);
}
-void __kasan_kfree_large(void *ptr, unsigned long ip)
-{
- if (ptr != page_address(virt_to_head_page(ptr)))
- kasan_report_invalid_free(ptr, ip);
- /* The object will be poisoned by kasan_free_pages(). */
-}
-
bool __kasan_check_byte(const void *address, unsigned long ip)
{
if (!kasan_byte_accessible(address)) {
--
2.30.0.365.g02bc693789-goog
* [PATCH 06/12] kasan: rework krealloc tests
2021-02-01 19:43 [PATCH 00/12] kasan: optimizations and fixes for HW_TAGS Andrey Konovalov
` (4 preceding siblings ...)
2021-02-01 19:43 ` [PATCH 05/12] kasan: unify large kfree checks Andrey Konovalov
@ 2021-02-01 19:43 ` Andrey Konovalov
2021-02-03 14:48 ` Marco Elver
2021-02-01 19:43 ` [PATCH 07/12] kasan, mm: remove krealloc side-effect Andrey Konovalov
` (5 subsequent siblings)
11 siblings, 1 reply; 35+ messages in thread
From: Andrey Konovalov @ 2021-02-01 19:43 UTC (permalink / raw)
To: Catalin Marinas, Vincenzo Frascino, Dmitry Vyukov,
Alexander Potapenko, Marco Elver
Cc: Branislav Rankov, Andrey Konovalov, Kevin Brodsky, Will Deacon,
linux-kernel, kasan-dev, linux-mm, linux-arm-kernel,
Andrey Ryabinin, Andrew Morton, Peter Collingbourne,
Evgenii Stepanov
This patch reworks KASAN-KUnit tests for krealloc() to:
1. Check both slab and page_alloc based krealloc() implementations.
2. Allow at least one full granule to fit between old and new sizes for
each KASAN mode, and check accesses to that granule accordingly.
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
lib/test_kasan.c | 91 ++++++++++++++++++++++++++++++++++++++++++------
1 file changed, 81 insertions(+), 10 deletions(-)
diff --git a/lib/test_kasan.c b/lib/test_kasan.c
index 5699e43ca01b..2bb52853f341 100644
--- a/lib/test_kasan.c
+++ b/lib/test_kasan.c
@@ -258,11 +258,14 @@ static void kmalloc_large_oob_right(struct kunit *test)
kfree(ptr);
}
-static void kmalloc_oob_krealloc_more(struct kunit *test)
+static void krealloc_more_oob_helper(struct kunit *test,
+ size_t size1, size_t size2)
{
char *ptr1, *ptr2;
- size_t size1 = 17;
- size_t size2 = 19;
+ size_t middle;
+
+ KUNIT_ASSERT_LT(test, size1, size2);
+ middle = size1 + (size2 - size1) / 2;
ptr1 = kmalloc(size1, GFP_KERNEL);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr1);
@@ -270,15 +273,31 @@ static void kmalloc_oob_krealloc_more(struct kunit *test)
ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr2);
- KUNIT_EXPECT_KASAN_FAIL(test, ptr2[size2 + OOB_TAG_OFF] = 'x');
+ /* All offsets up to size2 must be accessible. */
+ ptr2[size1 - 1] = 'x';
+ ptr2[size1] = 'x';
+ ptr2[middle] = 'x';
+ ptr2[size2 - 1] = 'x';
+
+ /* Generic mode is precise, so unaligned size2 must be inaccessible. */
+ if (IS_ENABLED(CONFIG_KASAN_GENERIC))
+ KUNIT_EXPECT_KASAN_FAIL(test, ptr2[size2] = 'x');
+
+ /* For all modes first aligned offset after size2 must be inaccessible. */
+ KUNIT_EXPECT_KASAN_FAIL(test,
+ ptr2[round_up(size2, KASAN_GRANULE_SIZE)] = 'x');
+
kfree(ptr2);
}
-static void kmalloc_oob_krealloc_less(struct kunit *test)
+static void krealloc_less_oob_helper(struct kunit *test,
+ size_t size1, size_t size2)
{
char *ptr1, *ptr2;
- size_t size1 = 17;
- size_t size2 = 15;
+ size_t middle;
+
+ KUNIT_ASSERT_LT(test, size2, size1);
+ middle = size2 + (size1 - size2) / 2;
ptr1 = kmalloc(size1, GFP_KERNEL);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr1);
@@ -286,10 +305,60 @@ static void kmalloc_oob_krealloc_less(struct kunit *test)
ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr2);
- KUNIT_EXPECT_KASAN_FAIL(test, ptr2[size2 + OOB_TAG_OFF] = 'x');
+ /* Must be accessible for all modes. */
+ ptr2[size2 - 1] = 'x';
+
+ /* Generic mode is precise, so unaligned size2 must be inaccessible. */
+ if (IS_ENABLED(CONFIG_KASAN_GENERIC))
+ KUNIT_EXPECT_KASAN_FAIL(test, ptr2[size2] = 'x');
+
+ /* For all modes first aligned offset after size2 must be inaccessible. */
+ KUNIT_EXPECT_KASAN_FAIL(test,
+ ptr2[round_up(size2, KASAN_GRANULE_SIZE)] = 'x');
+
+ /*
+ * For all modes both middle and size1 should land in separate granules
+ * and thus be inaccessible.
+ */
+ KUNIT_EXPECT_LE(test, round_up(size2, KASAN_GRANULE_SIZE),
+ round_down(middle, KASAN_GRANULE_SIZE));
+ KUNIT_EXPECT_LE(test, round_up(middle, KASAN_GRANULE_SIZE),
+ round_down(size1, KASAN_GRANULE_SIZE));
+ KUNIT_EXPECT_KASAN_FAIL(test, ptr2[middle] = 'x');
+ KUNIT_EXPECT_KASAN_FAIL(test, ptr2[size1 - 1] = 'x');
+ KUNIT_EXPECT_KASAN_FAIL(test, ptr2[size1] = 'x');
+
kfree(ptr2);
}
+static void krealloc_more_oob(struct kunit *test)
+{
+ krealloc_more_oob_helper(test, 201, 235);
+}
+
+static void krealloc_less_oob(struct kunit *test)
+{
+ krealloc_less_oob_helper(test, 235, 201);
+}
+
+static void krealloc_pagealloc_more_oob(struct kunit *test)
+{
+ /* page_alloc fallback is only implemented for SLUB. */
+ KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB);
+
+ krealloc_more_oob_helper(test, KMALLOC_MAX_CACHE_SIZE + 201,
+ KMALLOC_MAX_CACHE_SIZE + 235);
+}
+
+static void krealloc_pagealloc_less_oob(struct kunit *test)
+{
+ /* page_alloc fallback is only implemented for SLUB. */
+ KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB);
+
+ krealloc_less_oob_helper(test, KMALLOC_MAX_CACHE_SIZE + 235,
+ KMALLOC_MAX_CACHE_SIZE + 201);
+}
+
static void kmalloc_oob_16(struct kunit *test)
{
struct {
@@ -983,8 +1052,10 @@ static struct kunit_case kasan_kunit_test_cases[] = {
KUNIT_CASE(pagealloc_oob_right),
KUNIT_CASE(pagealloc_uaf),
KUNIT_CASE(kmalloc_large_oob_right),
- KUNIT_CASE(kmalloc_oob_krealloc_more),
- KUNIT_CASE(kmalloc_oob_krealloc_less),
+ KUNIT_CASE(krealloc_more_oob),
+ KUNIT_CASE(krealloc_less_oob),
+ KUNIT_CASE(krealloc_pagealloc_more_oob),
+ KUNIT_CASE(krealloc_pagealloc_less_oob),
KUNIT_CASE(kmalloc_oob_16),
KUNIT_CASE(kmalloc_uaf_16),
KUNIT_CASE(kmalloc_oob_in_memset),
--
2.30.0.365.g02bc693789-goog
* [PATCH 07/12] kasan, mm: remove krealloc side-effect
2021-02-01 19:43 [PATCH 00/12] kasan: optimizations and fixes for HW_TAGS Andrey Konovalov
` (5 preceding siblings ...)
2021-02-01 19:43 ` [PATCH 06/12] kasan: rework krealloc tests Andrey Konovalov
@ 2021-02-01 19:43 ` Andrey Konovalov
2021-02-03 15:10 ` Marco Elver
2021-02-01 19:43 ` [PATCH 08/12] kasan, mm: optimize krealloc poisoning Andrey Konovalov
` (4 subsequent siblings)
11 siblings, 1 reply; 35+ messages in thread
From: Andrey Konovalov @ 2021-02-01 19:43 UTC (permalink / raw)
To: Catalin Marinas, Vincenzo Frascino, Dmitry Vyukov,
Alexander Potapenko, Marco Elver
Cc: Branislav Rankov, Andrey Konovalov, Kevin Brodsky, Will Deacon,
linux-kernel, kasan-dev, linux-mm, linux-arm-kernel,
Andrey Ryabinin, Andrew Morton, Peter Collingbourne,
Evgenii Stepanov
Currently, if krealloc() is called on a freed object with KASAN enabled,
it allocates and returns a new object, but doesn't copy any memory from
the old one, as ksize() returns 0. This makes the caller believe that
krealloc() succeeded (although a KASAN report is printed).
This patch adds an accessibility check into __do_krealloc(). If the check
fails, krealloc() returns NULL. This check duplicates the one in ksize();
this is fixed in the following patch.
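The problematic pattern, from a caller's point of view, looks like this
(a sketch using the same sizes as the new test, not code from the patch):

/*
 * Before this patch, the krealloc() below returned a fresh object with
 * nothing copied into it (ksize() on the freed pointer returned 0), so
 * the caller saw an apparent success. Now it returns NULL.
 */
static void krealloc_uaf_sketch(void)
{
	char *ptr = kmalloc(201, GFP_KERNEL);

	kfree(ptr);
	ptr = krealloc(ptr, 235, GFP_KERNEL);	/* reported, returns NULL */
}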
This patch also adds a KASAN-KUnit test to check krealloc() behaviour
when it's called on a freed object.
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
lib/test_kasan.c | 20 ++++++++++++++++++++
mm/slab_common.c | 3 +++
2 files changed, 23 insertions(+)
diff --git a/lib/test_kasan.c b/lib/test_kasan.c
index 2bb52853f341..61bc894d9f7e 100644
--- a/lib/test_kasan.c
+++ b/lib/test_kasan.c
@@ -359,6 +359,25 @@ static void krealloc_pagealloc_less_oob(struct kunit *test)
KMALLOC_MAX_CACHE_SIZE + 201);
}
+/*
+ * Check that krealloc() detects a use-after-free, returns NULL,
+ * and doesn't unpoison the freed object.
+ */
+static void krealloc_uaf(struct kunit *test)
+{
+ char *ptr1, *ptr2;
+ int size1 = 201;
+ int size2 = 235;
+
+ ptr1 = kmalloc(size1, GFP_KERNEL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr1);
+ kfree(ptr1);
+
+ KUNIT_EXPECT_KASAN_FAIL(test, ptr2 = krealloc(ptr1, size2, GFP_KERNEL));
+ KUNIT_ASSERT_PTR_EQ(test, (void *)ptr2, NULL);
+ KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)ptr1);
+}
+
static void kmalloc_oob_16(struct kunit *test)
{
struct {
@@ -1056,6 +1075,7 @@ static struct kunit_case kasan_kunit_test_cases[] = {
KUNIT_CASE(krealloc_less_oob),
KUNIT_CASE(krealloc_pagealloc_more_oob),
KUNIT_CASE(krealloc_pagealloc_less_oob),
+ KUNIT_CASE(krealloc_uaf),
KUNIT_CASE(kmalloc_oob_16),
KUNIT_CASE(kmalloc_uaf_16),
KUNIT_CASE(kmalloc_oob_in_memset),
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 39d1a8ff9bb8..dad70239b54c 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1140,6 +1140,9 @@ static __always_inline void *__do_krealloc(const void *p, size_t new_size,
void *ret;
size_t ks;
+ if (likely(!ZERO_OR_NULL_PTR(p)) && !kasan_check_byte(p))
+ return NULL;
+
ks = ksize(p);
if (ks >= new_size) {
--
2.30.0.365.g02bc693789-goog
* [PATCH 08/12] kasan, mm: optimize krealloc poisoning
2021-02-01 19:43 [PATCH 00/12] kasan: optimizations and fixes for HW_TAGS Andrey Konovalov
` (6 preceding siblings ...)
2021-02-01 19:43 ` [PATCH 07/12] kasan, mm: remove krealloc side-effect Andrey Konovalov
@ 2021-02-01 19:43 ` Andrey Konovalov
2021-02-03 14:34 ` Marco Elver
2021-02-01 19:43 ` [PATCH 09/12] kasan: ensure poisoning size alignment Andrey Konovalov
` (3 subsequent siblings)
11 siblings, 1 reply; 35+ messages in thread
From: Andrey Konovalov @ 2021-02-01 19:43 UTC (permalink / raw)
To: Catalin Marinas, Vincenzo Frascino, Dmitry Vyukov,
Alexander Potapenko, Marco Elver
Cc: Branislav Rankov, Andrey Konovalov, Kevin Brodsky, Will Deacon,
linux-kernel, kasan-dev, linux-mm, linux-arm-kernel,
Andrey Ryabinin, Andrew Morton, Peter Collingbourne,
Evgenii Stepanov
Currently, krealloc() always calls ksize(), which unpoisons the whole
object including the redzone. This is inefficient, as kasan_krealloc()
repoisons the redzone for objects that fit into the same buffer.
This patch changes krealloc() instrumentation to use the uninstrumented
__ksize(), which doesn't unpoison the memory. Instead, kasan_krealloc()
is changed to unpoison the memory excluding the redzone.
For objects that don't fit into the old allocation, this patch disables
KASAN accessibility checks when copying memory into a new object instead
of unpoisoning it.
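In outline, the reworked slow path is (a condensed sketch of
__do_krealloc() after this patch, with unrelated details trimmed):

static void *do_krealloc_sketch(const void *p, size_t new_size, gfp_t flags)
{
	size_t ks = 0;
	void *ret;

	if (likely(!ZERO_OR_NULL_PTR(p))) {
		if (!kasan_check_byte(p))	/* freed object: report and bail out */
			return NULL;
		ks = __ksize(p);		/* uninstrumented: nothing gets unpoisoned */
	}

	if (ks >= new_size)			/* still fits: repoison precisely in place */
		return kasan_krealloc((void *)p, new_size, flags);

	ret = kmalloc_track_caller(new_size, flags);
	if (ret && p) {
		/* The copy may touch the old object's poisoned redzone. */
		kasan_disable_current();
		memcpy(ret, kasan_reset_tag(p), ks);
		kasan_enable_current();
	}
	return ret;
}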
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
mm/kasan/common.c | 12 ++++++++++--
mm/slab_common.c | 20 ++++++++++++++------
2 files changed, 24 insertions(+), 8 deletions(-)
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 9c64a00bbf9c..a51d6ea580b0 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -476,7 +476,7 @@ static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
/*
* The object has already been unpoisoned by kasan_slab_alloc() for
- * kmalloc() or by ksize() for krealloc().
+ * kmalloc() or by kasan_krealloc() for krealloc().
*/
/*
@@ -526,7 +526,7 @@ void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
/*
* The object has already been unpoisoned by kasan_alloc_pages() for
- * alloc_pages() or by ksize() for krealloc().
+ * alloc_pages() or by kasan_krealloc() for krealloc().
*/
/*
@@ -554,8 +554,16 @@ void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flag
if (unlikely(object == ZERO_SIZE_PTR))
return (void *)object;
+ /*
+ * Unpoison the object's data.
+ * Part of it might already have been unpoisoned, but it's unknown
+ * how big that part is.
+ */
+ kasan_unpoison(object, size);
+
page = virt_to_head_page(object);
+ /* Piggy-back on kmalloc() instrumentation to poison the redzone. */
if (unlikely(!PageSlab(page)))
return __kasan_kmalloc_large(object, size, flags);
else
diff --git a/mm/slab_common.c b/mm/slab_common.c
index dad70239b54c..821f657d38b5 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1140,19 +1140,27 @@ static __always_inline void *__do_krealloc(const void *p, size_t new_size,
void *ret;
size_t ks;
- if (likely(!ZERO_OR_NULL_PTR(p)) && !kasan_check_byte(p))
- return NULL;
-
- ks = ksize(p);
+ /* Don't use instrumented ksize to allow precise KASAN poisoning. */
+ if (likely(!ZERO_OR_NULL_PTR(p))) {
+ if (!kasan_check_byte(p))
+ return NULL;
+ ks = __ksize(p);
+ } else
+ ks = 0;
+ /* If the object still fits, repoison it precisely. */
if (ks >= new_size) {
p = kasan_krealloc((void *)p, new_size, flags);
return (void *)p;
}
ret = kmalloc_track_caller(new_size, flags);
- if (ret && p)
- memcpy(ret, p, ks);
+ if (ret && p) {
+ /* Disable KASAN checks as the object's redzone is accessed. */
+ kasan_disable_current();
+ memcpy(ret, kasan_reset_tag(p), ks);
+ kasan_enable_current();
+ }
return ret;
}
--
2.30.0.365.g02bc693789-goog
* [PATCH 09/12] kasan: ensure poisoning size alignment
2021-02-01 19:43 [PATCH 00/12] kasan: optimizations and fixes for HW_TAGS Andrey Konovalov
` (7 preceding siblings ...)
2021-02-01 19:43 ` [PATCH 08/12] kasan, mm: optimize krealloc poisoning Andrey Konovalov
@ 2021-02-01 19:43 ` Andrey Konovalov
2021-02-03 15:31 ` Marco Elver
2021-02-01 19:43 ` [PATCH 10/12] arm64: kasan: simplify and inline MTE functions Andrey Konovalov
` (2 subsequent siblings)
11 siblings, 1 reply; 35+ messages in thread
From: Andrey Konovalov @ 2021-02-01 19:43 UTC (permalink / raw)
To: Catalin Marinas, Vincenzo Frascino, Dmitry Vyukov,
Alexander Potapenko, Marco Elver
Cc: Branislav Rankov, Andrey Konovalov, Kevin Brodsky, Will Deacon,
linux-kernel, kasan-dev, linux-mm, linux-arm-kernel,
Andrey Ryabinin, Andrew Morton, Peter Collingbourne,
Evgenii Stepanov
A previous change, d99f6a10c161 ("kasan: don't round_up too much"),
attempted to simplify the code by adding a round_up(size) call into
kasan_poison(). While this allows having fewer round_up() calls around
the code, it results in round_up() being called multiple times.
This patch removes round_up() of size from kasan_poison() and ensures
that all callers round_up() the size explicitly. This patch also adds
WARN_ON() alignment checks for address and size to kasan_poison() and
kasan_unpoison().
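A minimal illustration of the new contract (assuming KASAN_GRANULE_SIZE ==
8; the helper name is only for the example):

/*
 * Callers now align the size explicitly, e.g. poisoning a freed object
 * with a 30-byte object_size covers round_up(30, 8) == 32 bytes.
 * kasan_poison() itself WARNs and bails out if the address or the size
 * is not granule-aligned; kasan_unpoison() checks the address only.
 */
static void poison_freed_object_sketch(void *object, size_t object_size)
{
	kasan_poison(object, round_up(object_size, KASAN_GRANULE_SIZE),
		     KASAN_KMALLOC_FREE);
}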
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
mm/kasan/common.c | 9 ++++++---
mm/kasan/kasan.h | 33 ++++++++++++++++++++-------------
mm/kasan/shadow.c | 37 ++++++++++++++++++++++---------------
3 files changed, 48 insertions(+), 31 deletions(-)
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index a51d6ea580b0..5691cca69397 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -261,7 +261,8 @@ void __kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
void __kasan_poison_object_data(struct kmem_cache *cache, void *object)
{
- kasan_poison(object, cache->object_size, KASAN_KMALLOC_REDZONE);
+ kasan_poison(object, round_up(cache->object_size, KASAN_GRANULE_SIZE),
+ KASAN_KMALLOC_REDZONE);
}
/*
@@ -348,7 +349,8 @@ static bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
return true;
}
- kasan_poison(object, cache->object_size, KASAN_KMALLOC_FREE);
+ kasan_poison(object, round_up(cache->object_size, KASAN_GRANULE_SIZE),
+ KASAN_KMALLOC_FREE);
if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine))
return false;
@@ -490,7 +492,8 @@ static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
/* Poison the aligned part of the redzone. */
redzone_start = round_up((unsigned long)(object + size),
KASAN_GRANULE_SIZE);
- redzone_end = (unsigned long)object + cache->object_size;
+ redzone_end = round_up((unsigned long)(object + cache->object_size),
+ KASAN_GRANULE_SIZE);
kasan_poison((void *)redzone_start, redzone_end - redzone_start,
KASAN_KMALLOC_REDZONE);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 6a2882997f23..2f7400a3412f 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -321,30 +321,37 @@ static inline u8 kasan_random_tag(void) { return 0; }
#ifdef CONFIG_KASAN_HW_TAGS
-static inline void kasan_poison(const void *address, size_t size, u8 value)
+static inline void kasan_poison(const void *addr, size_t size, u8 value)
{
- address = kasan_reset_tag(address);
+ addr = kasan_reset_tag(addr);
/* Skip KFENCE memory if called explicitly outside of sl*b. */
- if (is_kfence_address(address))
+ if (is_kfence_address(addr))
return;
- hw_set_mem_tag_range((void *)address,
- round_up(size, KASAN_GRANULE_SIZE), value);
+ if (WARN_ON((u64)addr & KASAN_GRANULE_MASK))
+ return;
+ if (WARN_ON(size & KASAN_GRANULE_MASK))
+ return;
+
+ hw_set_mem_tag_range((void *)addr, size, value);
}
-static inline void kasan_unpoison(const void *address, size_t size)
+static inline void kasan_unpoison(const void *addr, size_t size)
{
- u8 tag = get_tag(address);
+ u8 tag = get_tag(addr);
- address = kasan_reset_tag(address);
+ addr = kasan_reset_tag(addr);
/* Skip KFENCE memory if called explicitly outside of sl*b. */
- if (is_kfence_address(address))
+ if (is_kfence_address(addr))
return;
- hw_set_mem_tag_range((void *)address,
- round_up(size, KASAN_GRANULE_SIZE), tag);
+ if (WARN_ON((u64)addr & KASAN_GRANULE_MASK))
+ return;
+ size = round_up(size, KASAN_GRANULE_SIZE);
+
+ hw_set_mem_tag_range((void *)addr, size, tag);
}
static inline bool kasan_byte_accessible(const void *addr)
@@ -361,7 +368,7 @@ static inline bool kasan_byte_accessible(const void *addr)
/**
* kasan_poison - mark the memory range as unaccessible
* @addr - range start address, must be aligned to KASAN_GRANULE_SIZE
- * @size - range size
+ * @size - range size, must be aligned to KASAN_GRANULE_SIZE
* @value - value that's written to metadata for the range
*
* The size gets aligned to KASAN_GRANULE_SIZE before marking the range.
@@ -371,7 +378,7 @@ void kasan_poison(const void *addr, size_t size, u8 value);
/**
* kasan_unpoison - mark the memory range as accessible
* @addr - range start address, must be aligned to KASAN_GRANULE_SIZE
- * @size - range size
+ * @size - range size, can be unaligned
*
* For the tag-based modes, the @size gets aligned to KASAN_GRANULE_SIZE before
* marking the range.
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index 1ed7817e4ee6..c97f51c557ea 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -69,7 +69,7 @@ void *memcpy(void *dest, const void *src, size_t len)
return __memcpy(dest, src, len);
}
-void kasan_poison(const void *address, size_t size, u8 value)
+void kasan_poison(const void *addr, size_t size, u8 value)
{
void *shadow_start, *shadow_end;
@@ -78,55 +78,62 @@ void kasan_poison(const void *address, size_t size, u8 value)
* some of the callers (e.g. kasan_poison_object_data) pass tagged
* addresses to this function.
*/
- address = kasan_reset_tag(address);
+ addr = kasan_reset_tag(addr);
/* Skip KFENCE memory if called explicitly outside of sl*b. */
- if (is_kfence_address(address))
+ if (is_kfence_address(addr))
return;
- size = round_up(size, KASAN_GRANULE_SIZE);
- shadow_start = kasan_mem_to_shadow(address);
- shadow_end = kasan_mem_to_shadow(address + size);
+ if (WARN_ON((u64)addr & KASAN_GRANULE_MASK))
+ return;
+ if (WARN_ON(size & KASAN_GRANULE_MASK))
+ return;
+
+ shadow_start = kasan_mem_to_shadow(addr);
+ shadow_end = kasan_mem_to_shadow(addr + size);
__memset(shadow_start, value, shadow_end - shadow_start);
}
EXPORT_SYMBOL(kasan_poison);
#ifdef CONFIG_KASAN_GENERIC
-void kasan_poison_last_granule(const void *address, size_t size)
+void kasan_poison_last_granule(const void *addr, size_t size)
{
if (size & KASAN_GRANULE_MASK) {
- u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size);
+ u8 *shadow = (u8 *)kasan_mem_to_shadow(addr + size);
*shadow = size & KASAN_GRANULE_MASK;
}
}
#endif
-void kasan_unpoison(const void *address, size_t size)
+void kasan_unpoison(const void *addr, size_t size)
{
- u8 tag = get_tag(address);
+ u8 tag = get_tag(addr);
/*
* Perform shadow offset calculation based on untagged address, as
* some of the callers (e.g. kasan_unpoison_object_data) pass tagged
* addresses to this function.
*/
- address = kasan_reset_tag(address);
+ addr = kasan_reset_tag(addr);
/*
* Skip KFENCE memory if called explicitly outside of sl*b. Also note
* that calls to ksize(), where size is not a multiple of machine-word
* size, would otherwise poison the invalid portion of the word.
*/
- if (is_kfence_address(address))
+ if (is_kfence_address(addr))
+ return;
+
+ if (WARN_ON((u64)addr & KASAN_GRANULE_MASK))
return;
- /* Unpoison round_up(size, KASAN_GRANULE_SIZE) bytes. */
- kasan_poison(address, size, tag);
+ /* Unpoison all granules that cover the object. */
+ kasan_poison(addr, round_up(size, KASAN_GRANULE_SIZE), tag);
/* Partially poison the last granule for the generic mode. */
if (IS_ENABLED(CONFIG_KASAN_GENERIC))
- kasan_poison_last_granule(address, size);
+ kasan_poison_last_granule(addr, size);
}
#ifdef CONFIG_MEMORY_HOTPLUG
--
2.30.0.365.g02bc693789-goog
* [PATCH 10/12] arm64: kasan: simplify and inline MTE functions
2021-02-01 19:43 [PATCH 00/12] kasan: optimizations and fixes for HW_TAGS Andrey Konovalov
` (8 preceding siblings ...)
2021-02-01 19:43 ` [PATCH 09/12] kasan: ensure poisoning size alignment Andrey Konovalov
@ 2021-02-01 19:43 ` Andrey Konovalov
2021-02-01 22:44 ` Andrew Morton
` (2 more replies)
2021-02-01 19:43 ` [PATCH 11/12] kasan: always inline HW_TAGS helper functions Andrey Konovalov
2021-02-01 19:43 ` [PATCH 12/12] arm64: kasan: export MTE symbols for KASAN tests Andrey Konovalov
11 siblings, 3 replies; 35+ messages in thread
From: Andrey Konovalov @ 2021-02-01 19:43 UTC (permalink / raw)
To: Catalin Marinas, Vincenzo Frascino, Dmitry Vyukov,
Alexander Potapenko, Marco Elver
Cc: Branislav Rankov, Andrey Konovalov, Kevin Brodsky, Will Deacon,
linux-kernel, kasan-dev, linux-mm, linux-arm-kernel,
Andrey Ryabinin, Andrew Morton, Peter Collingbourne,
Evgenii Stepanov
This change provides a simpler implementation of mte_get_mem_tag(),
mte_get_random_tag(), and mte_set_mem_tag_range().
Simplifications include removing system_supports_mte() checks, as these
functions are only called from the KASAN runtime, which has already checked
system_supports_mte(). Besides that, size and address alignment checks
are removed from mte_set_mem_tag_range(), as KASAN now does those.
This change also moves these functions into the asm/mte-kasan.h header
and implements mte_set_mem_tag_range() via inline assembly to avoid
unnecessary function calls.
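As a rough illustration of the result (a sketch, not code from the patch):
with the helpers inline, tagging a range boils down to the loop below
emitted directly at the call site, instead of a call into
arch/arm64/kernel/mte.c plus the assembly routine in mte.S:

/* What an inlined mte_set_mem_tag_range(addr, size, tag) roughly expands
 * to; addr and size are assumed to be MTE_GRANULE_SIZE aligned. */
u64 curr = (u64)__tag_set(addr, tag);
u64 end = curr + size;

do {
	/* stg writes the pointer's tag as the allocation tag of one granule. */
	asm volatile(__MTE_PREAMBLE "stg %0, [%0]" : : "r" (curr) : "memory");
	curr += MTE_GRANULE_SIZE;
} while (curr != end);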
Co-developed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
arch/arm64/include/asm/cache.h | 1 -
arch/arm64/include/asm/kasan.h | 1 +
arch/arm64/include/asm/mte-def.h | 2 +
arch/arm64/include/asm/mte-kasan.h | 64 ++++++++++++++++++++++++++----
arch/arm64/include/asm/mte.h | 2 -
arch/arm64/kernel/mte.c | 46 ---------------------
arch/arm64/lib/mte.S | 16 --------
7 files changed, 60 insertions(+), 72 deletions(-)
diff --git a/arch/arm64/include/asm/cache.h b/arch/arm64/include/asm/cache.h
index 77cbbe3625f2..a074459f8f2f 100644
--- a/arch/arm64/include/asm/cache.h
+++ b/arch/arm64/include/asm/cache.h
@@ -6,7 +6,6 @@
#define __ASM_CACHE_H
#include <asm/cputype.h>
-#include <asm/mte-kasan.h>
#define CTR_L1IP_SHIFT 14
#define CTR_L1IP_MASK 3
diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h
index 0aaf9044cd6a..12d5f47f7dbe 100644
--- a/arch/arm64/include/asm/kasan.h
+++ b/arch/arm64/include/asm/kasan.h
@@ -6,6 +6,7 @@
#include <linux/linkage.h>
#include <asm/memory.h>
+#include <asm/mte-kasan.h>
#include <asm/pgtable-types.h>
#define arch_kasan_set_tag(addr, tag) __tag_set(addr, tag)
diff --git a/arch/arm64/include/asm/mte-def.h b/arch/arm64/include/asm/mte-def.h
index 2d73a1612f09..cf241b0f0a42 100644
--- a/arch/arm64/include/asm/mte-def.h
+++ b/arch/arm64/include/asm/mte-def.h
@@ -11,4 +11,6 @@
#define MTE_TAG_SIZE 4
#define MTE_TAG_MASK GENMASK((MTE_TAG_SHIFT + (MTE_TAG_SIZE - 1)), MTE_TAG_SHIFT)
+#define __MTE_PREAMBLE ARM64_ASM_PREAMBLE ".arch_extension memtag\n"
+
#endif /* __ASM_MTE_DEF_H */
diff --git a/arch/arm64/include/asm/mte-kasan.h b/arch/arm64/include/asm/mte-kasan.h
index 8ad981069afb..1f090beda7e6 100644
--- a/arch/arm64/include/asm/mte-kasan.h
+++ b/arch/arm64/include/asm/mte-kasan.h
@@ -11,13 +11,16 @@
#include <linux/types.h>
+#ifdef CONFIG_ARM64_MTE
+
/*
- * The functions below are meant to be used only for the
- * KASAN_HW_TAGS interface defined in asm/memory.h.
+ * These functions are meant to be only used from KASAN runtime through
+ * the arch_*() interface defined in asm/memory.h.
+ * These functions don't include system_supports_mte() checks,
+ * as KASAN only calls them when MTE is supported and enabled.
*/
-#ifdef CONFIG_ARM64_MTE
-static inline u8 mte_get_ptr_tag(void *ptr)
+static __always_inline u8 mte_get_ptr_tag(void *ptr)
{
/* Note: The format of KASAN tags is 0xF<x> */
u8 tag = 0xF0 | (u8)(((u64)(ptr)) >> MTE_TAG_SHIFT);
@@ -25,9 +28,54 @@ static inline u8 mte_get_ptr_tag(void *ptr)
return tag;
}
-u8 mte_get_mem_tag(void *addr);
-u8 mte_get_random_tag(void);
-void *mte_set_mem_tag_range(void *addr, size_t size, u8 tag);
+/* Get allocation tag for the address. */
+static __always_inline u8 mte_get_mem_tag(void *addr)
+{
+ asm(__MTE_PREAMBLE "ldg %0, [%0]"
+ : "+r" (addr));
+
+ return mte_get_ptr_tag(addr);
+}
+
+/* Generate a random tag. */
+static __always_inline u8 mte_get_random_tag(void)
+{
+ void *addr;
+
+ asm(__MTE_PREAMBLE "irg %0, %0"
+ : "+r" (addr));
+
+ return mte_get_ptr_tag(addr);
+}
+
+/*
+ * Assign allocation tags for a region of memory based on the pointer tag.
+ * Note: The address must be non-NULL and MTE_GRANULE_SIZE aligned and
+ * size must be non-zero and MTE_GRANULE_SIZE aligned.
+ */
+static __always_inline void mte_set_mem_tag_range(void *addr, size_t size, u8 tag)
+{
+ u64 curr, end;
+
+ if (!size)
+ return;
+
+ curr = (u64)__tag_set(addr, tag);
+ end = curr + size;
+
+ do {
+ /*
+ * 'asm volatile' is required to prevent the compiler from moving
+ * the statement outside of the loop.
+ */
+ asm volatile(__MTE_PREAMBLE "stg %0, [%0]"
+ :
+ : "r" (curr)
+ : "memory");
+
+ curr += MTE_GRANULE_SIZE;
+ } while (curr != end);
+}
void mte_enable_kernel_sync(void);
void mte_enable_kernel_async(void);
@@ -47,10 +95,12 @@ static inline u8 mte_get_mem_tag(void *addr)
{
return 0xFF;
}
+
static inline u8 mte_get_random_tag(void)
{
return 0xFF;
}
+
static inline void *mte_set_mem_tag_range(void *addr, size_t size, u8 tag)
{
return addr;
diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
index 237bb2f7309d..43169b978cd3 100644
--- a/arch/arm64/include/asm/mte.h
+++ b/arch/arm64/include/asm/mte.h
@@ -8,8 +8,6 @@
#include <asm/compiler.h>
#include <asm/mte-def.h>
-#define __MTE_PREAMBLE ARM64_ASM_PREAMBLE ".arch_extension memtag\n"
-
#ifndef __ASSEMBLY__
#include <linux/bitfield.h>
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index 7763ac1f2917..8b27b70e1aac 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -19,7 +19,6 @@
#include <asm/barrier.h>
#include <asm/cpufeature.h>
#include <asm/mte.h>
-#include <asm/mte-kasan.h>
#include <asm/ptrace.h>
#include <asm/sysreg.h>
@@ -88,51 +87,6 @@ int memcmp_pages(struct page *page1, struct page *page2)
return ret;
}
-u8 mte_get_mem_tag(void *addr)
-{
- if (!system_supports_mte())
- return 0xFF;
-
- asm(__MTE_PREAMBLE "ldg %0, [%0]"
- : "+r" (addr));
-
- return mte_get_ptr_tag(addr);
-}
-
-u8 mte_get_random_tag(void)
-{
- void *addr;
-
- if (!system_supports_mte())
- return 0xFF;
-
- asm(__MTE_PREAMBLE "irg %0, %0"
- : "+r" (addr));
-
- return mte_get_ptr_tag(addr);
-}
-
-void *mte_set_mem_tag_range(void *addr, size_t size, u8 tag)
-{
- void *ptr = addr;
-
- if ((!system_supports_mte()) || (size == 0))
- return addr;
-
- /* Make sure that size is MTE granule aligned. */
- WARN_ON(size & (MTE_GRANULE_SIZE - 1));
-
- /* Make sure that the address is MTE granule aligned. */
- WARN_ON((u64)addr & (MTE_GRANULE_SIZE - 1));
-
- tag = 0xF0 | tag;
- ptr = (void *)__tag_set(ptr, tag);
-
- mte_assign_mem_tag_range(ptr, size);
-
- return ptr;
-}
-
void mte_init_tags(u64 max_tag)
{
static bool gcr_kernel_excl_initialized;
diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
index 9e1a12e10053..351537c12f36 100644
--- a/arch/arm64/lib/mte.S
+++ b/arch/arm64/lib/mte.S
@@ -149,19 +149,3 @@ SYM_FUNC_START(mte_restore_page_tags)
ret
SYM_FUNC_END(mte_restore_page_tags)
-
-/*
- * Assign allocation tags for a region of memory based on the pointer tag
- * x0 - source pointer
- * x1 - size
- *
- * Note: The address must be non-NULL and MTE_GRANULE_SIZE aligned and
- * size must be non-zero and MTE_GRANULE_SIZE aligned.
- */
-SYM_FUNC_START(mte_assign_mem_tag_range)
-1: stg x0, [x0]
- add x0, x0, #MTE_GRANULE_SIZE
- subs x1, x1, #MTE_GRANULE_SIZE
- b.gt 1b
- ret
-SYM_FUNC_END(mte_assign_mem_tag_range)
--
2.30.0.365.g02bc693789-goog
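For readers without MTE hardware, here is a rough standalone userspace sketch of the
granule-by-granule walk that the new mte_set_mem_tag_range() above performs. The stg
instruction is replaced by a plain store into a stand-in tag array, and the MTE granule
is modelled with a local GRANULE_SIZE define (16 bytes, as on arm64); all names are
local to the sketch and only the loop shape mirrors the patch.

#include <stdint.h>
#include <stdio.h>

#define GRANULE_SIZE	16			/* models MTE_GRANULE_SIZE */
#define POOL_SIZE	256

static uint8_t pool[POOL_SIZE];			/* the "memory" being tagged */
static uint8_t tags[POOL_SIZE / GRANULE_SIZE];	/* one tag slot per granule */

/* Walks [addr, addr + size) one granule at a time, like the stg loop. */
static void set_tag_range(void *addr, size_t size, uint8_t tag)
{
	uintptr_t curr = (uintptr_t)addr;
	uintptr_t end = curr + size;

	if (!size)
		return;

	do {
		/* stand-in for: stg %0, [%0] */
		tags[(curr - (uintptr_t)pool) / GRANULE_SIZE] = tag;
		curr += GRANULE_SIZE;
	} while (curr != end);
}

int main(void)
{
	int i;

	/* Tag the second and third granules of the pool with tag 0x3. */
	set_tag_range(pool + GRANULE_SIZE, 2 * GRANULE_SIZE, 0x3);

	for (i = 0; i < 4; i++)
		printf("granule %d: tag %#x\n", i, tags[i]);
	return 0;
}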
^ permalink raw reply related [flat|nested] 35+ messages in thread
* [PATCH 11/12] kasan: always inline HW_TAGS helper functions
2021-02-01 19:43 [PATCH 00/12] kasan: optimizations and fixes for HW_TAGS Andrey Konovalov
` (9 preceding siblings ...)
2021-02-01 19:43 ` [PATCH 10/12] arm64: kasan: simplify and inline MTE functions Andrey Konovalov
@ 2021-02-01 19:43 ` Andrey Konovalov
2021-02-03 15:51 ` Marco Elver
2021-02-01 19:43 ` [PATCH 12/12] arm64: kasan: export MTE symbols for KASAN tests Andrey Konovalov
11 siblings, 1 reply; 35+ messages in thread
From: Andrey Konovalov @ 2021-02-01 19:43 UTC (permalink / raw)
To: Catalin Marinas, Vincenzo Frascino, Dmitry Vyukov,
Alexander Potapenko, Marco Elver
Cc: Branislav Rankov, Andrey Konovalov, Kevin Brodsky, Will Deacon,
linux-kernel, kasan-dev, linux-mm, linux-arm-kernel,
Andrey Ryabinin, Andrew Morton, Peter Collingbourne,
Evgenii Stepanov
Mark all static functions in common.c and kasan.h that are used for
hardware tag-based KASAN as __always_inline to avoid unnecessary
function calls.
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
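As a side note for anyone unfamiliar with the attribute: unlike plain static inline,
__always_inline forces the compiler to inline the body even when its heuristics (or -O0)
would otherwise emit a call. Below is a minimal standalone sketch using the underlying
GCC/Clang attribute directly; the __always_inline define and square() are local to the
sketch and only stand in for the kernel macro.

#include <stdio.h>

/* Userspace stand-in for the kernel's __always_inline macro. */
#define __always_inline	inline __attribute__((__always_inline__))

/*
 * With plain "static inline" the compiler may still emit an out-of-line
 * copy and a call; with __always_inline the body must be inlined into
 * every caller, which is the point of this patch for the hot helpers.
 */
static __always_inline int square(int x)
{
	return x * x;
}

int main(void)
{
	printf("%d\n", square(7));
	return 0;
}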
mm/kasan/common.c | 13 +++++++------
mm/kasan/kasan.h | 6 +++---
2 files changed, 10 insertions(+), 9 deletions(-)
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 5691cca69397..2004ecd6e43c 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -279,7 +279,8 @@ void __kasan_poison_object_data(struct kmem_cache *cache, void *object)
* based on objects indexes, so that objects that are next to each other
* get different tags.
*/
-static u8 assign_tag(struct kmem_cache *cache, const void *object, bool init)
+static __always_inline u8 assign_tag(struct kmem_cache *cache,
+ const void *object, bool init)
{
if (IS_ENABLED(CONFIG_KASAN_GENERIC))
return 0xff;
@@ -321,8 +322,8 @@ void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
return (void *)object;
}
-static bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
- unsigned long ip, bool quarantine)
+static __always_inline bool ____kasan_slab_free(struct kmem_cache *cache,
+ void *object, unsigned long ip, bool quarantine)
{
u8 tag;
void *tagged_object;
@@ -366,7 +367,7 @@ bool __kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
return ____kasan_slab_free(cache, object, ip, true);
}
-static bool ____kasan_kfree_large(void *ptr, unsigned long ip)
+static __always_inline bool ____kasan_kfree_large(void *ptr, unsigned long ip)
{
if (ptr != page_address(virt_to_head_page(ptr))) {
kasan_report_invalid_free(ptr, ip);
@@ -461,8 +462,8 @@ void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
return tagged_object;
}
-static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
- size_t size, gfp_t flags)
+static __always_inline void *____kasan_kmalloc(struct kmem_cache *cache,
+ const void *object, size_t size, gfp_t flags)
{
unsigned long redzone_start;
unsigned long redzone_end;
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 2f7400a3412f..d5fe72747a53 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -321,7 +321,7 @@ static inline u8 kasan_random_tag(void) { return 0; }
#ifdef CONFIG_KASAN_HW_TAGS
-static inline void kasan_poison(const void *addr, size_t size, u8 value)
+static __always_inline void kasan_poison(const void *addr, size_t size, u8 value)
{
addr = kasan_reset_tag(addr);
@@ -337,7 +337,7 @@ static inline void kasan_poison(const void *addr, size_t size, u8 value)
hw_set_mem_tag_range((void *)addr, size, value);
}
-static inline void kasan_unpoison(const void *addr, size_t size)
+static __always_inline void kasan_unpoison(const void *addr, size_t size)
{
u8 tag = get_tag(addr);
@@ -354,7 +354,7 @@ static inline void kasan_unpoison(const void *addr, size_t size)
hw_set_mem_tag_range((void *)addr, size, tag);
}
-static inline bool kasan_byte_accessible(const void *addr)
+static __always_inline bool kasan_byte_accessible(const void *addr)
{
u8 ptr_tag = get_tag(addr);
u8 mem_tag = hw_get_mem_tag((void *)addr);
--
2.30.0.365.g02bc693789-goog
^ permalink raw reply related [flat|nested] 35+ messages in thread
* [PATCH 12/12] arm64: kasan: export MTE symbols for KASAN tests
2021-02-01 19:43 [PATCH 00/12] kasan: optimizations and fixes for HW_TAGS Andrey Konovalov
` (10 preceding siblings ...)
2021-02-01 19:43 ` [PATCH 11/12] kasan: always inline HW_TAGS helper functions Andrey Konovalov
@ 2021-02-01 19:43 ` Andrey Konovalov
2021-02-02 10:46 ` Will Deacon
2021-02-02 15:43 ` Catalin Marinas
11 siblings, 2 replies; 35+ messages in thread
From: Andrey Konovalov @ 2021-02-01 19:43 UTC (permalink / raw)
To: Catalin Marinas, Vincenzo Frascino, Dmitry Vyukov,
Alexander Potapenko, Marco Elver
Cc: Branislav Rankov, Andrey Konovalov, Kevin Brodsky, Will Deacon,
linux-kernel, kasan-dev, linux-mm, linux-arm-kernel,
Andrey Ryabinin, Andrew Morton, Peter Collingbourne,
Evgenii Stepanov
Export mte_enable_kernel() and mte_set_report_once() to fix:
ERROR: modpost: "mte_enable_kernel" [lib/test_kasan.ko] undefined!
ERROR: modpost: "mte_set_report_once" [lib/test_kasan.ko] undefined!
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
arch/arm64/kernel/mte.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index 8b27b70e1aac..2c91bd288ea4 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -120,6 +120,7 @@ void mte_enable_kernel_sync(void)
{
__mte_enable_kernel("synchronous", SCTLR_ELx_TCF_SYNC);
}
+EXPORT_SYMBOL(mte_enable_kernel_sync);
void mte_enable_kernel_async(void)
{
@@ -130,6 +131,7 @@ void mte_set_report_once(bool state)
{
WRITE_ONCE(report_fault_once, state);
}
+EXPORT_SYMBOL(mte_set_report_once);
bool mte_report_once(void)
{
--
2.30.0.365.g02bc693789-goog
^ permalink raw reply related [flat|nested] 35+ messages in thread
* Re: [PATCH 10/12] arm64: kasan: simplify and inline MTE functions
2021-02-01 19:43 ` [PATCH 10/12] arm64: kasan: simplify and inline MTE functions Andrey Konovalov
@ 2021-02-01 22:44 ` Andrew Morton
2021-02-04 12:39 ` Vincenzo Frascino
2021-02-02 15:42 ` Catalin Marinas
2021-02-04 12:37 ` Vincenzo Frascino
2 siblings, 1 reply; 35+ messages in thread
From: Andrew Morton @ 2021-02-01 22:44 UTC (permalink / raw)
To: Andrey Konovalov
Cc: linux-arm-kernel, Marco Elver, Catalin Marinas, Kevin Brodsky,
Will Deacon, Branislav Rankov, kasan-dev, linux-kernel, linux-mm,
Alexander Potapenko, Evgenii Stepanov, Andrey Ryabinin,
Vincenzo Frascino, Peter Collingbourne, Dmitry Vyukov
On Mon, 1 Feb 2021 20:43:34 +0100 Andrey Konovalov <andreyknvl@google.com> wrote:
> This change provides a simpler implementation of mte_get_mem_tag(),
> mte_get_random_tag(), and mte_set_mem_tag_range().
>
> Simplifications include removing system_supports_mte() checks as these
> functions are only called from the KASAN runtime, which has already checked
> system_supports_mte(). Besides that, size and address alignment checks
> are removed from mte_set_mem_tag_range(), as KASAN now does those.
>
> This change also moves these functions into the asm/mte-kasan.h header
> and implements mte_set_mem_tag_range() via inline assembly to avoid
> unnecessary function calls.
>
> Co-developed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Co-developed-by requires a Signed-off-by: as well. Vincenzo, please
send us one?
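For reference, the expected form (per the kernel's patch-submission documentation) is the
Co-developed-by tag immediately followed by the co-developer's own sign-off, e.g.:

Co-developed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>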
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH 12/12] arm64: kasan: export MTE symbols for KASAN tests
2021-02-01 19:43 ` [PATCH 12/12] arm64: kasan: export MTE symbols for KASAN tests Andrey Konovalov
@ 2021-02-02 10:46 ` Will Deacon
2021-02-02 13:42 ` Andrey Konovalov
2021-02-02 15:43 ` Catalin Marinas
1 sibling, 1 reply; 35+ messages in thread
From: Will Deacon @ 2021-02-02 10:46 UTC (permalink / raw)
To: Andrey Konovalov
Cc: Marco Elver, Catalin Marinas, Kevin Brodsky, Will Deacon,
Branislav Rankov, kasan-dev, linux-kernel, linux-mm,
Evgenii Stepanov, Alexander Potapenko, linux-arm-kernel,
Andrey Ryabinin, Andrew Morton, Vincenzo Frascino,
Peter Collingbourne, Dmitry Vyukov
On Mon, Feb 01, 2021 at 08:43:36PM +0100, Andrey Konovalov wrote:
> Export mte_enable_kernel() and mte_set_report_once() to fix:
>
> ERROR: modpost: "mte_enable_kernel" [lib/test_kasan.ko] undefined!
> ERROR: modpost: "mte_set_report_once" [lib/test_kasan.ko] undefined!
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
> ---
> arch/arm64/kernel/mte.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> index 8b27b70e1aac..2c91bd288ea4 100644
> --- a/arch/arm64/kernel/mte.c
> +++ b/arch/arm64/kernel/mte.c
> @@ -120,6 +120,7 @@ void mte_enable_kernel_sync(void)
> {
> __mte_enable_kernel("synchronous", SCTLR_ELx_TCF_SYNC);
> }
> +EXPORT_SYMBOL(mte_enable_kernel_sync);
>
> void mte_enable_kernel_async(void)
> {
> @@ -130,6 +131,7 @@ void mte_set_report_once(bool state)
> {
> WRITE_ONCE(report_fault_once, state);
> }
> +EXPORT_SYMBOL(mte_set_report_once);
EXPORT_SYMBOL_GPL ?
Will
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH 12/12] arm64: kasan: export MTE symbols for KASAN tests
2021-02-02 10:46 ` Will Deacon
@ 2021-02-02 13:42 ` Andrey Konovalov
0 siblings, 0 replies; 35+ messages in thread
From: Andrey Konovalov @ 2021-02-02 13:42 UTC (permalink / raw)
To: Will Deacon
Cc: Marco Elver, Catalin Marinas, Kevin Brodsky, Will Deacon,
Branislav Rankov, kasan-dev, LKML, Linux Memory Management List,
Evgenii Stepanov, Alexander Potapenko, Linux ARM,
Andrey Ryabinin, Andrew Morton, Vincenzo Frascino,
Peter Collingbourne, Dmitry Vyukov
On Tue, Feb 2, 2021 at 11:46 AM Will Deacon <will@kernel.org> wrote:
>
> On Mon, Feb 01, 2021 at 08:43:36PM +0100, Andrey Konovalov wrote:
> > Export mte_enable_kernel() and mte_set_report_once() to fix:
> >
> > ERROR: modpost: "mte_enable_kernel" [lib/test_kasan.ko] undefined!
> > ERROR: modpost: "mte_set_report_once" [lib/test_kasan.ko] undefined!
> >
> > Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
> > ---
> > arch/arm64/kernel/mte.c | 2 ++
> > 1 file changed, 2 insertions(+)
> >
> > diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> > index 8b27b70e1aac..2c91bd288ea4 100644
> > --- a/arch/arm64/kernel/mte.c
> > +++ b/arch/arm64/kernel/mte.c
> > @@ -120,6 +120,7 @@ void mte_enable_kernel_sync(void)
> > {
> > __mte_enable_kernel("synchronous", SCTLR_ELx_TCF_SYNC);
> > }
> > +EXPORT_SYMBOL(mte_enable_kernel_sync);
> >
> > void mte_enable_kernel_async(void)
> > {
> > @@ -130,6 +131,7 @@ void mte_set_report_once(bool state)
> > {
> > WRITE_ONCE(report_fault_once, state);
> > }
> > +EXPORT_SYMBOL(mte_set_report_once);
>
> EXPORT_SYMBOL_GPL ?
SGTM, will do in v2, thanks!
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH 10/12] arm64: kasan: simplify and inline MTE functions
2021-02-01 19:43 ` [PATCH 10/12] arm64: kasan: simplify and inline MTE functions Andrey Konovalov
2021-02-01 22:44 ` Andrew Morton
@ 2021-02-02 15:42 ` Catalin Marinas
2021-02-02 18:04 ` Andrey Konovalov
2021-02-04 12:37 ` Vincenzo Frascino
2 siblings, 1 reply; 35+ messages in thread
From: Catalin Marinas @ 2021-02-02 15:42 UTC (permalink / raw)
To: Andrey Konovalov
Cc: linux-arm-kernel, Marco Elver, Kevin Brodsky, Will Deacon,
Branislav Rankov, kasan-dev, linux-kernel, linux-mm,
Alexander Potapenko, Evgenii Stepanov, Andrey Ryabinin,
Andrew Morton, Vincenzo Frascino, Peter Collingbourne,
Dmitry Vyukov
On Mon, Feb 01, 2021 at 08:43:34PM +0100, Andrey Konovalov wrote:
> +/*
> + * Assign allocation tags for a region of memory based on the pointer tag.
> + * Note: The address must be non-NULL and MTE_GRANULE_SIZE aligned and
> + * size must be non-zero and MTE_GRANULE_SIZE aligned.
> + */
OK, so we rely on the caller to sanity-check the range. Fine by me but I
can see (un)poison_range() only doing this for the size. Do we guarantee
that the start address is aligned?
> +static __always_inline void mte_set_mem_tag_range(void *addr, size_t size, u8 tag)
> +{
> + u64 curr, end;
> +
> + if (!size)
> + return;
> +
> + curr = (u64)__tag_set(addr, tag);
> + end = curr + size;
> +
> + do {
> + /*
> +		 * 'asm volatile' is required to prevent the compiler from moving
> +		 * the statement outside of the loop.
> + */
> + asm volatile(__MTE_PREAMBLE "stg %0, [%0]"
> + :
> + : "r" (curr)
> + : "memory");
> +
> + curr += MTE_GRANULE_SIZE;
> + } while (curr != end);
> +}
>
> void mte_enable_kernel_sync(void);
> void mte_enable_kernel_async(void);
> @@ -47,10 +95,12 @@ static inline u8 mte_get_mem_tag(void *addr)
> {
> return 0xFF;
> }
> +
> static inline u8 mte_get_random_tag(void)
> {
> return 0xFF;
> }
> +
> static inline void *mte_set_mem_tag_range(void *addr, size_t size, u8 tag)
This function used to return a pointer and that's what the dummy static
inline does here. However, the new mte_set_mem_tag_range() doesn't
return anything. We should have consistency between the two (the new
static void definition is fine by me).
Otherwise the patch looks fine.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
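To make the return-type point above concrete, here is a standalone sketch of what the
!CONFIG_ARM64_MTE dummy would presumably become in v2: a void function with an empty
body, matching the new inline implementation. Everything here (including the u8 typedef)
is local to the sketch.

#include <stdint.h>
#include <stddef.h>

typedef uint8_t u8;

/*
 * Dummy used when MTE is not configured: same void return type as the
 * real inline implementation, and a body that does nothing.
 */
static inline void mte_set_mem_tag_range(void *addr, size_t size, u8 tag)
{
	(void)addr;
	(void)size;
	(void)tag;
}

int main(void)
{
	char buf[16];

	mte_set_mem_tag_range(buf, sizeof(buf), 0x3);	/* compiles to a no-op */
	return 0;
}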
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH 12/12] arm64: kasan: export MTE symbols for KASAN tests
2021-02-01 19:43 ` [PATCH 12/12] arm64: kasan: export MTE symbols for KASAN tests Andrey Konovalov
2021-02-02 10:46 ` Will Deacon
@ 2021-02-02 15:43 ` Catalin Marinas
1 sibling, 0 replies; 35+ messages in thread
From: Catalin Marinas @ 2021-02-02 15:43 UTC (permalink / raw)
To: Andrey Konovalov
Cc: linux-arm-kernel, Marco Elver, Kevin Brodsky, Will Deacon,
Branislav Rankov, kasan-dev, linux-kernel, linux-mm,
Alexander Potapenko, Evgenii Stepanov, Andrey Ryabinin,
Andrew Morton, Vincenzo Frascino, Peter Collingbourne,
Dmitry Vyukov
On Mon, Feb 01, 2021 at 08:43:36PM +0100, Andrey Konovalov wrote:
> Export mte_enable_kernel() and mte_set_report_once() to fix:
>
> ERROR: modpost: "mte_enable_kernel" [lib/test_kasan.ko] undefined!
> ERROR: modpost: "mte_set_report_once" [lib/test_kasan.ko] undefined!
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
> ---
> arch/arm64/kernel/mte.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> index 8b27b70e1aac..2c91bd288ea4 100644
> --- a/arch/arm64/kernel/mte.c
> +++ b/arch/arm64/kernel/mte.c
> @@ -120,6 +120,7 @@ void mte_enable_kernel_sync(void)
> {
> __mte_enable_kernel("synchronous", SCTLR_ELx_TCF_SYNC);
> }
> +EXPORT_SYMBOL(mte_enable_kernel_sync);
>
> void mte_enable_kernel_async(void)
> {
> @@ -130,6 +131,7 @@ void mte_set_report_once(bool state)
> {
> WRITE_ONCE(report_fault_once, state);
> }
> +EXPORT_SYMBOL(mte_set_report_once);
With EXPORT_SYMBOL_GPL:
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH 01/12] kasan, mm: don't save alloc stacks twice
2021-02-01 19:43 ` [PATCH 01/12] kasan, mm: don't save alloc stacks twice Andrey Konovalov
@ 2021-02-02 16:06 ` Marco Elver
2021-02-02 18:01 ` Andrey Konovalov
0 siblings, 1 reply; 35+ messages in thread
From: Marco Elver @ 2021-02-02 16:06 UTC (permalink / raw)
To: Andrey Konovalov
Cc: linux-arm-kernel, Branislav Rankov, Catalin Marinas,
Kevin Brodsky, Will Deacon, linux-kernel, kasan-dev, linux-mm,
Alexander Potapenko, Evgenii Stepanov, Andrey Ryabinin,
Andrew Morton, Vincenzo Frascino, Peter Collingbourne,
Dmitry Vyukov
On Mon, Feb 01, 2021 at 08:43PM +0100, Andrey Konovalov wrote:
> Currently KASAN saves allocation stacks in both kasan_slab_alloc() and
> kasan_kmalloc() annotations. This patch changes KASAN to save allocation
> stacks for slab objects from kmalloc caches in kasan_kmalloc() only,
> and stacks for other slab objects in kasan_slab_alloc() only.
>
> This change requires ____kasan_kmalloc() knowing whether the object
> belongs to a kmalloc cache. This is implemented by adding a flag field
> to the kasan_info structure. That flag is only set for kmalloc caches
> via a new kasan_cache_create_kmalloc() annotation.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Marco Elver <elver@google.com>
> ---
> include/linux/kasan.h | 9 +++++++++
> mm/kasan/common.c | 18 ++++++++++++++----
> mm/slab_common.c | 1 +
> 3 files changed, 24 insertions(+), 4 deletions(-)
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 6d8f3227c264..2d5de4092185 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -83,6 +83,7 @@ static inline void kasan_disable_current(void) {}
> struct kasan_cache {
> int alloc_meta_offset;
> int free_meta_offset;
> + bool is_kmalloc;
> };
>
> #ifdef CONFIG_KASAN_HW_TAGS
> @@ -143,6 +144,13 @@ static __always_inline void kasan_cache_create(struct kmem_cache *cache,
> __kasan_cache_create(cache, size, flags);
> }
>
> +void __kasan_cache_create_kmalloc(struct kmem_cache *cache);
> +static __always_inline void kasan_cache_create_kmalloc(struct kmem_cache *cache)
> +{
> + if (kasan_enabled())
> + __kasan_cache_create_kmalloc(cache);
> +}
> +
> size_t __kasan_metadata_size(struct kmem_cache *cache);
> static __always_inline size_t kasan_metadata_size(struct kmem_cache *cache)
> {
> @@ -278,6 +286,7 @@ static inline void kasan_free_pages(struct page *page, unsigned int order) {}
> static inline void kasan_cache_create(struct kmem_cache *cache,
> unsigned int *size,
> slab_flags_t *flags) {}
> +static inline void kasan_cache_create_kmalloc(struct kmem_cache *cache) {}
> static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }
> static inline void kasan_poison_slab(struct page *page) {}
> static inline void kasan_unpoison_object_data(struct kmem_cache *cache,
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index fe852f3cfa42..374049564ea3 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -210,6 +210,11 @@ void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> *size = optimal_size;
> }
>
> +void __kasan_cache_create_kmalloc(struct kmem_cache *cache)
> +{
> + cache->kasan_info.is_kmalloc = true;
> +}
> +
> size_t __kasan_metadata_size(struct kmem_cache *cache)
> {
> if (!kasan_stack_collection_enabled())
> @@ -394,17 +399,22 @@ void __kasan_slab_free_mempool(void *ptr, unsigned long ip)
> }
> }
>
> -static void set_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
> +static void set_alloc_info(struct kmem_cache *cache, void *object,
> + gfp_t flags, bool kmalloc)
> {
> struct kasan_alloc_meta *alloc_meta;
>
> + /* Don't save alloc info for kmalloc caches in kasan_slab_alloc(). */
> + if (cache->kasan_info.is_kmalloc && !kmalloc)
> + return;
> +
> alloc_meta = kasan_get_alloc_meta(cache, object);
> if (alloc_meta)
> kasan_set_track(&alloc_meta->alloc_track, flags);
> }
>
> static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
> - size_t size, gfp_t flags, bool keep_tag)
> + size_t size, gfp_t flags, bool kmalloc)
> {
> unsigned long redzone_start;
> unsigned long redzone_end;
> @@ -423,7 +433,7 @@ static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
> KASAN_GRANULE_SIZE);
> redzone_end = round_up((unsigned long)object + cache->object_size,
> KASAN_GRANULE_SIZE);
> - tag = assign_tag(cache, object, false, keep_tag);
> + tag = assign_tag(cache, object, false, kmalloc);
>
> /* Tag is ignored in set_tag without CONFIG_KASAN_SW/HW_TAGS */
> kasan_unpoison(set_tag(object, tag), size);
> @@ -431,7 +441,7 @@ static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
> KASAN_KMALLOC_REDZONE);
>
> if (kasan_stack_collection_enabled())
> - set_alloc_info(cache, (void *)object, flags);
> + set_alloc_info(cache, (void *)object, flags, kmalloc);
It doesn't bother me too much, but: 'bool kmalloc' shadows function
'kmalloc' so this is technically fine, but using 'kmalloc' as the
variable name here might be confusing and there is a small chance it
might cause problems in a future refactor.
> return set_tag(object, tag);
> }
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index 9aa3d2fe4c55..39d1a8ff9bb8 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -647,6 +647,7 @@ struct kmem_cache *__init create_kmalloc_cache(const char *name,
> panic("Out of memory when creating slab %s\n", name);
>
> create_boot_cache(s, name, size, flags, useroffset, usersize);
> + kasan_cache_create_kmalloc(s);
> list_add(&s->list, &slab_caches);
> s->refcount = 1;
> return s;
> --
> 2.30.0.365.g02bc693789-goog
>
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH 02/12] kasan, mm: optimize kmalloc poisoning
2021-02-01 19:43 ` [PATCH 02/12] kasan, mm: optimize kmalloc poisoning Andrey Konovalov
@ 2021-02-02 16:25 ` Marco Elver
2021-02-02 17:15 ` Andrey Konovalov
0 siblings, 1 reply; 35+ messages in thread
From: Marco Elver @ 2021-02-02 16:25 UTC (permalink / raw)
To: Andrey Konovalov
Cc: linux-arm-kernel, Branislav Rankov, Catalin Marinas,
Kevin Brodsky, Will Deacon, linux-kernel, kasan-dev, linux-mm,
Alexander Potapenko, Evgenii Stepanov, Andrey Ryabinin,
Andrew Morton, Vincenzo Frascino, Peter Collingbourne,
Dmitry Vyukov
On Mon, Feb 01, 2021 at 08:43PM +0100, Andrey Konovalov wrote:
> For allocations from kmalloc caches, kasan_kmalloc() always follows
> kasan_slab_alloc(). Currenly, both of them unpoison the whole object,
> which is unnecessary.
>
> This patch provides separate implementations for both annotations:
> kasan_slab_alloc() unpoisons the whole object, and kasan_kmalloc()
> only poisons the redzone.
>
> For generic KASAN, the redzone start might not be aligned to
> KASAN_GRANULE_SIZE. Therefore, the poisoning is split in two parts:
> kasan_poison_last_granule() poisons the unaligned part, and then
> kasan_poison() poisons the rest.
>
> This patch also clarifies alignment guarantees of each of the poisoning
> functions and drops the unnecessary round_up() call for redzone_end.
>
> With this change, the early SLUB cache annotation needs to be changed to
> kasan_slab_alloc(), as kasan_kmalloc() doesn't unpoison objects now.
> The number of poisoned bytes for objects in this cache stays the same, as
> kmem_cache_node->object_size is equal to sizeof(struct kmem_cache_node).
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
> ---
> mm/kasan/common.c | 93 +++++++++++++++++++++++++++++++----------------
> mm/kasan/kasan.h | 43 +++++++++++++++++++++-
> mm/kasan/shadow.c | 28 +++++++-------
> mm/slub.c | 3 +-
> 4 files changed, 119 insertions(+), 48 deletions(-)
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 374049564ea3..128cb330ca73 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -278,21 +278,11 @@ void __kasan_poison_object_data(struct kmem_cache *cache, void *object)
> * based on objects indexes, so that objects that are next to each other
> * get different tags.
> */
> -static u8 assign_tag(struct kmem_cache *cache, const void *object,
> - bool init, bool keep_tag)
> +static u8 assign_tag(struct kmem_cache *cache, const void *object, bool init)
> {
> if (IS_ENABLED(CONFIG_KASAN_GENERIC))
> return 0xff;
>
> - /*
> - * 1. When an object is kmalloc()'ed, two hooks are called:
> - * kasan_slab_alloc() and kasan_kmalloc(). We assign the
> - * tag only in the first one.
> - * 2. We reuse the same tag for krealloc'ed objects.
> - */
> - if (keep_tag)
> - return get_tag(object);
> -
> /*
> * If the cache neither has a constructor nor has SLAB_TYPESAFE_BY_RCU
> * set, assign a tag when the object is being allocated (init == false).
> @@ -325,7 +315,7 @@ void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
> }
>
> /* Tag is ignored in set_tag() without CONFIG_KASAN_SW/HW_TAGS */
> - object = set_tag(object, assign_tag(cache, object, true, false));
> + object = set_tag(object, assign_tag(cache, object, true));
>
> return (void *)object;
> }
> @@ -413,12 +403,46 @@ static void set_alloc_info(struct kmem_cache *cache, void *object,
> kasan_set_track(&alloc_meta->alloc_track, flags);
> }
>
> +void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
> + void *object, gfp_t flags)
> +{
> + u8 tag;
> + void *tagged_object;
> +
> + if (gfpflags_allow_blocking(flags))
> + kasan_quarantine_reduce();
> +
> + if (unlikely(object == NULL))
> + return NULL;
> +
> + if (is_kfence_address(object))
> + return (void *)object;
> +
> + /*
> + * Generate and assign random tag for tag-based modes.
> + * Tag is ignored in set_tag() for the generic mode.
> + */
> + tag = assign_tag(cache, object, false);
> + tagged_object = set_tag(object, tag);
> +
> + /*
> + * Unpoison the whole object.
> + * For kmalloc() allocations, kasan_kmalloc() will do precise poisoning.
> + */
> + kasan_unpoison(tagged_object, cache->object_size);
> +
> + /* Save alloc info (if possible) for non-kmalloc() allocations. */
> + if (kasan_stack_collection_enabled())
> + set_alloc_info(cache, (void *)object, flags, false);
> +
> + return tagged_object;
> +}
> +
> static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
> - size_t size, gfp_t flags, bool kmalloc)
> + size_t size, gfp_t flags)
> {
> unsigned long redzone_start;
> unsigned long redzone_end;
> - u8 tag;
>
> if (gfpflags_allow_blocking(flags))
> kasan_quarantine_reduce();
> @@ -429,33 +453,41 @@ static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
> if (is_kfence_address(kasan_reset_tag(object)))
> return (void *)object;
>
> + /*
> + * The object has already been unpoisoned by kasan_slab_alloc() for
> + * kmalloc() or by ksize() for krealloc().
> + */
> +
> + /*
> + * The redzone has byte-level precision for the generic mode.
> + * Partially poison the last object granule to cover the unaligned
> + * part of the redzone.
> + */
> + if (IS_ENABLED(CONFIG_KASAN_GENERIC))
> + kasan_poison_last_granule((void *)object, size);
> +
> + /* Poison the aligned part of the redzone. */
> redzone_start = round_up((unsigned long)(object + size),
> KASAN_GRANULE_SIZE);
> - redzone_end = round_up((unsigned long)object + cache->object_size,
> - KASAN_GRANULE_SIZE);
> - tag = assign_tag(cache, object, false, kmalloc);
> -
> - /* Tag is ignored in set_tag without CONFIG_KASAN_SW/HW_TAGS */
> - kasan_unpoison(set_tag(object, tag), size);
> + redzone_end = (unsigned long)object + cache->object_size;
> kasan_poison((void *)redzone_start, redzone_end - redzone_start,
> KASAN_KMALLOC_REDZONE);
>
> + /*
> + * Save alloc info (if possible) for kmalloc() allocations.
> + * This also rewrites the alloc info when called from kasan_krealloc().
> + */
> if (kasan_stack_collection_enabled())
> - set_alloc_info(cache, (void *)object, flags, kmalloc);
> + set_alloc_info(cache, (void *)object, flags, true);
>
> - return set_tag(object, tag);
> -}
> -
> -void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
> - void *object, gfp_t flags)
> -{
> - return ____kasan_kmalloc(cache, object, cache->object_size, flags, false);
> + /* Keep the tag that was set by kasan_slab_alloc(). */
> + return (void *)object;
> }
>
> void * __must_check __kasan_kmalloc(struct kmem_cache *cache, const void *object,
> size_t size, gfp_t flags)
> {
> - return ____kasan_kmalloc(cache, object, size, flags, true);
> + return ____kasan_kmalloc(cache, object, size, flags);
> }
> EXPORT_SYMBOL(__kasan_kmalloc);
>
> @@ -496,8 +528,7 @@ void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flag
> if (unlikely(!PageSlab(page)))
> return __kasan_kmalloc_large(object, size, flags);
> else
> - return ____kasan_kmalloc(page->slab_cache, object, size,
> - flags, true);
> + return ____kasan_kmalloc(page->slab_cache, object, size, flags);
> }
>
> void __kasan_kfree_large(void *ptr, unsigned long ip)
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index dd14e8870023..6a2882997f23 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -358,12 +358,51 @@ static inline bool kasan_byte_accessible(const void *addr)
>
> #else /* CONFIG_KASAN_HW_TAGS */
>
> -void kasan_poison(const void *address, size_t size, u8 value);
> -void kasan_unpoison(const void *address, size_t size);
> +/**
> + * kasan_poison - mark the memory range as unaccessible
> + * @addr - range start address, must be aligned to KASAN_GRANULE_SIZE
> + * @size - range size
> + * @value - value that's written to metadata for the range
> + *
> + * The size gets aligned to KASAN_GRANULE_SIZE before marking the range.
> + */
> +void kasan_poison(const void *addr, size_t size, u8 value);
> +
> +/**
> + * kasan_unpoison - mark the memory range as accessible
> + * @addr - range start address, must be aligned to KASAN_GRANULE_SIZE
> + * @size - range size
> + *
> + * For the tag-based modes, the @size gets aligned to KASAN_GRANULE_SIZE before
> + * marking the range.
> + * For the generic mode, the last granule of the memory range gets partially
> + * unpoisoned based on the @size.
> + */
> +void kasan_unpoison(const void *addr, size_t size);
> +
> bool kasan_byte_accessible(const void *addr);
>
> #endif /* CONFIG_KASAN_HW_TAGS */
>
> +#ifdef CONFIG_KASAN_GENERIC
> +
> +/**
> + * kasan_poison_last_granule - mark the last granule of the memory range as
> + * unaccessible
> + * @addr - range start address, must be aligned to KASAN_GRANULE_SIZE
> + * @size - range size
> + *
> + * This function is only available for the generic mode, as it's the only mode
> + * that has partially poisoned memory granules.
> + */
> +void kasan_poison_last_granule(const void *address, size_t size);
> +
> +#else /* CONFIG_KASAN_GENERIC */
> +
> +static inline void kasan_poison_last_granule(const void *address, size_t size) { }
> +
> +#endif /* CONFIG_KASAN_GENERIC */
> +
> /*
> * Exported functions for interfaces called from assembly or from generated
> * code. Declarations here to avoid warning about missing declarations.
> diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
> index 1372a2fc0ca9..1ed7817e4ee6 100644
> --- a/mm/kasan/shadow.c
> +++ b/mm/kasan/shadow.c
> @@ -69,10 +69,6 @@ void *memcpy(void *dest, const void *src, size_t len)
> return __memcpy(dest, src, len);
> }
>
> -/*
> - * Poisons the shadow memory for 'size' bytes starting from 'addr'.
> - * Memory addresses should be aligned to KASAN_GRANULE_SIZE.
> - */
> void kasan_poison(const void *address, size_t size, u8 value)
> {
> void *shadow_start, *shadow_end;
> @@ -83,12 +79,12 @@ void kasan_poison(const void *address, size_t size, u8 value)
> * addresses to this function.
> */
> address = kasan_reset_tag(address);
> - size = round_up(size, KASAN_GRANULE_SIZE);
>
> /* Skip KFENCE memory if called explicitly outside of sl*b. */
> if (is_kfence_address(address))
> return;
>
> + size = round_up(size, KASAN_GRANULE_SIZE);
> shadow_start = kasan_mem_to_shadow(address);
> shadow_end = kasan_mem_to_shadow(address + size);
>
> @@ -96,6 +92,16 @@ void kasan_poison(const void *address, size_t size, u8 value)
> }
> EXPORT_SYMBOL(kasan_poison);
>
> +#ifdef CONFIG_KASAN_GENERIC
> +void kasan_poison_last_granule(const void *address, size_t size)
> +{
> + if (size & KASAN_GRANULE_MASK) {
> + u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size);
> + *shadow = size & KASAN_GRANULE_MASK;
> + }
> +}
> +#endif
The function declaration still needs to exist in the dead branch if
!IS_ENABLED(CONFIG_KASAN_GENERIC). It appears in that case it's declared
(in kasan.h), but not defined. We shouldn't get linker errors because
the optimizer should remove the dead branch. Nevertheless, is this code
generally acceptable?
> void kasan_unpoison(const void *address, size_t size)
> {
> u8 tag = get_tag(address);
> @@ -115,16 +121,12 @@ void kasan_unpoison(const void *address, size_t size)
> if (is_kfence_address(address))
> return;
>
> + /* Unpoison round_up(size, KASAN_GRANULE_SIZE) bytes. */
> kasan_poison(address, size, tag);
>
> - if (size & KASAN_GRANULE_MASK) {
> - u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size);
> -
> - if (IS_ENABLED(CONFIG_KASAN_SW_TAGS))
> - *shadow = tag;
> - else /* CONFIG_KASAN_GENERIC */
> - *shadow = size & KASAN_GRANULE_MASK;
> - }
> + /* Partially poison the last granule for the generic mode. */
> + if (IS_ENABLED(CONFIG_KASAN_GENERIC))
> + kasan_poison_last_granule(address, size);
> }
>
> #ifdef CONFIG_MEMORY_HOTPLUG
> diff --git a/mm/slub.c b/mm/slub.c
> index 176b1cb0d006..e564008c2329 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3565,8 +3565,7 @@ static void early_kmem_cache_node_alloc(int node)
> init_object(kmem_cache_node, n, SLUB_RED_ACTIVE);
> init_tracking(kmem_cache_node, n);
> #endif
> - n = kasan_kmalloc(kmem_cache_node, n, sizeof(struct kmem_cache_node),
> - GFP_KERNEL);
> + n = kasan_slab_alloc(kmem_cache_node, n, GFP_KERNEL);
> page->freelist = get_freepointer(kmem_cache_node, n);
> page->inuse = 1;
> page->frozen = 0;
> --
> 2.30.0.365.g02bc693789-goog
>
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH 03/12] kasan: optimize large kmalloc poisoning
2021-02-01 19:43 ` [PATCH 03/12] kasan: optimize large " Andrey Konovalov
@ 2021-02-02 16:57 ` Marco Elver
0 siblings, 0 replies; 35+ messages in thread
From: Marco Elver @ 2021-02-02 16:57 UTC (permalink / raw)
To: Andrey Konovalov
Cc: linux-arm-kernel, Branislav Rankov, Catalin Marinas,
Kevin Brodsky, Will Deacon, linux-kernel, kasan-dev, linux-mm,
Alexander Potapenko, Evgenii Stepanov, Andrey Ryabinin,
Andrew Morton, Vincenzo Frascino, Peter Collingbourne,
Dmitry Vyukov
On Mon, Feb 01, 2021 at 08:43PM +0100, Andrey Konovalov wrote:
> Similarly to kasan_kmalloc(), kasan_kmalloc_large() doesn't need
> to unpoison the object, as it has already been unpoisoned by alloc_pages()
> (or by ksize() for krealloc()).
>
> This patch changes kasan_kmalloc_large() to only poison the redzone.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Marco Elver <elver@google.com>
> ---
> mm/kasan/common.c | 20 +++++++++++++++-----
> 1 file changed, 15 insertions(+), 5 deletions(-)
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 128cb330ca73..a7eb553c8e91 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -494,7 +494,6 @@ EXPORT_SYMBOL(__kasan_kmalloc);
> void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
> gfp_t flags)
> {
> - struct page *page;
> unsigned long redzone_start;
> unsigned long redzone_end;
>
> @@ -504,12 +503,23 @@ void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
> if (unlikely(ptr == NULL))
> return NULL;
>
> - page = virt_to_page(ptr);
> + /*
> + * The object has already been unpoisoned by kasan_alloc_pages() for
> + * alloc_pages() or by ksize() for krealloc().
> + */
> +
> + /*
> + * The redzone has byte-level precision for the generic mode.
> + * Partially poison the last object granule to cover the unaligned
> + * part of the redzone.
> + */
> + if (IS_ENABLED(CONFIG_KASAN_GENERIC))
> + kasan_poison_last_granule(ptr, size);
> +
> + /* Poison the aligned part of the redzone. */
> redzone_start = round_up((unsigned long)(ptr + size),
> KASAN_GRANULE_SIZE);
> - redzone_end = (unsigned long)ptr + page_size(page);
> -
> - kasan_unpoison(ptr, size);
> + redzone_end = (unsigned long)ptr + page_size(virt_to_page(ptr));
> kasan_poison((void *)redzone_start, redzone_end - redzone_start,
> KASAN_PAGE_REDZONE);
>
> --
> 2.30.0.365.g02bc693789-goog
>
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH 04/12] kasan: clean up setting free info in kasan_slab_free
2021-02-01 19:43 ` [PATCH 04/12] kasan: clean up setting free info in kasan_slab_free Andrey Konovalov
@ 2021-02-02 17:03 ` Marco Elver
0 siblings, 0 replies; 35+ messages in thread
From: Marco Elver @ 2021-02-02 17:03 UTC (permalink / raw)
To: Andrey Konovalov
Cc: linux-arm-kernel, Branislav Rankov, Catalin Marinas,
Kevin Brodsky, Will Deacon, linux-kernel, kasan-dev, linux-mm,
Alexander Potapenko, Evgenii Stepanov, Andrey Ryabinin,
Andrew Morton, Vincenzo Frascino, Peter Collingbourne,
Dmitry Vyukov
On Mon, Feb 01, 2021 at 08:43PM +0100, Andrey Konovalov wrote:
> Put kasan_stack_collection_enabled() check and kasan_set_free_info()
> calls next to each other.
>
> The way this was previously implemented was a minor optimization that
> relied of the the fact that kasan_stack_collection_enabled() is always
> true for generic KASAN. The confusion that this brings outweights saving
> a few instructions.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Marco Elver <elver@google.com>
> ---
> mm/kasan/common.c | 6 ++----
> 1 file changed, 2 insertions(+), 4 deletions(-)
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index a7eb553c8e91..086bb77292b6 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -350,13 +350,11 @@ static bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
>
> kasan_poison(object, cache->object_size, KASAN_KMALLOC_FREE);
>
> - if (!kasan_stack_collection_enabled())
> - return false;
> -
> if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine))
> return false;
>
> - kasan_set_free_info(cache, object, tag);
> + if (kasan_stack_collection_enabled())
> + kasan_set_free_info(cache, object, tag);
>
> return kasan_quarantine_put(cache, object);
> }
> --
> 2.30.0.365.g02bc693789-goog
>
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH 02/12] kasan, mm: optimize kmalloc poisoning
2021-02-02 16:25 ` Marco Elver
@ 2021-02-02 17:15 ` Andrey Konovalov
2021-02-02 17:39 ` Marco Elver
0 siblings, 1 reply; 35+ messages in thread
From: Andrey Konovalov @ 2021-02-02 17:15 UTC (permalink / raw)
To: Marco Elver
Cc: Linux ARM, Branislav Rankov, Catalin Marinas, Kevin Brodsky,
Will Deacon, LKML, kasan-dev, Linux Memory Management List,
Alexander Potapenko, Evgenii Stepanov, Andrey Ryabinin,
Andrew Morton, Vincenzo Frascino, Peter Collingbourne,
Dmitry Vyukov
On Tue, Feb 2, 2021 at 5:25 PM Marco Elver <elver@google.com> wrote:
>
> > +#ifdef CONFIG_KASAN_GENERIC
> > +
> > +/**
> > + * kasan_poison_last_granule - mark the last granule of the memory range as
> > + * unaccessible
> > + * @addr - range start address, must be aligned to KASAN_GRANULE_SIZE
> > + * @size - range size
> > + *
> > + * This function is only available for the generic mode, as it's the only mode
> > + * that has partially poisoned memory granules.
> > + */
> > +void kasan_poison_last_granule(const void *address, size_t size);
> > +
> > +#else /* CONFIG_KASAN_GENERIC */
> > +
> > +static inline void kasan_poison_last_granule(const void *address, size_t size) { }
^
> > +
> > +#endif /* CONFIG_KASAN_GENERIC */
> > +
> > /*
> > * Exported functions for interfaces called from assembly or from generated
> > * code. Declarations here to avoid warning about missing declarations.
> > @@ -96,6 +92,16 @@ void kasan_poison(const void *address, size_t size, u8 value)
> > }
> > EXPORT_SYMBOL(kasan_poison);
> >
> > +#ifdef CONFIG_KASAN_GENERIC
> > +void kasan_poison_last_granule(const void *address, size_t size)
> > +{
> > + if (size & KASAN_GRANULE_MASK) {
> > + u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size);
> > + *shadow = size & KASAN_GRANULE_MASK;
> > + }
> > +}
> > +#endif
>
> The function declaration still needs to exist in the dead branch if
> !IS_ENABLED(CONFIG_KASAN_GENERIC). It appears in that case it's declared
> (in kasan.h), but not defined. We shouldn't get linker errors because
> the optimizer should remove the dead branch. Nevertheless, is this code
> generally acceptable?
The function is defined as empty when !CONFIG_KASAN_GENERIC, see above.
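For anyone following along, the pattern in question is the usual kernel pairing of an
IS_ENABLED() guard at the call site with an empty static inline stub for the disabled
configuration: the stub keeps the code compiling, and the optimizer drops the dead
branch. A standalone sketch, with IS_ENABLED approximated by a plain macro and all names
local to the sketch:

#include <stdio.h>
#include <stddef.h>

/* Stand-in for CONFIG_KASAN_GENERIC; flip to 0 to model the disabled case. */
#define GENERIC_ENABLED	1

#if GENERIC_ENABLED
static void poison_last_granule(const void *addr, size_t size)
{
	printf("partially poisoning last granule of %p + %zu\n", addr, size);
}
#else
/* Empty stub: the call site below still compiles when the feature is off. */
static inline void poison_last_granule(const void *addr, size_t size) { }
#endif

static void unpoison(const void *addr, size_t size)
{
	/*
	 * Models "if (IS_ENABLED(CONFIG_KASAN_GENERIC))": when disabled, the
	 * branch is dead and gets removed, and the empty stub above means
	 * there is no undefined symbol to worry about either way.
	 */
	if (GENERIC_ENABLED)
		poison_last_granule(addr, size);
}

int main(void)
{
	char buf[20];

	unpoison(buf, sizeof(buf));
	return 0;
}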
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH 02/12] kasan, mm: optimize kmalloc poisoning
2021-02-02 17:15 ` Andrey Konovalov
@ 2021-02-02 17:39 ` Marco Elver
0 siblings, 0 replies; 35+ messages in thread
From: Marco Elver @ 2021-02-02 17:39 UTC (permalink / raw)
To: Andrey Konovalov
Cc: Linux ARM, Branislav Rankov, Catalin Marinas, Kevin Brodsky,
Will Deacon, LKML, kasan-dev, Linux Memory Management List,
Alexander Potapenko, Evgenii Stepanov, Andrey Ryabinin,
Andrew Morton, Vincenzo Frascino, Peter Collingbourne,
Dmitry Vyukov
On Tue, 2 Feb 2021 at 18:16, Andrey Konovalov <andreyknvl@google.com> wrote:
>
> On Tue, Feb 2, 2021 at 5:25 PM Marco Elver <elver@google.com> wrote:
> >
> > > +#ifdef CONFIG_KASAN_GENERIC
> > > +
> > > +/**
> > > + * kasan_poison_last_granule - mark the last granule of the memory range as
> > > + * unaccessible
> > > + * @addr - range start address, must be aligned to KASAN_GRANULE_SIZE
> > > + * @size - range size
> > > + *
> > > + * This function is only available for the generic mode, as it's the only mode
> > > + * that has partially poisoned memory granules.
> > > + */
> > > +void kasan_poison_last_granule(const void *address, size_t size);
> > > +
> > > +#else /* CONFIG_KASAN_GENERIC */
> > > +
> > > +static inline void kasan_poison_last_granule(const void *address, size_t size) { }
>
> ^
>
> > > +
> > > +#endif /* CONFIG_KASAN_GENERIC */
> > > +
> > > /*
> > > * Exported functions for interfaces called from assembly or from generated
> > > * code. Declarations here to avoid warning about missing declarations.
>
> > > @@ -96,6 +92,16 @@ void kasan_poison(const void *address, size_t size, u8 value)
> > > }
> > > EXPORT_SYMBOL(kasan_poison);
> > >
> > > +#ifdef CONFIG_KASAN_GENERIC
> > > +void kasan_poison_last_granule(const void *address, size_t size)
> > > +{
> > > + if (size & KASAN_GRANULE_MASK) {
> > > + u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size);
> > > + *shadow = size & KASAN_GRANULE_MASK;
> > > + }
> > > +}
> > > +#endif
> >
> > The function declaration still needs to exist in the dead branch if
> > !IS_ENABLED(CONFIG_KASAN_GENERIC). It appears in that case it's declared
> > (in kasan.h), but not defined. We shouldn't get linker errors because
> > the optimizer should remove the dead branch. Nevertheless, is this code
> > generally acceptable?
>
> The function is defined as empty when !CONFIG_KASAN_GENERIC, see above.
I missed that, thanks.
Reviewed-by: Marco Elver <elver@google.com>
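For context, here is a small standalone model of the partial-granule encoding that
kasan_poison_last_granule() implements for the generic mode: when the allocation size is
not a multiple of the 8-byte granule, the shadow byte of the last granule records how
many of its leading bytes are accessible (size & KASAN_GRANULE_MASK), and only the
aligned remainder is marked as redzone. The granule size, the 0xFC redzone value and all
names below are local stand-ins for the sketch.

#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define GRANULE_SIZE	8		/* models KASAN_GRANULE_SIZE, generic mode */
#define GRANULE_MASK	(GRANULE_SIZE - 1)
#define REDZONE		0xFC		/* stand-in for KASAN_KMALLOC_REDZONE */

int main(void)
{
	/* Model a 32-byte object slot holding a kmalloc(20) allocation. */
	uint8_t shadow[32 / GRANULE_SIZE];
	size_t size = 20;
	size_t i;

	/* kasan_slab_alloc() unpoisoned the whole object: shadow = 0. */
	memset(shadow, 0, sizeof(shadow));

	/*
	 * kasan_poison_last_granule(): granule 2 covers bytes 16..23, but only
	 * 20 & 7 = 4 of them are valid, so its shadow byte becomes 4.
	 */
	if (size & GRANULE_MASK)
		shadow[size / GRANULE_SIZE] = size & GRANULE_MASK;

	/* kasan_poison() on the aligned tail: bytes 24..31 become redzone. */
	for (i = (size + GRANULE_MASK) / GRANULE_SIZE; i < sizeof(shadow); i++)
		shadow[i] = REDZONE;

	for (i = 0; i < sizeof(shadow); i++)
		printf("shadow[%zu] = %#x\n", i, shadow[i]);
	return 0;
}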
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH 01/12] kasan, mm: don't save alloc stacks twice
2021-02-02 16:06 ` Marco Elver
@ 2021-02-02 18:01 ` Andrey Konovalov
2021-02-02 18:40 ` Marco Elver
0 siblings, 1 reply; 35+ messages in thread
From: Andrey Konovalov @ 2021-02-02 18:01 UTC (permalink / raw)
To: Marco Elver
Cc: Linux ARM, Branislav Rankov, Catalin Marinas, Kevin Brodsky,
Will Deacon, LKML, kasan-dev, Linux Memory Management List,
Alexander Potapenko, Evgenii Stepanov, Andrey Ryabinin,
Andrew Morton, Vincenzo Frascino, Peter Collingbourne,
Dmitry Vyukov
On Tue, Feb 2, 2021 at 5:06 PM Marco Elver <elver@google.com> wrote:
>
> On Mon, Feb 01, 2021 at 08:43PM +0100, Andrey Konovalov wrote:
> > Currently KASAN saves allocation stacks in both kasan_slab_alloc() and
> > kasan_kmalloc() annotations. This patch changes KASAN to save allocation
> > stacks for slab objects from kmalloc caches in kasan_kmalloc() only,
> > and stacks for other slab objects in kasan_slab_alloc() only.
> >
> > This change requires ____kasan_kmalloc() knowing whether the object
> > belongs to a kmalloc cache. This is implemented by adding a flag field
> > to the kasan_info structure. That flag is only set for kmalloc caches
> > via a new kasan_cache_create_kmalloc() annotation.
> >
> > Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
>
> Reviewed-by: Marco Elver <elver@google.com>
>
> > ---
> > include/linux/kasan.h | 9 +++++++++
> > mm/kasan/common.c | 18 ++++++++++++++----
> > mm/slab_common.c | 1 +
> > 3 files changed, 24 insertions(+), 4 deletions(-)
> >
> > diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> > index 6d8f3227c264..2d5de4092185 100644
> > --- a/include/linux/kasan.h
> > +++ b/include/linux/kasan.h
> > @@ -83,6 +83,7 @@ static inline void kasan_disable_current(void) {}
> > struct kasan_cache {
> > int alloc_meta_offset;
> > int free_meta_offset;
> > + bool is_kmalloc;
> > };
> >
> > #ifdef CONFIG_KASAN_HW_TAGS
> > @@ -143,6 +144,13 @@ static __always_inline void kasan_cache_create(struct kmem_cache *cache,
> > __kasan_cache_create(cache, size, flags);
> > }
> >
> > +void __kasan_cache_create_kmalloc(struct kmem_cache *cache);
> > +static __always_inline void kasan_cache_create_kmalloc(struct kmem_cache *cache)
> > +{
> > + if (kasan_enabled())
> > + __kasan_cache_create_kmalloc(cache);
> > +}
> > +
> > size_t __kasan_metadata_size(struct kmem_cache *cache);
> > static __always_inline size_t kasan_metadata_size(struct kmem_cache *cache)
> > {
> > @@ -278,6 +286,7 @@ static inline void kasan_free_pages(struct page *page, unsigned int order) {}
> > static inline void kasan_cache_create(struct kmem_cache *cache,
> > unsigned int *size,
> > slab_flags_t *flags) {}
> > +static inline void kasan_cache_create_kmalloc(struct kmem_cache *cache) {}
> > static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }
> > static inline void kasan_poison_slab(struct page *page) {}
> > static inline void kasan_unpoison_object_data(struct kmem_cache *cache,
> > diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> > index fe852f3cfa42..374049564ea3 100644
> > --- a/mm/kasan/common.c
> > +++ b/mm/kasan/common.c
> > @@ -210,6 +210,11 @@ void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> > *size = optimal_size;
> > }
> >
> > +void __kasan_cache_create_kmalloc(struct kmem_cache *cache)
> > +{
> > + cache->kasan_info.is_kmalloc = true;
> > +}
> > +
> > size_t __kasan_metadata_size(struct kmem_cache *cache)
> > {
> > if (!kasan_stack_collection_enabled())
> > @@ -394,17 +399,22 @@ void __kasan_slab_free_mempool(void *ptr, unsigned long ip)
> > }
> > }
> >
> > -static void set_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
> > +static void set_alloc_info(struct kmem_cache *cache, void *object,
> > + gfp_t flags, bool kmalloc)
> > {
> > struct kasan_alloc_meta *alloc_meta;
> >
> > + /* Don't save alloc info for kmalloc caches in kasan_slab_alloc(). */
> > + if (cache->kasan_info.is_kmalloc && !kmalloc)
> > + return;
> > +
> > alloc_meta = kasan_get_alloc_meta(cache, object);
> > if (alloc_meta)
> > kasan_set_track(&alloc_meta->alloc_track, flags);
> > }
> >
> > static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
> > - size_t size, gfp_t flags, bool keep_tag)
> > + size_t size, gfp_t flags, bool kmalloc)
> > {
> > unsigned long redzone_start;
> > unsigned long redzone_end;
> > @@ -423,7 +433,7 @@ static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
> > KASAN_GRANULE_SIZE);
> > redzone_end = round_up((unsigned long)object + cache->object_size,
> > KASAN_GRANULE_SIZE);
> > - tag = assign_tag(cache, object, false, keep_tag);
> > + tag = assign_tag(cache, object, false, kmalloc);
> >
> > /* Tag is ignored in set_tag without CONFIG_KASAN_SW/HW_TAGS */
> > kasan_unpoison(set_tag(object, tag), size);
> > @@ -431,7 +441,7 @@ static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
> > KASAN_KMALLOC_REDZONE);
> >
> > if (kasan_stack_collection_enabled())
> > - set_alloc_info(cache, (void *)object, flags);
> > + set_alloc_info(cache, (void *)object, flags, kmalloc);
>
> It doesn't bother me too much, but: 'bool kmalloc' shadows function
> 'kmalloc' so this is technically fine, but using 'kmalloc' as the
> variable name here might be confusing and there is a small chance it
> might cause problems in a future refactor.
Good point. Does "is_kmalloc" sound good?
Thanks!
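A tiny standalone illustration of the shadowing being discussed; the local kmalloc()
below only stands in for the real allocator so the example is self-contained. Inside
set_alloc_info() the parameter hides the function, so a later refactor that tried to call
kmalloc() from that scope would fail to compile, which is what renaming the parameter to
is_kmalloc avoids.

#include <stdio.h>
#include <stdbool.h>

/* Local stand-in so the sketch compiles on its own. */
static void *kmalloc(unsigned long size)
{
	(void)size;
	return NULL;
}

static void set_alloc_info(bool kmalloc)
{
	/*
	 * kmalloc(16);   <-- would not compile here: the bool parameter
	 * shadows the function of the same name within this scope.
	 */
	printf("is_kmalloc: %d\n", kmalloc);
}

int main(void)
{
	void *p = kmalloc(8);	/* the function is still visible out here */

	(void)p;
	set_alloc_info(true);
	return 0;
}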
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH 10/12] arm64: kasan: simplify and inline MTE functions
2021-02-02 15:42 ` Catalin Marinas
@ 2021-02-02 18:04 ` Andrey Konovalov
0 siblings, 0 replies; 35+ messages in thread
From: Andrey Konovalov @ 2021-02-02 18:04 UTC (permalink / raw)
To: Catalin Marinas
Cc: Linux ARM, Marco Elver, Kevin Brodsky, Will Deacon,
Branislav Rankov, kasan-dev, LKML, Linux Memory Management List,
Alexander Potapenko, Evgenii Stepanov, Andrey Ryabinin,
Andrew Morton, Vincenzo Frascino, Peter Collingbourne,
Dmitry Vyukov
On Tue, Feb 2, 2021 at 4:42 PM Catalin Marinas <catalin.marinas@arm.com> wrote:
>
> On Mon, Feb 01, 2021 at 08:43:34PM +0100, Andrey Konovalov wrote:
> > +/*
> > + * Assign allocation tags for a region of memory based on the pointer tag.
> > + * Note: The address must be non-NULL and MTE_GRANULE_SIZE aligned and
> > + * size must be non-zero and MTE_GRANULE_SIZE aligned.
> > + */
>
> OK, so we rely on the caller to sanity-check the range. Fine by me but I
> can see (un)poison_range() only doing this for the size. Do we guarantee
> that the start address is aligned?
See the previous patch in the series. kasan_poison() checks and warns
on both unaligned addr and size. kasan_unpoison() checks addr and
rounds up size.
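(As a compact standalone model of the two behaviours described above, with local names
only: a poison() that warns on a misaligned start address or size, and an unpoison() that
rounds the size up to a whole granule before delegating. The real checks live in
kasan_poison()/kasan_unpoison() in the previous patch; this is just their shape.)

#include <stdio.h>
#include <stdint.h>

#define GRANULE_SIZE	8
#define GRANULE_MASK	(GRANULE_SIZE - 1)

/* Models the entry checks: reject a misaligned start address or size. */
static int poison(uintptr_t addr, size_t size)
{
	if ((addr & GRANULE_MASK) || (size & GRANULE_MASK)) {
		fprintf(stderr, "WARN: unaligned range: addr=%#lx size=%zu\n",
			(unsigned long)addr, size);
		return -1;
	}
	printf("poisoning [%#lx, %#lx)\n",
	       (unsigned long)addr, (unsigned long)(addr + size));
	return 0;
}

/* Models kasan_unpoison(): the size is rounded up to a whole granule. */
static int unpoison(uintptr_t addr, size_t size)
{
	size = (size + GRANULE_MASK) & ~(size_t)GRANULE_MASK;
	return poison(addr, size);
}

int main(void)
{
	unpoison(0x1000, 20);	/* size rounded up to 24, passes */
	poison(0x1004, 16);	/* misaligned start, hits the warning path */
	return 0;
}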
> > +static __always_inline void mte_set_mem_tag_range(void *addr, size_t size, u8 tag)
> > +{
> > + u64 curr, end;
> > +
> > + if (!size)
> > + return;
> > +
> > + curr = (u64)__tag_set(addr, tag);
> > + end = curr + size;
> > +
> > + do {
> > + /*
> > +		 * 'asm volatile' is required to prevent the compiler from moving
> > +		 * the statement outside of the loop.
> > + */
> > + asm volatile(__MTE_PREAMBLE "stg %0, [%0]"
> > + :
> > + : "r" (curr)
> > + : "memory");
> > +
> > + curr += MTE_GRANULE_SIZE;
> > + } while (curr != end);
> > +}
> >
> > void mte_enable_kernel_sync(void);
> > void mte_enable_kernel_async(void);
> > @@ -47,10 +95,12 @@ static inline u8 mte_get_mem_tag(void *addr)
> > {
> > return 0xFF;
> > }
> > +
> > static inline u8 mte_get_random_tag(void)
> > {
> > return 0xFF;
> > }
> > +
> > static inline void *mte_set_mem_tag_range(void *addr, size_t size, u8 tag)
>
> This function used to return a pointer and that's what the dummy static
> inline does here. However, the new mte_set_mem_tag_range() doesn't
> return anything. We should have consistency between the two (the new
> static void definition is fine by me).
Right, forgot to update the empty function definition. Will do in v2.
>
> Otherwise the patch looks fine.
>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Thanks!
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH 01/12] kasan, mm: don't save alloc stacks twice
2021-02-02 18:01 ` Andrey Konovalov
@ 2021-02-02 18:40 ` Marco Elver
0 siblings, 0 replies; 35+ messages in thread
From: Marco Elver @ 2021-02-02 18:40 UTC (permalink / raw)
To: Andrey Konovalov
Cc: Linux ARM, Branislav Rankov, Catalin Marinas, Kevin Brodsky,
Will Deacon, LKML, kasan-dev, Linux Memory Management List,
Alexander Potapenko, Evgenii Stepanov, Andrey Ryabinin,
Andrew Morton, Vincenzo Frascino, Peter Collingbourne,
Dmitry Vyukov
On Tue, 2 Feb 2021 at 19:01, 'Andrey Konovalov' via kasan-dev
<kasan-dev@googlegroups.com> wrote:
[...]
> > > @@ -83,6 +83,7 @@ static inline void kasan_disable_current(void) {}
> > > struct kasan_cache {
> > > int alloc_meta_offset;
> > > int free_meta_offset;
> > > + bool is_kmalloc;
[...]
> > > if (kasan_stack_collection_enabled())
> > > - set_alloc_info(cache, (void *)object, flags);
> > > + set_alloc_info(cache, (void *)object, flags, kmalloc);
> >
> > It doesn't bother me too much, but: 'bool kmalloc' shadows function
> > 'kmalloc' so this is technically fine, but using 'kmalloc' as the
> > variable name here might be confusing and there is a small chance it
> > might cause problems in a future refactor.
>
> Good point. Does "is_kmalloc" sound good?
Sure, that's also consistent with the new struct field.
Thanks,
-- Marco
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH 05/12] kasan: unify large kfree checks
2021-02-01 19:43 ` [PATCH 05/12] kasan: unify large kfree checks Andrey Konovalov
@ 2021-02-03 12:13 ` Marco Elver
0 siblings, 0 replies; 35+ messages in thread
From: Marco Elver @ 2021-02-03 12:13 UTC (permalink / raw)
To: Andrey Konovalov
Cc: linux-arm-kernel, Branislav Rankov, Catalin Marinas,
Kevin Brodsky, Will Deacon, linux-kernel, kasan-dev, linux-mm,
Alexander Potapenko, Evgenii Stepanov, Andrey Ryabinin,
Andrew Morton, Vincenzo Frascino, Peter Collingbourne,
Dmitry Vyukov
On Mon, Feb 01, 2021 at 08:43PM +0100, Andrey Konovalov wrote:
> Unify checks in kasan_kfree_large() and in kasan_slab_free_mempool()
> for large allocations as it's done for small kfree() allocations.
>
> With this change, kasan_slab_free_mempool() starts checking that the
> first byte of the memory that's being freed is accessible.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Marco Elver <elver@google.com>
> ---
> include/linux/kasan.h | 16 ++++++++--------
> mm/kasan/common.c | 36 ++++++++++++++++++++++++++----------
> 2 files changed, 34 insertions(+), 18 deletions(-)
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 2d5de4092185..d53ea3c047bc 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -200,6 +200,13 @@ static __always_inline bool kasan_slab_free(struct kmem_cache *s, void *object)
> return false;
> }
>
> +void __kasan_kfree_large(void *ptr, unsigned long ip);
> +static __always_inline void kasan_kfree_large(void *ptr)
> +{
> + if (kasan_enabled())
> + __kasan_kfree_large(ptr, _RET_IP_);
> +}
> +
> void __kasan_slab_free_mempool(void *ptr, unsigned long ip);
> static __always_inline void kasan_slab_free_mempool(void *ptr)
> {
> @@ -247,13 +254,6 @@ static __always_inline void * __must_check kasan_krealloc(const void *object,
> return (void *)object;
> }
>
> -void __kasan_kfree_large(void *ptr, unsigned long ip);
> -static __always_inline void kasan_kfree_large(void *ptr)
> -{
> - if (kasan_enabled())
> - __kasan_kfree_large(ptr, _RET_IP_);
> -}
> -
> /*
> * Unlike kasan_check_read/write(), kasan_check_byte() is performed even for
> * the hardware tag-based mode that doesn't rely on compiler instrumentation.
> @@ -302,6 +302,7 @@ static inline bool kasan_slab_free(struct kmem_cache *s, void *object)
> {
> return false;
> }
> +static inline void kasan_kfree_large(void *ptr) {}
> static inline void kasan_slab_free_mempool(void *ptr) {}
> static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
> gfp_t flags)
> @@ -322,7 +323,6 @@ static inline void *kasan_krealloc(const void *object, size_t new_size,
> {
> return (void *)object;
> }
> -static inline void kasan_kfree_large(void *ptr) {}
> static inline bool kasan_check_byte(const void *address)
> {
> return true;
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 086bb77292b6..9c64a00bbf9c 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -364,6 +364,31 @@ bool __kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
> return ____kasan_slab_free(cache, object, ip, true);
> }
>
> +static bool ____kasan_kfree_large(void *ptr, unsigned long ip)
> +{
> + if (ptr != page_address(virt_to_head_page(ptr))) {
> + kasan_report_invalid_free(ptr, ip);
> + return true;
> + }
> +
> + if (!kasan_byte_accessible(ptr)) {
> + kasan_report_invalid_free(ptr, ip);
> + return true;
> + }
> +
> + /*
> + * The object will be poisoned by kasan_free_pages() or
> + * kasan_slab_free_mempool().
> + */
> +
> + return false;
> +}
> +
> +void __kasan_kfree_large(void *ptr, unsigned long ip)
> +{
> + ____kasan_kfree_large(ptr, ip);
> +}
> +
> void __kasan_slab_free_mempool(void *ptr, unsigned long ip)
> {
> struct page *page;
> @@ -377,10 +402,8 @@ void __kasan_slab_free_mempool(void *ptr, unsigned long ip)
> * KMALLOC_MAX_SIZE, and kmalloc falls back onto page_alloc.
> */
> if (unlikely(!PageSlab(page))) {
> - if (ptr != page_address(page)) {
> - kasan_report_invalid_free(ptr, ip);
> + if (____kasan_kfree_large(ptr, ip))
> return;
> - }
> kasan_poison(ptr, page_size(page), KASAN_FREE_PAGE);
> } else {
> ____kasan_slab_free(page->slab_cache, ptr, ip, false);
> @@ -539,13 +562,6 @@ void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flag
> return ____kasan_kmalloc(page->slab_cache, object, size, flags);
> }
>
> -void __kasan_kfree_large(void *ptr, unsigned long ip)
> -{
> - if (ptr != page_address(virt_to_head_page(ptr)))
> - kasan_report_invalid_free(ptr, ip);
> - /* The object will be poisoned by kasan_free_pages(). */
> -}
> -
> bool __kasan_check_byte(const void *address, unsigned long ip)
> {
> if (!kasan_byte_accessible(address)) {
> --
> 2.30.0.365.g02bc693789-goog
>
* Re: [PATCH 08/12] kasan, mm: optimize krealloc poisoning
2021-02-01 19:43 ` [PATCH 08/12] kasan, mm: optimize krealloc poisoning Andrey Konovalov
@ 2021-02-03 14:34 ` Marco Elver
0 siblings, 0 replies; 35+ messages in thread
From: Marco Elver @ 2021-02-03 14:34 UTC (permalink / raw)
To: Andrey Konovalov
Cc: linux-arm-kernel, Branislav Rankov, Catalin Marinas,
Kevin Brodsky, Will Deacon, linux-kernel, kasan-dev, linux-mm,
Alexander Potapenko, Evgenii Stepanov, Andrey Ryabinin,
Andrew Morton, Vincenzo Frascino, Peter Collingbourne,
Dmitry Vyukov
On Mon, Feb 01, 2021 at 08:43PM +0100, Andrey Konovalov wrote:
> Currently, krealloc() always calls ksize(), which unpoisons the whole
> object including the redzone. This is inefficient, as kasan_krealloc()
> repoisons the redzone for objects that fit into the same buffer.
>
> This patch changes krealloc() instrumentation to use uninstrumented
> __ksize() that doesn't unpoison the memory. Instead, kasan_krealloc()
> is changed to unpoison the memory excluding the redzone.
>
> For objects that don't fit into the old allocation, this patch disables
> KASAN accessibility checks when copying memory into a new object instead
> of unpoisoning it.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
> ---
> mm/kasan/common.c | 12 ++++++++++--
> mm/slab_common.c | 20 ++++++++++++++------
> 2 files changed, 24 insertions(+), 8 deletions(-)
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 9c64a00bbf9c..a51d6ea580b0 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -476,7 +476,7 @@ static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
>
> /*
> * The object has already been unpoisoned by kasan_slab_alloc() for
> - * kmalloc() or by ksize() for krealloc().
> + * kmalloc() or by kasan_krealloc() for krealloc().
> */
>
> /*
> @@ -526,7 +526,7 @@ void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
>
> /*
> * The object has already been unpoisoned by kasan_alloc_pages() for
> - * alloc_pages() or by ksize() for krealloc().
> + * alloc_pages() or by kasan_krealloc() for krealloc().
> */
>
> /*
> @@ -554,8 +554,16 @@ void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flag
> if (unlikely(object == ZERO_SIZE_PTR))
> return (void *)object;
>
> + /*
> + * Unpoison the object's data.
> + * Part of it might already have been unpoisoned, but it's unknown
> + * how big that part is.
> + */
> + kasan_unpoison(object, size);
> +
> page = virt_to_head_page(object);
>
> + /* Piggy-back on kmalloc() instrumentation to poison the redzone. */
> if (unlikely(!PageSlab(page)))
> return __kasan_kmalloc_large(object, size, flags);
> else
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index dad70239b54c..821f657d38b5 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -1140,19 +1140,27 @@ static __always_inline void *__do_krealloc(const void *p, size_t new_size,
> void *ret;
> size_t ks;
>
> - if (likely(!ZERO_OR_NULL_PTR(p)) && !kasan_check_byte(p))
> - return NULL;
> -
> - ks = ksize(p);
> + /* Don't use instrumented ksize to allow precise KASAN poisoning. */
> + if (likely(!ZERO_OR_NULL_PTR(p))) {
> + if (!kasan_check_byte(p))
> + return NULL;
> + ks = __ksize(p);
> + } else
> + ks = 0;
>
This unfortunately broke KFENCE:
https://syzkaller.appspot.com/bug?extid=e444e1006d07feef0ef3 (plus various
other false positives).
We need to use ksize() here, as __ksize() is unaware of KFENCE. Or
somehow add the same check here that ksize() uses to get the real object
size.
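One possible shape of such a fix, applied to the hunk quoted above, as a
sketch only (it assumes kfence_ksize() from <linux/kfence.h> is acceptable
as the KFENCE-aware size lookup; the actual fix may differ):

	/* Don't use instrumented ksize() to allow precise KASAN poisoning. */
	if (likely(!ZERO_OR_NULL_PTR(p))) {
		if (!kasan_check_byte(p))
			return NULL;
		/* __ksize() is unaware of KFENCE, so handle KFENCE objects first. */
		ks = kfence_ksize(p) ?: __ksize(p);
	} else
		ks = 0;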
> + /* If the object still fits, repoison it precisely. */
> if (ks >= new_size) {
> p = kasan_krealloc((void *)p, new_size, flags);
> return (void *)p;
> }
>
> ret = kmalloc_track_caller(new_size, flags);
> - if (ret && p)
> - memcpy(ret, p, ks);
> + if (ret && p) {
> + /* Disable KASAN checks as the object's redzone is accessed. */
> + kasan_disable_current();
> + memcpy(ret, kasan_reset_tag(p), ks);
> + kasan_enable_current();
> + }
>
> return ret;
> }
> --
> 2.30.0.365.g02bc693789-goog
>
* Re: [PATCH 06/12] kasan: rework krealloc tests
2021-02-01 19:43 ` [PATCH 06/12] kasan: rework krealloc tests Andrey Konovalov
@ 2021-02-03 14:48 ` Marco Elver
0 siblings, 0 replies; 35+ messages in thread
From: Marco Elver @ 2021-02-03 14:48 UTC (permalink / raw)
To: Andrey Konovalov
Cc: linux-arm-kernel, Branislav Rankov, Catalin Marinas,
Kevin Brodsky, Will Deacon, linux-kernel, kasan-dev, linux-mm,
Alexander Potapenko, Evgenii Stepanov, Andrey Ryabinin,
Andrew Morton, Vincenzo Frascino, Peter Collingbourne,
Dmitry Vyukov
On Mon, Feb 01, 2021 at 08:43PM +0100, Andrey Konovalov wrote:
> This patch reworks KASAN-KUnit tests for krealloc() to:
>
> 1. Check both slab and page_alloc based krealloc() implementations.
> 2. Allow at least one full granule to fit between old and new sizes for
> each KASAN mode, and check accesses to that granule accordingly.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Marco Elver <elver@google.com>
> ---
> lib/test_kasan.c | 91 ++++++++++++++++++++++++++++++++++++++++++------
> 1 file changed, 81 insertions(+), 10 deletions(-)
>
> diff --git a/lib/test_kasan.c b/lib/test_kasan.c
> index 5699e43ca01b..2bb52853f341 100644
> --- a/lib/test_kasan.c
> +++ b/lib/test_kasan.c
> @@ -258,11 +258,14 @@ static void kmalloc_large_oob_right(struct kunit *test)
> kfree(ptr);
> }
>
> -static void kmalloc_oob_krealloc_more(struct kunit *test)
> +static void krealloc_more_oob_helper(struct kunit *test,
> + size_t size1, size_t size2)
> {
> char *ptr1, *ptr2;
> - size_t size1 = 17;
> - size_t size2 = 19;
> + size_t middle;
> +
> + KUNIT_ASSERT_LT(test, size1, size2);
> + middle = size1 + (size2 - size1) / 2;
>
> ptr1 = kmalloc(size1, GFP_KERNEL);
> KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr1);
> @@ -270,15 +273,31 @@ static void kmalloc_oob_krealloc_more(struct kunit *test)
> ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
> KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr2);
>
> - KUNIT_EXPECT_KASAN_FAIL(test, ptr2[size2 + OOB_TAG_OFF] = 'x');
> + /* All offsets up to size2 must be accessible. */
> + ptr2[size1 - 1] = 'x';
> + ptr2[size1] = 'x';
> + ptr2[middle] = 'x';
> + ptr2[size2 - 1] = 'x';
> +
> + /* Generic mode is precise, so unaligned size2 must be inaccessible. */
> + if (IS_ENABLED(CONFIG_KASAN_GENERIC))
> + KUNIT_EXPECT_KASAN_FAIL(test, ptr2[size2] = 'x');
> +
> + /* For all modes first aligned offset after size2 must be inaccessible. */
> + KUNIT_EXPECT_KASAN_FAIL(test,
> + ptr2[round_up(size2, KASAN_GRANULE_SIZE)] = 'x');
> +
> kfree(ptr2);
> }
>
> -static void kmalloc_oob_krealloc_less(struct kunit *test)
> +static void krealloc_less_oob_helper(struct kunit *test,
> + size_t size1, size_t size2)
> {
> char *ptr1, *ptr2;
> - size_t size1 = 17;
> - size_t size2 = 15;
> + size_t middle;
> +
> + KUNIT_ASSERT_LT(test, size2, size1);
> + middle = size2 + (size1 - size2) / 2;
>
> ptr1 = kmalloc(size1, GFP_KERNEL);
> KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr1);
> @@ -286,10 +305,60 @@ static void kmalloc_oob_krealloc_less(struct kunit *test)
> ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
> KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr2);
>
> - KUNIT_EXPECT_KASAN_FAIL(test, ptr2[size2 + OOB_TAG_OFF] = 'x');
> + /* Must be accessible for all modes. */
> + ptr2[size2 - 1] = 'x';
> +
> + /* Generic mode is precise, so unaligned size2 must be inaccessible. */
> + if (IS_ENABLED(CONFIG_KASAN_GENERIC))
> + KUNIT_EXPECT_KASAN_FAIL(test, ptr2[size2] = 'x');
> +
> + /* For all modes first aligned offset after size2 must be inaccessible. */
> + KUNIT_EXPECT_KASAN_FAIL(test,
> + ptr2[round_up(size2, KASAN_GRANULE_SIZE)] = 'x');
> +
> + /*
> + * For all modes both middle and size1 should land in separate granules
middle, size1, and size2?
> + * and thus be inaccessible.
> + */
> + KUNIT_EXPECT_LE(test, round_up(size2, KASAN_GRANULE_SIZE),
> + round_down(middle, KASAN_GRANULE_SIZE));
> + KUNIT_EXPECT_LE(test, round_up(middle, KASAN_GRANULE_SIZE),
> + round_down(size1, KASAN_GRANULE_SIZE));
> + KUNIT_EXPECT_KASAN_FAIL(test, ptr2[middle] = 'x');
> + KUNIT_EXPECT_KASAN_FAIL(test, ptr2[size1 - 1] = 'x');
> + KUNIT_EXPECT_KASAN_FAIL(test, ptr2[size1] = 'x');
> +
> kfree(ptr2);
> }
>
> +static void krealloc_more_oob(struct kunit *test)
> +{
> + krealloc_more_oob_helper(test, 201, 235);
> +}
> +
> +static void krealloc_less_oob(struct kunit *test)
> +{
> + krealloc_less_oob_helper(test, 235, 201);
> +}
> +
> +static void krealloc_pagealloc_more_oob(struct kunit *test)
> +{
> + /* page_alloc fallback is only implemented for SLUB. */
> + KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB);
> +
> + krealloc_more_oob_helper(test, KMALLOC_MAX_CACHE_SIZE + 201,
> + KMALLOC_MAX_CACHE_SIZE + 235);
> +}
> +
> +static void krealloc_pagealloc_less_oob(struct kunit *test)
> +{
> + /* page_alloc fallback is only implemented for SLUB. */
> + KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB);
> +
> + krealloc_less_oob_helper(test, KMALLOC_MAX_CACHE_SIZE + 235,
> + KMALLOC_MAX_CACHE_SIZE + 201);
> +}
> +
> static void kmalloc_oob_16(struct kunit *test)
> {
> struct {
> @@ -983,8 +1052,10 @@ static struct kunit_case kasan_kunit_test_cases[] = {
> KUNIT_CASE(pagealloc_oob_right),
> KUNIT_CASE(pagealloc_uaf),
> KUNIT_CASE(kmalloc_large_oob_right),
> - KUNIT_CASE(kmalloc_oob_krealloc_more),
> - KUNIT_CASE(kmalloc_oob_krealloc_less),
> + KUNIT_CASE(krealloc_more_oob),
> + KUNIT_CASE(krealloc_less_oob),
> + KUNIT_CASE(krealloc_pagealloc_more_oob),
> + KUNIT_CASE(krealloc_pagealloc_less_oob),
> KUNIT_CASE(kmalloc_oob_16),
> KUNIT_CASE(kmalloc_uaf_16),
> KUNIT_CASE(kmalloc_oob_in_memset),
> --
> 2.30.0.365.g02bc693789-goog
>
* Re: [PATCH 07/12] kasan, mm: remove krealloc side-effect
2021-02-01 19:43 ` [PATCH 07/12] kasan, mm: remove krealloc side-effect Andrey Konovalov
@ 2021-02-03 15:10 ` Marco Elver
0 siblings, 0 replies; 35+ messages in thread
From: Marco Elver @ 2021-02-03 15:10 UTC (permalink / raw)
To: Andrey Konovalov
Cc: linux-arm-kernel, Branislav Rankov, Catalin Marinas,
Kevin Brodsky, Will Deacon, linux-kernel, kasan-dev, linux-mm,
Alexander Potapenko, Evgenii Stepanov, Andrey Ryabinin,
Andrew Morton, Vincenzo Frascino, Peter Collingbourne,
Dmitry Vyukov
On Mon, Feb 01, 2021 at 08:43PM +0100, Andrey Konovalov wrote:
> Currently, if krealloc() is called on a freed object with KASAN enabled,
> it allocates and returns a new object, but doesn't copy any memory from
> the old one as ksize() returns 0. This makes a caller believe that
> krealloc() succeeded (a KASAN report is printed, though).
>
> This patch adds an accessibility check into __do_krealloc(). If the check
> fails, krealloc() returns NULL. This check duplicates the one in ksize();
> this is fixed in the following patch.
I think "side-effect" is ambiguous, because either way behaviour of
krealloc differs from a kernel with KASAN disabled. Something like
"kasan, mm: fail krealloc on already freed object" perhaps?
> This patch also adds a KASAN-KUnit test to check krealloc() behaviour
> when it's called on a freed object.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Marco Elver <elver@google.com>
> ---
> lib/test_kasan.c | 20 ++++++++++++++++++++
> mm/slab_common.c | 3 +++
> 2 files changed, 23 insertions(+)
>
> diff --git a/lib/test_kasan.c b/lib/test_kasan.c
> index 2bb52853f341..61bc894d9f7e 100644
> --- a/lib/test_kasan.c
> +++ b/lib/test_kasan.c
> @@ -359,6 +359,25 @@ static void krealloc_pagealloc_less_oob(struct kunit *test)
> KMALLOC_MAX_CACHE_SIZE + 201);
> }
>
> +/*
> + * Check that krealloc() detects a use-after-free, returns NULL,
> + * and doesn't unpoison the freed object.
> + */
> +static void krealloc_uaf(struct kunit *test)
> +{
> + char *ptr1, *ptr2;
> + int size1 = 201;
> + int size2 = 235;
> +
> + ptr1 = kmalloc(size1, GFP_KERNEL);
> + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr1);
> + kfree(ptr1);
> +
> + KUNIT_EXPECT_KASAN_FAIL(test, ptr2 = krealloc(ptr1, size2, GFP_KERNEL));
> + KUNIT_ASSERT_PTR_EQ(test, (void *)ptr2, NULL);
> + KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)ptr1);
> +}
> +
> static void kmalloc_oob_16(struct kunit *test)
> {
> struct {
> @@ -1056,6 +1075,7 @@ static struct kunit_case kasan_kunit_test_cases[] = {
> KUNIT_CASE(krealloc_less_oob),
> KUNIT_CASE(krealloc_pagealloc_more_oob),
> KUNIT_CASE(krealloc_pagealloc_less_oob),
> + KUNIT_CASE(krealloc_uaf),
> KUNIT_CASE(kmalloc_oob_16),
> KUNIT_CASE(kmalloc_uaf_16),
> KUNIT_CASE(kmalloc_oob_in_memset),
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index 39d1a8ff9bb8..dad70239b54c 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -1140,6 +1140,9 @@ static __always_inline void *__do_krealloc(const void *p, size_t new_size,
> void *ret;
> size_t ks;
>
> + if (likely(!ZERO_OR_NULL_PTR(p)) && !kasan_check_byte(p))
> + return NULL;
> +
> ks = ksize(p);
>
> if (ks >= new_size) {
> --
> 2.30.0.365.g02bc693789-goog
>
* Re: [PATCH 09/12] kasan: ensure poisoning size alignment
2021-02-01 19:43 ` [PATCH 09/12] kasan: ensure poisoning size alignment Andrey Konovalov
@ 2021-02-03 15:31 ` Marco Elver
0 siblings, 0 replies; 35+ messages in thread
From: Marco Elver @ 2021-02-03 15:31 UTC (permalink / raw)
To: Andrey Konovalov
Cc: linux-arm-kernel, Branislav Rankov, Catalin Marinas,
Kevin Brodsky, Will Deacon, linux-kernel, kasan-dev, linux-mm,
Alexander Potapenko, Evgenii Stepanov, Andrey Ryabinin,
Andrew Morton, Vincenzo Frascino, Peter Collingbourne,
Dmitry Vyukov
On Mon, Feb 01, 2021 at 08:43PM +0100, Andrey Konovalov wrote:
> A previous change, d99f6a10c161 ("kasan: don't round_up too much"),
> attempted to simplify the code by adding a round_up(size) call into
> kasan_poison(). While this allows having fewer round_up() calls around
> the code, this results in round_up() being called multiple times.
>
> This patch removes round_up() of size from kasan_poison() and ensures
> that all callers round_up() the size explicitly. This patch also adds
> WARN_ON() alignment checks for address and size to kasan_poison() and
> kasan_unpoison().
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Marco Elver <elver@google.com>
> ---
> mm/kasan/common.c | 9 ++++++---
> mm/kasan/kasan.h | 33 ++++++++++++++++++++-------------
> mm/kasan/shadow.c | 37 ++++++++++++++++++++++---------------
> 3 files changed, 48 insertions(+), 31 deletions(-)
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index a51d6ea580b0..5691cca69397 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -261,7 +261,8 @@ void __kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
>
> void __kasan_poison_object_data(struct kmem_cache *cache, void *object)
> {
> - kasan_poison(object, cache->object_size, KASAN_KMALLOC_REDZONE);
> + kasan_poison(object, round_up(cache->object_size, KASAN_GRANULE_SIZE),
> + KASAN_KMALLOC_REDZONE);
> }
>
> /*
> @@ -348,7 +349,8 @@ static bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
> return true;
> }
>
> - kasan_poison(object, cache->object_size, KASAN_KMALLOC_FREE);
> + kasan_poison(object, round_up(cache->object_size, KASAN_GRANULE_SIZE),
> + KASAN_KMALLOC_FREE);
>
> if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine))
> return false;
> @@ -490,7 +492,8 @@ static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
> /* Poison the aligned part of the redzone. */
> redzone_start = round_up((unsigned long)(object + size),
> KASAN_GRANULE_SIZE);
> - redzone_end = (unsigned long)object + cache->object_size;
> + redzone_end = round_up((unsigned long)(object + cache->object_size),
> + KASAN_GRANULE_SIZE);
> kasan_poison((void *)redzone_start, redzone_end - redzone_start,
> KASAN_KMALLOC_REDZONE);
>
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 6a2882997f23..2f7400a3412f 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -321,30 +321,37 @@ static inline u8 kasan_random_tag(void) { return 0; }
>
> #ifdef CONFIG_KASAN_HW_TAGS
>
> -static inline void kasan_poison(const void *address, size_t size, u8 value)
> +static inline void kasan_poison(const void *addr, size_t size, u8 value)
> {
> - address = kasan_reset_tag(address);
> + addr = kasan_reset_tag(addr);
>
> /* Skip KFENCE memory if called explicitly outside of sl*b. */
> - if (is_kfence_address(address))
> + if (is_kfence_address(addr))
> return;
>
> - hw_set_mem_tag_range((void *)address,
> - round_up(size, KASAN_GRANULE_SIZE), value);
> + if (WARN_ON((u64)addr & KASAN_GRANULE_MASK))
> + return;
> + if (WARN_ON(size & KASAN_GRANULE_MASK))
> + return;
> +
> + hw_set_mem_tag_range((void *)addr, size, value);
> }
>
> -static inline void kasan_unpoison(const void *address, size_t size)
> +static inline void kasan_unpoison(const void *addr, size_t size)
> {
> - u8 tag = get_tag(address);
> + u8 tag = get_tag(addr);
>
> - address = kasan_reset_tag(address);
> + addr = kasan_reset_tag(addr);
>
> /* Skip KFENCE memory if called explicitly outside of sl*b. */
> - if (is_kfence_address(address))
> + if (is_kfence_address(addr))
> return;
>
> - hw_set_mem_tag_range((void *)address,
> - round_up(size, KASAN_GRANULE_SIZE), tag);
> + if (WARN_ON((u64)addr & KASAN_GRANULE_MASK))
> + return;
> + size = round_up(size, KASAN_GRANULE_SIZE);
> +
> + hw_set_mem_tag_range((void *)addr, size, tag);
> }
>
> static inline bool kasan_byte_accessible(const void *addr)
> @@ -361,7 +368,7 @@ static inline bool kasan_byte_accessible(const void *addr)
> /**
> * kasan_poison - mark the memory range as unaccessible
> * @addr - range start address, must be aligned to KASAN_GRANULE_SIZE
> - * @size - range size
> + * @size - range size, must be aligned to KASAN_GRANULE_SIZE
> * @value - value that's written to metadata for the range
> *
> * The size gets aligned to KASAN_GRANULE_SIZE before marking the range.
> @@ -371,7 +378,7 @@ void kasan_poison(const void *addr, size_t size, u8 value);
> /**
> * kasan_unpoison - mark the memory range as accessible
> * @addr - range start address, must be aligned to KASAN_GRANULE_SIZE
> - * @size - range size
> + * @size - range size, can be unaligned
> *
> * For the tag-based modes, the @size gets aligned to KASAN_GRANULE_SIZE before
> * marking the range.
> diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
> index 1ed7817e4ee6..c97f51c557ea 100644
> --- a/mm/kasan/shadow.c
> +++ b/mm/kasan/shadow.c
> @@ -69,7 +69,7 @@ void *memcpy(void *dest, const void *src, size_t len)
> return __memcpy(dest, src, len);
> }
>
> -void kasan_poison(const void *address, size_t size, u8 value)
> +void kasan_poison(const void *addr, size_t size, u8 value)
> {
> void *shadow_start, *shadow_end;
>
> @@ -78,55 +78,62 @@ void kasan_poison(const void *address, size_t size, u8 value)
> * some of the callers (e.g. kasan_poison_object_data) pass tagged
> * addresses to this function.
> */
> - address = kasan_reset_tag(address);
> + addr = kasan_reset_tag(addr);
>
> /* Skip KFENCE memory if called explicitly outside of sl*b. */
> - if (is_kfence_address(address))
> + if (is_kfence_address(addr))
> return;
>
> - size = round_up(size, KASAN_GRANULE_SIZE);
> - shadow_start = kasan_mem_to_shadow(address);
> - shadow_end = kasan_mem_to_shadow(address + size);
> + if (WARN_ON((u64)addr & KASAN_GRANULE_MASK))
> + return;
> + if (WARN_ON(size & KASAN_GRANULE_MASK))
> + return;
> +
> + shadow_start = kasan_mem_to_shadow(addr);
> + shadow_end = kasan_mem_to_shadow(addr + size);
>
> __memset(shadow_start, value, shadow_end - shadow_start);
> }
> EXPORT_SYMBOL(kasan_poison);
>
> #ifdef CONFIG_KASAN_GENERIC
> -void kasan_poison_last_granule(const void *address, size_t size)
> +void kasan_poison_last_granule(const void *addr, size_t size)
> {
> if (size & KASAN_GRANULE_MASK) {
> - u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size);
> + u8 *shadow = (u8 *)kasan_mem_to_shadow(addr + size);
> *shadow = size & KASAN_GRANULE_MASK;
> }
> }
> #endif
>
> -void kasan_unpoison(const void *address, size_t size)
> +void kasan_unpoison(const void *addr, size_t size)
> {
> - u8 tag = get_tag(address);
> + u8 tag = get_tag(addr);
>
> /*
> * Perform shadow offset calculation based on untagged address, as
> * some of the callers (e.g. kasan_unpoison_object_data) pass tagged
> * addresses to this function.
> */
> - address = kasan_reset_tag(address);
> + addr = kasan_reset_tag(addr);
>
> /*
> * Skip KFENCE memory if called explicitly outside of sl*b. Also note
> * that calls to ksize(), where size is not a multiple of machine-word
> * size, would otherwise poison the invalid portion of the word.
> */
> - if (is_kfence_address(address))
> + if (is_kfence_address(addr))
> + return;
> +
> + if (WARN_ON((u64)addr & KASAN_GRANULE_MASK))
> return;
>
> - /* Unpoison round_up(size, KASAN_GRANULE_SIZE) bytes. */
> - kasan_poison(address, size, tag);
> + /* Unpoison all granules that cover the object. */
> + kasan_poison(addr, round_up(size, KASAN_GRANULE_SIZE), tag);
>
> /* Partially poison the last granule for the generic mode. */
> if (IS_ENABLED(CONFIG_KASAN_GENERIC))
> - kasan_poison_last_granule(address, size);
> + kasan_poison_last_granule(addr, size);
> }
>
> #ifdef CONFIG_MEMORY_HOTPLUG
> --
> 2.30.0.365.g02bc693789-goog
>
* Re: [PATCH 11/12] kasan: always inline HW_TAGS helper functions
2021-02-01 19:43 ` [PATCH 11/12] kasan: always inline HW_TAGS helper functions Andrey Konovalov
@ 2021-02-03 15:51 ` Marco Elver
0 siblings, 0 replies; 35+ messages in thread
From: Marco Elver @ 2021-02-03 15:51 UTC (permalink / raw)
To: Andrey Konovalov
Cc: linux-arm-kernel, Branislav Rankov, Catalin Marinas,
Kevin Brodsky, Will Deacon, linux-kernel, kasan-dev, linux-mm,
Alexander Potapenko, Evgenii Stepanov, Andrey Ryabinin,
Andrew Morton, Vincenzo Frascino, Peter Collingbourne,
Dmitry Vyukov
On Mon, Feb 01, 2021 at 08:43PM +0100, Andrey Konovalov wrote:
> Mark all static functions in common.c and kasan.h that are used for
> hardware tag-based KASAN as __always_inline to avoid unnecessary
> function calls.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Does objtool complain about any of these?
I'm not sure this is unconditionally a good idea. If there isn't a
quantifiable performance bug or case where we cannot call a function,
perhaps we can just let the compiler decide?
More comments below.
> ---
> mm/kasan/common.c | 13 +++++++------
> mm/kasan/kasan.h | 6 +++---
> 2 files changed, 10 insertions(+), 9 deletions(-)
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 5691cca69397..2004ecd6e43c 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -279,7 +279,8 @@ void __kasan_poison_object_data(struct kmem_cache *cache, void *object)
> * based on objects indexes, so that objects that are next to each other
> * get different tags.
> */
> -static u8 assign_tag(struct kmem_cache *cache, const void *object, bool init)
> +static __always_inline u8 assign_tag(struct kmem_cache *cache,
> + const void *object, bool init)
This function might be small enough that it's fine.
> {
> if (IS_ENABLED(CONFIG_KASAN_GENERIC))
> return 0xff;
> @@ -321,8 +322,8 @@ void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
> return (void *)object;
> }
>
> -static bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
> - unsigned long ip, bool quarantine)
> +static __always_inline bool ____kasan_slab_free(struct kmem_cache *cache,
> + void *object, unsigned long ip, bool quarantine)
> {
Because ____kasan_slab_free() is tail-called by __kasan_slab_free() and
__kasan_slab_free_mempool(), there should never be a call (and if there
is, we need to figure out why). The additional code bloat and I-cache
pressure might be worse than just a jump. I'd let the compiler decide.
> u8 tag;
> void *tagged_object;
> @@ -366,7 +367,7 @@ bool __kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
> return ____kasan_slab_free(cache, object, ip, true);
> }
>
> -static bool ____kasan_kfree_large(void *ptr, unsigned long ip)
> +static __always_inline bool ____kasan_kfree_large(void *ptr, unsigned long ip)
> {
This one is tail-called by __kasan_kfree_large(). The usage in
__kasan_slab_free_mempool() is in an unlikely branch.
> if (ptr != page_address(virt_to_head_page(ptr))) {
> kasan_report_invalid_free(ptr, ip);
> @@ -461,8 +462,8 @@ void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
> return tagged_object;
> }
>
> -static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
> - size_t size, gfp_t flags)
> +static __always_inline void *____kasan_kmalloc(struct kmem_cache *cache,
> + const void *object, size_t size, gfp_t flags)
> {
Also only tail-called.
> unsigned long redzone_start;
> unsigned long redzone_end;
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 2f7400a3412f..d5fe72747a53 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -321,7 +321,7 @@ static inline u8 kasan_random_tag(void) { return 0; }
>
> #ifdef CONFIG_KASAN_HW_TAGS
>
> -static inline void kasan_poison(const void *addr, size_t size, u8 value)
> +static __always_inline void kasan_poison(const void *addr, size_t size, u8 value)
> {
> addr = kasan_reset_tag(addr);
>
> @@ -337,7 +337,7 @@ static inline void kasan_poison(const void *addr, size_t size, u8 value)
> hw_set_mem_tag_range((void *)addr, size, value);
> }
>
> -static inline void kasan_unpoison(const void *addr, size_t size)
> +static __always_inline void kasan_unpoison(const void *addr, size_t size)
> {
Not sure about these 2. They should be small, but it's hard to say what
is ideal on which architecture.
> u8 tag = get_tag(addr);
>
> @@ -354,7 +354,7 @@ static inline void kasan_unpoison(const void *addr, size_t size)
> hw_set_mem_tag_range((void *)addr, size, tag);
> }
>
> -static inline bool kasan_byte_accessible(const void *addr)
> +static __always_inline bool kasan_byte_accessible(const void *addr)
This function feels like a macro, and if the compiler uninlined it, we
could argue it's a bug. But I'm not sure we need the __always_inline
unless you've seen this uninlined.
> {
> u8 ptr_tag = get_tag(addr);
> u8 mem_tag = hw_get_mem_tag((void *)addr);
> --
> 2.30.0.365.g02bc693789-goog
>
* Re: [PATCH 10/12] arm64: kasan: simplify and inline MTE functions
2021-02-01 19:43 ` [PATCH 10/12] arm64: kasan: simplify and inline MTE functions Andrey Konovalov
2021-02-01 22:44 ` Andrew Morton
2021-02-02 15:42 ` Catalin Marinas
@ 2021-02-04 12:37 ` Vincenzo Frascino
2 siblings, 0 replies; 35+ messages in thread
From: Vincenzo Frascino @ 2021-02-04 12:37 UTC (permalink / raw)
To: Andrey Konovalov, Catalin Marinas, Dmitry Vyukov,
Alexander Potapenko, Marco Elver
Cc: Branislav Rankov, Kevin Brodsky, Will Deacon, linux-kernel,
kasan-dev, linux-mm, linux-arm-kernel, Andrey Ryabinin,
Andrew Morton, Peter Collingbourne, Evgenii Stepanov
On 2/1/21 7:43 PM, Andrey Konovalov wrote:
> This change provides a simpler implementation of mte_get_mem_tag(),
> mte_get_random_tag(), and mte_set_mem_tag_range().
>
> Simplifications include removing system_supports_mte() checks as these
> functions are only called from the KASAN runtime, which has already checked
> system_supports_mte(). Besides that, size and address alignment checks
> are removed from mte_set_mem_tag_range(), as KASAN now does those.
>
> This change also moves these functions into the asm/mte-kasan.h header
> and implements mte_set_mem_tag_range() via inline assembly to avoid
> unnecessary function calls.
>
> Co-developed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
> ---
> arch/arm64/include/asm/cache.h | 1 -
> arch/arm64/include/asm/kasan.h | 1 +
> arch/arm64/include/asm/mte-def.h | 2 +
> arch/arm64/include/asm/mte-kasan.h | 64 ++++++++++++++++++++++++++----
> arch/arm64/include/asm/mte.h | 2 -
> arch/arm64/kernel/mte.c | 46 ---------------------
> arch/arm64/lib/mte.S | 16 --------
> 7 files changed, 60 insertions(+), 72 deletions(-)
>
> diff --git a/arch/arm64/include/asm/cache.h b/arch/arm64/include/asm/cache.h
> index 77cbbe3625f2..a074459f8f2f 100644
> --- a/arch/arm64/include/asm/cache.h
> +++ b/arch/arm64/include/asm/cache.h
> @@ -6,7 +6,6 @@
> #define __ASM_CACHE_H
>
> #include <asm/cputype.h>
> -#include <asm/mte-kasan.h>
>
> #define CTR_L1IP_SHIFT 14
> #define CTR_L1IP_MASK 3
> diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h
> index 0aaf9044cd6a..12d5f47f7dbe 100644
> --- a/arch/arm64/include/asm/kasan.h
> +++ b/arch/arm64/include/asm/kasan.h
> @@ -6,6 +6,7 @@
>
> #include <linux/linkage.h>
> #include <asm/memory.h>
> +#include <asm/mte-kasan.h>
> #include <asm/pgtable-types.h>
>
> #define arch_kasan_set_tag(addr, tag) __tag_set(addr, tag)
> diff --git a/arch/arm64/include/asm/mte-def.h b/arch/arm64/include/asm/mte-def.h
> index 2d73a1612f09..cf241b0f0a42 100644
> --- a/arch/arm64/include/asm/mte-def.h
> +++ b/arch/arm64/include/asm/mte-def.h
> @@ -11,4 +11,6 @@
> #define MTE_TAG_SIZE 4
> #define MTE_TAG_MASK GENMASK((MTE_TAG_SHIFT + (MTE_TAG_SIZE - 1)), MTE_TAG_SHIFT)
>
> +#define __MTE_PREAMBLE ARM64_ASM_PREAMBLE ".arch_extension memtag\n"
> +
> #endif /* __ASM_MTE_DEF_H */
> diff --git a/arch/arm64/include/asm/mte-kasan.h b/arch/arm64/include/asm/mte-kasan.h
> index 8ad981069afb..1f090beda7e6 100644
> --- a/arch/arm64/include/asm/mte-kasan.h
> +++ b/arch/arm64/include/asm/mte-kasan.h
> @@ -11,13 +11,16 @@
>
> #include <linux/types.h>
>
> +#ifdef CONFIG_ARM64_MTE
> +
> /*
> - * The functions below are meant to be used only for the
> - * KASAN_HW_TAGS interface defined in asm/memory.h.
> + * These functions are meant to be only used from KASAN runtime through
> + * the arch_*() interface defined in asm/memory.h.
> + * These functions don't include system_supports_mte() checks,
> + * as KASAN only calls them when MTE is supported and enabled.
> */
> -#ifdef CONFIG_ARM64_MTE
>
> -static inline u8 mte_get_ptr_tag(void *ptr)
> +static __always_inline u8 mte_get_ptr_tag(void *ptr)
> {
> /* Note: The format of KASAN tags is 0xF<x> */
> u8 tag = 0xF0 | (u8)(((u64)(ptr)) >> MTE_TAG_SHIFT);
> @@ -25,9 +28,54 @@ static inline u8 mte_get_ptr_tag(void *ptr)
> return tag;
> }
>
> -u8 mte_get_mem_tag(void *addr);
> -u8 mte_get_random_tag(void);
> -void *mte_set_mem_tag_range(void *addr, size_t size, u8 tag);
> +/* Get allocation tag for the address. */
> +static __always_inline u8 mte_get_mem_tag(void *addr)
> +{
> + asm(__MTE_PREAMBLE "ldg %0, [%0]"
> + : "+r" (addr));
> +
> + return mte_get_ptr_tag(addr);
> +}
> +
> +/* Generate a random tag. */
> +static __always_inline u8 mte_get_random_tag(void)
> +{
> + void *addr;
> +
> + asm(__MTE_PREAMBLE "irg %0, %0"
> + : "+r" (addr));
> +
> + return mte_get_ptr_tag(addr);
> +}
> +
> +/*
> + * Assign allocation tags for a region of memory based on the pointer tag.
> + * Note: The address must be non-NULL and MTE_GRANULE_SIZE aligned and
> + * size must be non-zero and MTE_GRANULE_SIZE aligned.
> + */
> +static __always_inline void mte_set_mem_tag_range(void *addr, size_t size, u8 tag)
> +{
> + u64 curr, end;
> +
> + if (!size)
> + return;
> +
> + curr = (u64)__tag_set(addr, tag);
> + end = curr + size;
> +
> + do {
> + /*
> + * 'asm volatile' is required to prevent the compiler to move
> + * the statement outside of the loop.
> + */
> + asm volatile(__MTE_PREAMBLE "stg %0, [%0]"
> + :
> + : "r" (curr)
> + : "memory");
> +
> + curr += MTE_GRANULE_SIZE;
> + } while (curr != end);
> +}
>
> void mte_enable_kernel_sync(void);
> void mte_enable_kernel_async(void);
> @@ -47,10 +95,12 @@ static inline u8 mte_get_mem_tag(void *addr)
> {
> return 0xFF;
> }
> +
> static inline u8 mte_get_random_tag(void)
> {
> return 0xFF;
> }
> +
> static inline void *mte_set_mem_tag_range(void *addr, size_t size, u8 tag)
> {
> return addr;
> diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
> index 237bb2f7309d..43169b978cd3 100644
> --- a/arch/arm64/include/asm/mte.h
> +++ b/arch/arm64/include/asm/mte.h
> @@ -8,8 +8,6 @@
> #include <asm/compiler.h>
> #include <asm/mte-def.h>
>
> -#define __MTE_PREAMBLE ARM64_ASM_PREAMBLE ".arch_extension memtag\n"
> -
> #ifndef __ASSEMBLY__
>
> #include <linux/bitfield.h>
> diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> index 7763ac1f2917..8b27b70e1aac 100644
> --- a/arch/arm64/kernel/mte.c
> +++ b/arch/arm64/kernel/mte.c
> @@ -19,7 +19,6 @@
> #include <asm/barrier.h>
> #include <asm/cpufeature.h>
> #include <asm/mte.h>
> -#include <asm/mte-kasan.h>
> #include <asm/ptrace.h>
> #include <asm/sysreg.h>
>
> @@ -88,51 +87,6 @@ int memcmp_pages(struct page *page1, struct page *page2)
> return ret;
> }
>
> -u8 mte_get_mem_tag(void *addr)
> -{
> - if (!system_supports_mte())
> - return 0xFF;
> -
> - asm(__MTE_PREAMBLE "ldg %0, [%0]"
> - : "+r" (addr));
> -
> - return mte_get_ptr_tag(addr);
> -}
> -
> -u8 mte_get_random_tag(void)
> -{
> - void *addr;
> -
> - if (!system_supports_mte())
> - return 0xFF;
> -
> - asm(__MTE_PREAMBLE "irg %0, %0"
> - : "+r" (addr));
> -
> - return mte_get_ptr_tag(addr);
> -}
> -
> -void *mte_set_mem_tag_range(void *addr, size_t size, u8 tag)
> -{
> - void *ptr = addr;
> -
> - if ((!system_supports_mte()) || (size == 0))
> - return addr;
> -
> - /* Make sure that size is MTE granule aligned. */
> - WARN_ON(size & (MTE_GRANULE_SIZE - 1));
> -
> - /* Make sure that the address is MTE granule aligned. */
> - WARN_ON((u64)addr & (MTE_GRANULE_SIZE - 1));
> -
> - tag = 0xF0 | tag;
> - ptr = (void *)__tag_set(ptr, tag);
> -
> - mte_assign_mem_tag_range(ptr, size);
> -
> - return ptr;
> -}
> -
> void mte_init_tags(u64 max_tag)
> {
> static bool gcr_kernel_excl_initialized;
> diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
> index 9e1a12e10053..351537c12f36 100644
> --- a/arch/arm64/lib/mte.S
> +++ b/arch/arm64/lib/mte.S
> @@ -149,19 +149,3 @@ SYM_FUNC_START(mte_restore_page_tags)
>
> ret
> SYM_FUNC_END(mte_restore_page_tags)
> -
> -/*
> - * Assign allocation tags for a region of memory based on the pointer tag
> - * x0 - source pointer
> - * x1 - size
> - *
> - * Note: The address must be non-NULL and MTE_GRANULE_SIZE aligned and
> - * size must be non-zero and MTE_GRANULE_SIZE aligned.
> - */
> -SYM_FUNC_START(mte_assign_mem_tag_range)
> -1: stg x0, [x0]
> - add x0, x0, #MTE_GRANULE_SIZE
> - subs x1, x1, #MTE_GRANULE_SIZE
> - b.gt 1b
> - ret
> -SYM_FUNC_END(mte_assign_mem_tag_range)
>
--
Regards,
Vincenzo
* Re: [PATCH 10/12] arm64: kasan: simplify and inline MTE functions
2021-02-01 22:44 ` Andrew Morton
@ 2021-02-04 12:39 ` Vincenzo Frascino
0 siblings, 0 replies; 35+ messages in thread
From: Vincenzo Frascino @ 2021-02-04 12:39 UTC (permalink / raw)
To: Andrew Morton, Andrey Konovalov
Cc: linux-arm-kernel, Marco Elver, Catalin Marinas, Kevin Brodsky,
Will Deacon, Branislav Rankov, kasan-dev, linux-kernel, linux-mm,
Alexander Potapenko, Evgenii Stepanov, Andrey Ryabinin,
Peter Collingbourne, Dmitry Vyukov
Hi Andrew,
On 2/1/21 10:44 PM, Andrew Morton wrote:
> On Mon, 1 Feb 2021 20:43:34 +0100 Andrey Konovalov <andreyknvl@google.com> wrote:
>
>> This change provides a simpler implementation of mte_get_mem_tag(),
>> mte_get_random_tag(), and mte_set_mem_tag_range().
>>
>> Simplifications include removing system_supports_mte() checks as these
>> functions are only called from the KASAN runtime, which has already checked
>> system_supports_mte(). Besides that, size and address alignment checks
>> are removed from mte_set_mem_tag_range(), as KASAN now does those.
>>
>> This change also moves these functions into the asm/mte-kasan.h header
>> and implements mte_set_mem_tag_range() via inline assembly to avoid
>> unnecessary function calls.
>>
>> Co-developed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
>> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
>
> Co-developed-by requires a Signed-off-by: as well. Vincenzo, please
> send us one?
>
>
I added my Signed-off-by to the patch.
--
Regards,
Vincenzo
end of thread
Thread overview: 35+ messages
2021-02-01 19:43 [PATCH 00/12] kasan: optimizations and fixes for HW_TAGS Andrey Konovalov
2021-02-01 19:43 ` [PATCH 01/12] kasan, mm: don't save alloc stacks twice Andrey Konovalov
2021-02-02 16:06 ` Marco Elver
2021-02-02 18:01 ` Andrey Konovalov
2021-02-02 18:40 ` Marco Elver
2021-02-01 19:43 ` [PATCH 02/12] kasan, mm: optimize kmalloc poisoning Andrey Konovalov
2021-02-02 16:25 ` Marco Elver
2021-02-02 17:15 ` Andrey Konovalov
2021-02-02 17:39 ` Marco Elver
2021-02-01 19:43 ` [PATCH 03/12] kasan: optimize large " Andrey Konovalov
2021-02-02 16:57 ` Marco Elver
2021-02-01 19:43 ` [PATCH 04/12] kasan: clean up setting free info in kasan_slab_free Andrey Konovalov
2021-02-02 17:03 ` Marco Elver
2021-02-01 19:43 ` [PATCH 05/12] kasan: unify large kfree checks Andrey Konovalov
2021-02-03 12:13 ` Marco Elver
2021-02-01 19:43 ` [PATCH 06/12] kasan: rework krealloc tests Andrey Konovalov
2021-02-03 14:48 ` Marco Elver
2021-02-01 19:43 ` [PATCH 07/12] kasan, mm: remove krealloc side-effect Andrey Konovalov
2021-02-03 15:10 ` Marco Elver
2021-02-01 19:43 ` [PATCH 08/12] kasan, mm: optimize krealloc poisoning Andrey Konovalov
2021-02-03 14:34 ` Marco Elver
2021-02-01 19:43 ` [PATCH 09/12] kasan: ensure poisoning size alignment Andrey Konovalov
2021-02-03 15:31 ` Marco Elver
2021-02-01 19:43 ` [PATCH 10/12] arm64: kasan: simplify and inline MTE functions Andrey Konovalov
2021-02-01 22:44 ` Andrew Morton
2021-02-04 12:39 ` Vincenzo Frascino
2021-02-02 15:42 ` Catalin Marinas
2021-02-02 18:04 ` Andrey Konovalov
2021-02-04 12:37 ` Vincenzo Frascino
2021-02-01 19:43 ` [PATCH 11/12] kasan: always inline HW_TAGS helper functions Andrey Konovalov
2021-02-03 15:51 ` Marco Elver
2021-02-01 19:43 ` [PATCH 12/12] arm64: kasan: export MTE symbols for KASAN tests Andrey Konovalov
2021-02-02 10:46 ` Will Deacon
2021-02-02 13:42 ` Andrey Konovalov
2021-02-02 15:43 ` Catalin Marinas