linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/5] kasan: more tag based mode fixes
@ 2019-02-11 21:59 Andrey Konovalov
  2019-02-11 21:59 ` [PATCH 1/5] kasan: fix assigning tags twice Andrey Konovalov
                   ` (4 more replies)
  0 siblings, 5 replies; 17+ messages in thread
From: Andrey Konovalov @ 2019-02-11 21:59 UTC (permalink / raw)
  To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, kasan-dev, linux-mm, linux-kernel
  Cc: Qian Cai, Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov,
	Andrey Konovalov

Andrey Konovalov (5):
  kasan: fix assigning tags twice
  kasan, kmemleak: pass tagged pointers to kmemleak
  kmemleak: account for tagged pointers when calculating pointer range
  kasan, slub: move kasan_poison_slab hook before page_address
  kasan, slub: fix conflicts with CONFIG_SLAB_FREELIST_HARDENED

 mm/kasan/common.c | 29 +++++++++++++++++------------
 mm/kmemleak.c     | 10 +++++++---
 mm/slab.h         |  6 ++----
 mm/slab_common.c  |  2 +-
 mm/slub.c         | 32 +++++++++++++++-----------------
 5 files changed, 42 insertions(+), 37 deletions(-)

-- 
2.20.1.791.gb4d0f1c61a-goog



* [PATCH 1/5] kasan: fix assigning tags twice
  2019-02-11 21:59 [PATCH 0/5] kasan: more tag based mode fixes Andrey Konovalov
@ 2019-02-11 21:59 ` Andrey Konovalov
  2019-02-11 21:59 ` [PATCH 2/5] kasan, kmemleak: pass tagged pointers to kmemleak Andrey Konovalov
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 17+ messages in thread
From: Andrey Konovalov @ 2019-02-11 21:59 UTC (permalink / raw)
  To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, kasan-dev, linux-mm, linux-kernel
  Cc: Qian Cai, Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov,
	Andrey Konovalov

When an object is kmalloc()'ed, two hooks are called: kasan_slab_alloc()
and kasan_kmalloc(). Right now we assign a tag twice, once in each of
the hooks. Fix it by assigning a tag only in the former hook.
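
For illustration, a minimal user-space sketch of the top-byte tagging
scheme and of the double assignment being fixed. set_tag()/get_tag()
below are hypothetical stand-ins for the kernel helpers of the same
names in mm/kasan/kasan.h, and the sketch assumes a 64-bit build:

  #include <stdint.h>
  #include <stdio.h>

  #define TAG_SHIFT 56

  /* Stand-in for the kernel's set_tag(): store a tag in the top byte. */
  static void *set_tag(void *ptr, uint8_t tag)
  {
          uintptr_t p = (uintptr_t)ptr & ~(0xffULL << TAG_SHIFT);

          return (void *)(p | ((uintptr_t)tag << TAG_SHIFT));
  }

  /* Stand-in for the kernel's get_tag(): read the top byte back. */
  static uint8_t get_tag(const void *ptr)
  {
          return (uintptr_t)ptr >> TAG_SHIFT;
  }

  int main(void)
  {
          void *obj = (void *)0x1000;  /* pretend slab object */

          /* Before the fix: kasan_slab_alloc() and kasan_kmalloc() each
           * assigned a fresh tag, so the first assignment was wasted. */
          void *first = set_tag(obj, 0xab);
          void *second = set_tag(first, 0xcd);

          /* After the fix: the second hook runs with keep_tag set and
           * simply reuses the tag already on the pointer. */
          uint8_t kept = get_tag(first);

          printf("%#x %#x %#x\n", get_tag(first), get_tag(second), kept);
          return 0;
  }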

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/common.c | 29 +++++++++++++++++------------
 1 file changed, 17 insertions(+), 12 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 73c9cbfdedf4..09b534fbba17 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -361,10 +361,15 @@ void kasan_poison_object_data(struct kmem_cache *cache, void *object)
  *    get different tags.
  */
 static u8 assign_tag(struct kmem_cache *cache, const void *object,
-			bool init, bool krealloc)
+			bool init, bool keep_tag)
 {
-	/* Reuse the same tag for krealloc'ed objects. */
-	if (krealloc)
+	/*
+	 * 1. When an object is kmalloc()'ed, two hooks are called:
+	 *    kasan_slab_alloc() and kasan_kmalloc(). We assign the
+	 *    tag only in the first one.
+	 * 2. We reuse the same tag for krealloc'ed objects.
+	 */
+	if (keep_tag)
 		return get_tag(object);
 
 	/*
@@ -405,12 +410,6 @@ void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
 	return (void *)object;
 }
 
-void * __must_check kasan_slab_alloc(struct kmem_cache *cache, void *object,
-					gfp_t flags)
-{
-	return kasan_kmalloc(cache, object, cache->object_size, flags);
-}
-
 static inline bool shadow_invalid(u8 tag, s8 shadow_byte)
 {
 	if (IS_ENABLED(CONFIG_KASAN_GENERIC))
@@ -467,7 +466,7 @@ bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
 }
 
 static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
-				size_t size, gfp_t flags, bool krealloc)
+				size_t size, gfp_t flags, bool keep_tag)
 {
 	unsigned long redzone_start;
 	unsigned long redzone_end;
@@ -485,7 +484,7 @@ static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
 				KASAN_SHADOW_SCALE_SIZE);
 
 	if (IS_ENABLED(CONFIG_KASAN_SW_TAGS))
-		tag = assign_tag(cache, object, false, krealloc);
+		tag = assign_tag(cache, object, false, keep_tag);
 
 	/* Tag is ignored in set_tag without CONFIG_KASAN_SW_TAGS */
 	kasan_unpoison_shadow(set_tag(object, tag), size);
@@ -498,10 +497,16 @@ static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
 	return set_tag(object, tag);
 }
 
+void * __must_check kasan_slab_alloc(struct kmem_cache *cache, void *object,
+					gfp_t flags)
+{
+	return __kasan_kmalloc(cache, object, cache->object_size, flags, false);
+}
+
 void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object,
 				size_t size, gfp_t flags)
 {
-	return __kasan_kmalloc(cache, object, size, flags, false);
+	return __kasan_kmalloc(cache, object, size, flags, true);
 }
 EXPORT_SYMBOL(kasan_kmalloc);
 
-- 
2.20.1.791.gb4d0f1c61a-goog



* [PATCH 2/5] kasan, kmemleak: pass tagged pointers to kmemleak
  2019-02-11 21:59 [PATCH 0/5] kasan: more tag based mode fixes Andrey Konovalov
  2019-02-11 21:59 ` [PATCH 1/5] kasan: fix assigning tags twice Andrey Konovalov
@ 2019-02-11 21:59 ` Andrey Konovalov
  2019-02-12 15:56   ` Vincenzo Frascino
  2019-02-11 21:59 ` [PATCH 3/5] kmemleak: account for tagged pointers when calculating pointer range Andrey Konovalov
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 17+ messages in thread
From: Andrey Konovalov @ 2019-02-11 21:59 UTC (permalink / raw)
  To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, kasan-dev, linux-mm, linux-kernel
  Cc: Qian Cai, Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov,
	Andrey Konovalov

Right now we call kmemleak hooks before assigning tags to pointers in
KASAN hooks. As a result, when an object gets allocated, kmemleak sees
a differently tagged pointer, compared to the one it sees when the object
gets freed. Fix it by calling KASAN hooks before kmemleak's ones.
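
To see why the order matters, here is a small user-space analogy (the
fake_* helpers are illustrative only; the real kmemleak tracks objects
in an rbtree keyed by pointer value, so the alloc and free paths must
see the same, i.e. tagged, pointer; assumes a 64-bit build):

  #include <stdint.h>
  #include <stdio.h>

  #define WITH_TAG(p, t) ((void *)((uintptr_t)(p) | ((uintptr_t)(t) << 56)))

  static void *tracked;  /* one-slot stand-in for kmemleak's object tree */

  static void fake_kmemleak_alloc(void *ptr) { tracked = ptr; }
  static int fake_kmemleak_free(void *ptr) { return ptr == tracked; }

  int main(void)
  {
          void *obj = (void *)0x1000;
          void *tagged = WITH_TAG(obj, 0x2a);

          /* Old order: kmemleak records the pointer before KASAN tags
           * it, but the free path later hands it the tagged pointer. */
          fake_kmemleak_alloc(obj);
          printf("old order, lookup at free: %d\n", fake_kmemleak_free(tagged));

          /* New order: KASAN tags first, so both paths agree. */
          fake_kmemleak_alloc(tagged);
          printf("new order, lookup at free: %d\n", fake_kmemleak_free(tagged));
          return 0;
  }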

Reported-by: Qian Cai <cai@lca.pw>
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/slab.h        | 6 ++----
 mm/slab_common.c | 2 +-
 mm/slub.c        | 3 ++-
 3 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index 4190c24ef0e9..638ea1b25d39 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -437,11 +437,9 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags,
 
 	flags &= gfp_allowed_mask;
 	for (i = 0; i < size; i++) {
-		void *object = p[i];
-
-		kmemleak_alloc_recursive(object, s->object_size, 1,
+		p[i] = kasan_slab_alloc(s, p[i], flags);
+		kmemleak_alloc_recursive(p[i], s->object_size, 1,
 					 s->flags, flags);
-		p[i] = kasan_slab_alloc(s, object, flags);
 	}
 
 	if (memcg_kmem_enabled())
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 81732d05e74a..fe524c8d0246 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1228,8 +1228,8 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
 	flags |= __GFP_COMP;
 	page = alloc_pages(flags, order);
 	ret = page ? page_address(page) : NULL;
-	kmemleak_alloc(ret, size, 1, flags);
 	ret = kasan_kmalloc_large(ret, size, flags);
+	kmemleak_alloc(ret, size, 1, flags);
 	return ret;
 }
 EXPORT_SYMBOL(kmalloc_order);
diff --git a/mm/slub.c b/mm/slub.c
index 1e3d0ec4e200..4a3d7686902f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1374,8 +1374,9 @@ static inline void dec_slabs_node(struct kmem_cache *s, int node,
  */
 static inline void *kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
 {
+	ptr = kasan_kmalloc_large(ptr, size, flags);
 	kmemleak_alloc(ptr, size, 1, flags);
-	return kasan_kmalloc_large(ptr, size, flags);
+	return ptr;
 }
 
 static __always_inline void kfree_hook(void *x)
-- 
2.20.1.791.gb4d0f1c61a-goog



* [PATCH 3/5] kmemleak: account for tagged pointers when calculating pointer range
  2019-02-11 21:59 [PATCH 0/5] kasan: more tag based mode fixes Andrey Konovalov
  2019-02-11 21:59 ` [PATCH 1/5] kasan: fix assigning tags twice Andrey Konovalov
  2019-02-11 21:59 ` [PATCH 2/5] kasan, kmemleak: pass tagged pointers to kmemleak Andrey Konovalov
@ 2019-02-11 21:59 ` Andrey Konovalov
  2019-02-15 14:05   ` Catalin Marinas
  2019-02-11 21:59 ` [PATCH 4/5] kasan, slub: move kasan_poison_slab hook before page_address Andrey Konovalov
  2019-02-11 21:59 ` [PATCH 5/5] kasan, slub: fix conflicts with CONFIG_SLAB_FREELIST_HARDENED Andrey Konovalov
  4 siblings, 1 reply; 17+ messages in thread
From: Andrey Konovalov @ 2019-02-11 21:59 UTC (permalink / raw)
  To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, kasan-dev, linux-mm, linux-kernel
  Cc: Qian Cai, Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov,
	Andrey Konovalov

kmemleak keeps two global variables, min_addr and max_addr, which store
the range of valid pointer values it has encountered; this range is
later used to speed up pointer lookups when scanning blocks.

With tagged pointers this range will get bigger than it needs to be.
This patch makes kmemleak untag pointers before saving them to min_addr
and max_addr and when performing a lookup.
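
A user-space sketch of the range inflation and of the fix (reset_tag()
below is a stand-in for kasan_reset_tag(); addresses, sizes and tags
are made up; assumes a 64-bit build with tags in bits 56-63):

  #include <stdint.h>
  #include <stdio.h>

  /* Stand-in for kasan_reset_tag(): strip the tag from the top byte. */
  static uintptr_t reset_tag(uintptr_t p)
  {
          return p & ~(0xffULL << 56);
  }

  int main(void)
  {
          /* Two nearby allocations that happen to carry different tags. */
          uintptr_t ptrs[] = {
                  0x1000 | (0x11ULL << 56),
                  0x2000 | (0xfeULL << 56),
          };
          uintptr_t size = 0x100;  /* pretend allocation size */
          uintptr_t min_addr = UINTPTR_MAX, max_addr = 0;
          int i;

          for (i = 0; i < 2; i++) {
                  /* Without this reset the tag bits would stretch the
                   * range across most of the address space. */
                  uintptr_t p = reset_tag(ptrs[i]);

                  if (p < min_addr)
                          min_addr = p;
                  if (p + size > max_addr)
                          max_addr = p + size;
          }
          printf("range: [%#lx, %#lx)\n", (unsigned long)min_addr,
                 (unsigned long)max_addr);
          return 0;
  }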

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kmemleak.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index f9d9dc250428..707fa5579f66 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -574,6 +574,7 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
 	unsigned long flags;
 	struct kmemleak_object *object, *parent;
 	struct rb_node **link, *rb_parent;
+	unsigned long untagged_ptr;
 
 	object = kmem_cache_alloc(object_cache, gfp_kmemleak_mask(gfp));
 	if (!object) {
@@ -619,8 +620,9 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
 
 	write_lock_irqsave(&kmemleak_lock, flags);
 
-	min_addr = min(min_addr, ptr);
-	max_addr = max(max_addr, ptr + size);
+	untagged_ptr = (unsigned long)kasan_reset_tag((void *)ptr);
+	min_addr = min(min_addr, untagged_ptr);
+	max_addr = max(max_addr, untagged_ptr + size);
 	link = &object_tree_root.rb_node;
 	rb_parent = NULL;
 	while (*link) {
@@ -1333,6 +1335,7 @@ static void scan_block(void *_start, void *_end,
 	unsigned long *start = PTR_ALIGN(_start, BYTES_PER_POINTER);
 	unsigned long *end = _end - (BYTES_PER_POINTER - 1);
 	unsigned long flags;
+	unsigned long untagged_ptr;
 
 	read_lock_irqsave(&kmemleak_lock, flags);
 	for (ptr = start; ptr < end; ptr++) {
@@ -1347,7 +1350,8 @@ static void scan_block(void *_start, void *_end,
 		pointer = *ptr;
 		kasan_enable_current();
 
-		if (pointer < min_addr || pointer >= max_addr)
+		untagged_ptr = (unsigned long)kasan_reset_tag((void *)pointer);
+		if (untagged_ptr < min_addr || untagged_ptr >= max_addr)
 			continue;
 
 		/*
-- 
2.20.1.791.gb4d0f1c61a-goog



* [PATCH 4/5] kasan, slub: move kasan_poison_slab hook before page_address
  2019-02-11 21:59 [PATCH 0/5] kasan: more tag based mode fixes Andrey Konovalov
                   ` (2 preceding siblings ...)
  2019-02-11 21:59 ` [PATCH 3/5] kmemleak: account for tagged pointers when calculating pointer range Andrey Konovalov
@ 2019-02-11 21:59 ` Andrey Konovalov
  2019-02-12 21:12   ` Andrew Morton
  2019-02-11 21:59 ` [PATCH 5/5] kasan, slub: fix conflicts with CONFIG_SLAB_FREELIST_HARDENED Andrey Konovalov
  4 siblings, 1 reply; 17+ messages in thread
From: Andrey Konovalov @ 2019-02-11 21:59 UTC (permalink / raw)
  To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, kasan-dev, linux-mm, linux-kernel
  Cc: Qian Cai, Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov,
	Andrey Konovalov

With tag based KASAN, page_address() looks at the page flags to see
whether the resulting pointer needs to have a tag set. Since we don't
want to set a tag when page_address() is called on SLAB pages, we call
page_kasan_tag_reset() in kasan_poison_slab(). However, in
allocate_slab() page_address() is called before kasan_poison_slab().
Fix it by changing the order.
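
A user-space analogy of the ordering bug (struct fake_page and the
fake_* helpers are illustrative stand-ins, not kernel types; in the
kernel the tag lives in page->flags and page_kasan_tag_reset() restores
it to its default):

  #include <stdint.h>
  #include <stdio.h>

  struct fake_page {
          uintptr_t addr;
          uint8_t kasan_tag;  /* kept in page->flags in the kernel */
  };

  /* Analogue of page_address(): applies the page's tag to the pointer. */
  static void *fake_page_address(struct fake_page *p)
  {
          return (void *)(p->addr | ((uintptr_t)p->kasan_tag << 56));
  }

  /* Analogue of kasan_poison_slab() calling page_kasan_tag_reset(). */
  static void fake_kasan_poison_slab(struct fake_page *p)
  {
          p->kasan_tag = 0;
  }

  int main(void)
  {
          struct fake_page page = { .addr = 0x1000, .kasan_tag = 0x2a };

          /* Old order: the pointer is fetched while the tag is still
           * set in the page flags, so it comes back tagged. */
          void *before = fake_page_address(&page);

          /* New order: reset first, then fetch an untagged pointer. */
          fake_kasan_poison_slab(&page);
          void *after = fake_page_address(&page);

          printf("before=%p after=%p\n", before, after);
          return 0;
  }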

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/slub.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 4a3d7686902f..ce874a5c9ee7 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1642,12 +1642,15 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 	if (page_is_pfmemalloc(page))
 		SetPageSlabPfmemalloc(page);
 
+	kasan_poison_slab(page);
+
 	start = page_address(page);
 
-	if (unlikely(s->flags & SLAB_POISON))
+	if (unlikely(s->flags & SLAB_POISON)) {
+		metadata_access_enable();
 		memset(start, POISON_INUSE, PAGE_SIZE << order);
-
-	kasan_poison_slab(page);
+		metadata_access_disable();
+	}
 
 	shuffle = shuffle_freelist(s, page);
 
-- 
2.20.1.791.gb4d0f1c61a-goog



* [PATCH 5/5] kasan, slub: fix conflicts with CONFIG_SLAB_FREELIST_HARDENED
  2019-02-11 21:59 [PATCH 0/5] kasan: more tag based mode fixes Andrey Konovalov
                   ` (3 preceding siblings ...)
  2019-02-11 21:59 ` [PATCH 4/5] kasan, slub: move kasan_poison_slab hook before page_address Andrey Konovalov
@ 2019-02-11 21:59 ` Andrey Konovalov
  2019-02-12  2:43   ` Qian Cai
  4 siblings, 1 reply; 17+ messages in thread
From: Andrey Konovalov @ 2019-02-11 21:59 UTC (permalink / raw)
  To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, kasan-dev, linux-mm, linux-kernel
  Cc: Qian Cai, Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov,
	Andrey Konovalov

CONFIG_SLAB_FREELIST_HARDENED hashes the freelist pointer with the address
of the object where the pointer gets stored. With tag based KASAN we don't
account for that when building freelist, as we call set_freepointer() with
the first argument untagged. This patch changes the code to properly
propagate tags throughout the loop.
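
A simplified model of the hardened freelist encoding and of why a tag
mismatch corrupts it (the XOR scheme mirrors SLUB's freelist_ptr(), but
the secret value and helper names are made up; assumes a 64-bit build):

  #include <stdint.h>
  #include <stdio.h>

  static const uintptr_t secret = 0xdeadbeefcafef00dULL;

  /* A stored freelist pointer is XORed with a secret and with the
   * address of the slot it is stored in. */
  static uintptr_t encode(uintptr_t fp, uintptr_t slot)
  {
          return fp ^ secret ^ slot;
  }

  static uintptr_t decode(uintptr_t stored, uintptr_t slot)
  {
          return stored ^ secret ^ slot;
  }

  int main(void)
  {
          uintptr_t next = 0x2000;
          uintptr_t slot = 0x1000;                        /* untagged */
          uintptr_t tagged_slot = slot | (0x2aULL << 56); /* tagged */

          /* Encoding with the untagged slot address but decoding with
           * the tagged one yields garbage -- the "Freepointer corrupt"
           * symptom reported later in this thread. */
          uintptr_t stored = encode(next, slot);

          printf("matched:    %#lx\n", (unsigned long)decode(stored, slot));
          printf("mismatched: %#lx\n",
                 (unsigned long)decode(stored, tagged_slot));
          return 0;
  }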

Reported-by: Qian Cai <cai@lca.pw>
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/slub.c | 20 +++++++-------------
 1 file changed, 7 insertions(+), 13 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index ce874a5c9ee7..0d32f8d30752 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -303,11 +303,6 @@ static inline void set_freepointer(struct kmem_cache *s, void *object, void *fp)
 		__p < (__addr) + (__objects) * (__s)->size; \
 		__p += (__s)->size)
 
-#define for_each_object_idx(__p, __idx, __s, __addr, __objects) \
-	for (__p = fixup_red_left(__s, __addr), __idx = 1; \
-		__idx <= __objects; \
-		__p += (__s)->size, __idx++)
-
 /* Determine object index from a given position */
 static inline unsigned int slab_index(void *p, struct kmem_cache *s, void *addr)
 {
@@ -1655,17 +1650,16 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 	shuffle = shuffle_freelist(s, page);
 
 	if (!shuffle) {
-		for_each_object_idx(p, idx, s, start, page->objects) {
-			if (likely(idx < page->objects)) {
-				next = p + s->size;
-				next = setup_object(s, page, next);
-				set_freepointer(s, p, next);
-			} else
-				set_freepointer(s, p, NULL);
-		}
 		start = fixup_red_left(s, start);
 		start = setup_object(s, page, start);
 		page->freelist = start;
+		for (idx = 0, p = start; idx < page->objects - 1; idx++) {
+			next = p + s->size;
+			next = setup_object(s, page, next);
+			set_freepointer(s, p, next);
+			p = next;
+		}
+		set_freepointer(s, p, NULL);
 	}
 
 	page->inuse = page->objects;
-- 
2.20.1.791.gb4d0f1c61a-goog



* Re: [PATCH 5/5] kasan, slub: fix conflicts with CONFIG_SLAB_FREELIST_HARDENED
  2019-02-11 21:59 ` [PATCH 5/5] kasan, slub: fix conflicts with CONFIG_SLAB_FREELIST_HARDENED Andrey Konovalov
@ 2019-02-12  2:43   ` Qian Cai
  2019-02-12 13:26     ` Andrey Konovalov
  0 siblings, 1 reply; 17+ messages in thread
From: Qian Cai @ 2019-02-12  2:43 UTC (permalink / raw)
  To: Andrey Konovalov, Andrey Ryabinin, Alexander Potapenko,
	Dmitry Vyukov, Catalin Marinas, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, kasan-dev, linux-mm,
	linux-kernel
  Cc: Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov



On 2/11/19 4:59 PM, Andrey Konovalov wrote:
> CONFIG_SLAB_FREELIST_HARDENED hashes the freelist pointer with the address
> of the object where the pointer gets stored. With tag based KASAN we don't
> account for that when building freelist, as we call set_freepointer() with
> the first argument untagged. This patch changes the code to properly
> propagate tags throughout the loop.
> 
> Reported-by: Qian Cai <cai@lca.pw>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
> ---
>  mm/slub.c | 20 +++++++-------------
>  1 file changed, 7 insertions(+), 13 deletions(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index ce874a5c9ee7..0d32f8d30752 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -303,11 +303,6 @@ static inline void set_freepointer(struct kmem_cache *s, void *object, void *fp)
>  		__p < (__addr) + (__objects) * (__s)->size; \
>  		__p += (__s)->size)
>  
> -#define for_each_object_idx(__p, __idx, __s, __addr, __objects) \
> -	for (__p = fixup_red_left(__s, __addr), __idx = 1; \
> -		__idx <= __objects; \
> -		__p += (__s)->size, __idx++)
> -
>  /* Determine object index from a given position */
>  static inline unsigned int slab_index(void *p, struct kmem_cache *s, void *addr)
>  {
> @@ -1655,17 +1650,16 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
>  	shuffle = shuffle_freelist(s, page);
>  
>  	if (!shuffle) {
> -		for_each_object_idx(p, idx, s, start, page->objects) {
> -			if (likely(idx < page->objects)) {
> -				next = p + s->size;
> -				next = setup_object(s, page, next);
> -				set_freepointer(s, p, next);
> -			} else
> -				set_freepointer(s, p, NULL);
> -		}
>  		start = fixup_red_left(s, start);
>  		start = setup_object(s, page, start);
>  		page->freelist = start;
> +		for (idx = 0, p = start; idx < page->objects - 1; idx++) {
> +			next = p + s->size;
> +			next = setup_object(s, page, next);
> +			set_freepointer(s, p, next);
> +			p = next;
> +		}
> +		set_freepointer(s, p, NULL);
>  	}
>  
>  	page->inuse = page->objects;
> 

Well, this one patch does not work here, as it throws endless errors below
during boot. Still need this patch to fix it.

https://marc.info/?l=linux-mm&m=154955366113951&w=2

[   85.744772] BUG kmemleak_object (Tainted: G    B        L   ): Freepointer
corrupt
[   85.744776]
-----------------------------------------------------------------------------
[   85.744776]
[   85.744788] INFO: Allocated in create_object+0x88/0x9c8 age=2564 cpu=153 pid=1
[   85.744797] 	kmem_cache_alloc+0x39c/0x4ec
[   85.744803] 	create_object+0x88/0x9c8
[   85.744811] 	kmemleak_alloc+0xbc/0x180
[   85.744818] 	kmem_cache_alloc+0x3ec/0x4ec
[   85.744825] 	acpi_ut_create_generic_state+0x64/0xc4
[   85.744832] 	acpi_ut_create_pkg_state+0x24/0x1c8
[   85.744840] 	acpi_ut_walk_package_tree+0x268/0x564
[   85.744848] 	acpi_ns_init_one_package+0x80/0x114
[   85.744856] 	acpi_ns_init_one_object+0x214/0x3d8
[   85.744862] 	acpi_ns_walk_namespace+0x288/0x384
[   85.744869] 	acpi_walk_namespace+0xac/0xe8
[   85.744877] 	acpi_ns_initialize_objects+0x50/0x98
[   85.744883] 	acpi_load_tables+0xac/0x120
[   85.744891] 	acpi_init+0x128/0x850
[   85.744898] 	do_one_initcall+0x3ac/0x8c0
[   85.744906] 	kernel_init_freeable+0xcdc/0x1104
[   85.744916] INFO: Freed in free_object_rcu+0x200/0x228 age=3 cpu=153 pid=0
[   85.744923] 	free_object_rcu+0x200/0x228
[   85.744931] 	rcu_process_callbacks+0xb00/0x12c0
[   85.744937] 	__do_softirq+0x644/0xfd0
[   85.744944] 	irq_exit+0x29c/0x370
[   85.744952] 	__handle_domain_irq+0xe0/0x1c4
[   85.744958] 	gic_handle_irq+0x1c4/0x3b0
[   85.744964] 	el1_irq+0xb0/0x140
[   85.744971] 	arch_cpu_idle+0x26c/0x594
[   85.744978] 	default_idle_call+0x44/0x5c
[   85.744985] 	do_idle+0x180/0x260
[   85.744993] 	cpu_startup_entry+0x24/0x28
[   85.745001] 	secondary_start_kernel+0x36c/0x440
[   85.745009] INFO: Slab 0x(____ptrval____) objects=91 used=0
fp=0x(____ptrval____) flags=0x17ffffffc000200
[   85.745015] INFO: Object 0x(____ptrval____) @offset=35296 fp=0x(____ptrval____)

kkkkk4.226750] Redzone (____ptrval____): bb bb bb bb bb bb bb bb bb bb bb bb bb
bb bb bb  ................
[   84.22[   84.226765] ORedzone (____ptrptrval____): 5a worker/223:0 Tainted: G
   B        L    5.0.0-rc6+ #36
[   84.226790] Hardware name: HPE Apollo 70             /C01_APACHE_MB         ,
BIOS L50_5.13_1.0.6 07/10/2018
[   84.226798] Workqueue: events free_obj_work
[   84.226802] Call trace:
[   84.226809]  dump_backtrace+0x0/0x450
[   84.226815]  show_stack+0x20/0x2c
[   84.226822]  __dump_stack+0x20/0x28
[   84.226828]  dump_stack+0xa0/0xfc
[   84.226835]  print_trailer+0x1a8/0x1bc
[   84.226842]  object_err+0x40/0x50
[   84.226848]  check_object+0x214/0x2b8
[   84.226854]  __free_slab+0x9c/0x31c
[   84.226860]  discard_slab+0x78/0xa8
[   84.226866]  kmem_cache_free+0x99c/0x9f0
[   84.226873]  free_obj_work+0x92c/0xa44
[   84.226879]  process_one_work+0x894/0x1280
[   84.226885]  worker_thread+0x684/0xa1c
[   84.226892]  kthread+0x2cc/0x2e8
[   84.226898]  ret_from_fork+0x10/0x18
[   84.229197]


* Re: [PATCH 5/5] kasan, slub: fix conflicts with CONFIG_SLAB_FREELIST_HARDENED
  2019-02-12  2:43   ` Qian Cai
@ 2019-02-12 13:26     ` Andrey Konovalov
  2019-02-12 13:43       ` Qian Cai
  0 siblings, 1 reply; 17+ messages in thread
From: Andrey Konovalov @ 2019-02-12 13:26 UTC (permalink / raw)
  To: Qian Cai
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, kasan-dev,
	Linux Memory Management List, LKML, Vincenzo Frascino,
	Kostya Serebryany, Evgeniy Stepanov

On Tue, Feb 12, 2019 at 3:43 AM Qian Cai <cai@lca.pw> wrote:
>
>
>
> On 2/11/19 4:59 PM, Andrey Konovalov wrote:
> > CONFIG_SLAB_FREELIST_HARDENED hashes the freelist pointer with the address
> > of the object where the pointer gets stored. With tag based KASAN we don't
> > account for that when building freelist, as we call set_freepointer() with
> > the first argument untagged. This patch changes the code to properly
> > propagate tags throughout the loop.
> >
> > Reported-by: Qian Cai <cai@lca.pw>
> > Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
> > ---
> >  mm/slub.c | 20 +++++++-------------
> >  1 file changed, 7 insertions(+), 13 deletions(-)
> >
> > diff --git a/mm/slub.c b/mm/slub.c
> > index ce874a5c9ee7..0d32f8d30752 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -303,11 +303,6 @@ static inline void set_freepointer(struct kmem_cache *s, void *object, void *fp)
> >               __p < (__addr) + (__objects) * (__s)->size; \
> >               __p += (__s)->size)
> >
> > -#define for_each_object_idx(__p, __idx, __s, __addr, __objects) \
> > -     for (__p = fixup_red_left(__s, __addr), __idx = 1; \
> > -             __idx <= __objects; \
> > -             __p += (__s)->size, __idx++)
> > -
> >  /* Determine object index from a given position */
> >  static inline unsigned int slab_index(void *p, struct kmem_cache *s, void *addr)
> >  {
> > @@ -1655,17 +1650,16 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
> >       shuffle = shuffle_freelist(s, page);
> >
> >       if (!shuffle) {
> > -             for_each_object_idx(p, idx, s, start, page->objects) {
> > -                     if (likely(idx < page->objects)) {
> > -                             next = p + s->size;
> > -                             next = setup_object(s, page, next);
> > -                             set_freepointer(s, p, next);
> > -                     } else
> > -                             set_freepointer(s, p, NULL);
> > -             }
> >               start = fixup_red_left(s, start);
> >               start = setup_object(s, page, start);
> >               page->freelist = start;
> > +             for (idx = 0, p = start; idx < page->objects - 1; idx++) {
> > +                     next = p + s->size;
> > +                     next = setup_object(s, page, next);
> > +                     set_freepointer(s, p, next);
> > +                     p = next;
> > +             }
> > +             set_freepointer(s, p, NULL);
> >       }
> >
> >       page->inuse = page->objects;
> >
>
> Well, this one patch does not work here, as it throws endless errors below
> during boot. Still need this patch to fix it.

Hm, did you apply all 6 patches (the one that you sent and these five)?

>
> https://marc.info/?l=linux-mm&m=154955366113951&w=2
>
> [   85.744772] BUG kmemleak_object (Tainted: G    B        L   ): Freepointer
> corrupt
> [   85.744776]
> -----------------------------------------------------------------------------
> [   85.744776]
> [   85.744788] INFO: Allocated in create_object+0x88/0x9c8 age=2564 cpu=153 pid=1
> [   85.744797]  kmem_cache_alloc+0x39c/0x4ec
> [   85.744803]  create_object+0x88/0x9c8
> [   85.744811]  kmemleak_alloc+0xbc/0x180
> [   85.744818]  kmem_cache_alloc+0x3ec/0x4ec
> [   85.744825]  acpi_ut_create_generic_state+0x64/0xc4
> [   85.744832]  acpi_ut_create_pkg_state+0x24/0x1c8
> [   85.744840]  acpi_ut_walk_package_tree+0x268/0x564
> [   85.744848]  acpi_ns_init_one_package+0x80/0x114
> [   85.744856]  acpi_ns_init_one_object+0x214/0x3d8
> [   85.744862]  acpi_ns_walk_namespace+0x288/0x384
> [   85.744869]  acpi_walk_namespace+0xac/0xe8
> [   85.744877]  acpi_ns_initialize_objects+0x50/0x98
> [   85.744883]  acpi_load_tables+0xac/0x120
> [   85.744891]  acpi_init+0x128/0x850
> [   85.744898]  do_one_initcall+0x3ac/0x8c0
> [   85.744906]  kernel_init_freeable+0xcdc/0x1104
> [   85.744916] INFO: Freed in free_object_rcu+0x200/0x228 age=3 cpu=153 pid=0
> [   85.744923]  free_object_rcu+0x200/0x228
> [   85.744931]  rcu_process_callbacks+0xb00/0x12c0
> [   85.744937]  __do_softirq+0x644/0xfd0
> [   85.744944]  irq_exit+0x29c/0x370
> [   85.744952]  __handle_domain_irq+0xe0/0x1c4
> [   85.744958]  gic_handle_irq+0x1c4/0x3b0
> [   85.744964]  el1_irq+0xb0/0x140
> [   85.744971]  arch_cpu_idle+0x26c/0x594
> [   85.744978]  default_idle_call+0x44/0x5c
> [   85.744985]  do_idle+0x180/0x260
> [   85.744993]  cpu_startup_entry+0x24/0x28
> [   85.745001]  secondary_start_kernel+0x36c/0x440
> [   85.745009] INFO: Slab 0x(____ptrval____) objects=91 used=0
> fp=0x(____ptrval____) flags=0x17ffffffc000200
> [   85.745015] INFO: Object 0x(____ptrval____) @offset=35296 fp=0x(____ptrval____)
>
> kkkkk4.226750] Redzone (____ptrval____): bb bb bb bb bb bb bb bb bb bb bb bb bb
> bb bb bb  ................
> [   84.22[   84.226765] ORedzone (____ptrptrval____): 5a worker/223:0 Tainted: G
>    B        L    5.0.0-rc6+ #36
> [   84.226790] Hardware name: HPE Apollo 70             /C01_APACHE_MB         ,
> BIOS L50_5.13_1.0.6 07/10/2018
> [   84.226798] Workqueue: events free_obj_work
> [   84.226802] Call trace:
> [   84.226809]  dump_backtrace+0x0/0x450
> [   84.226815]  show_stack+0x20/0x2c
> [   84.226822]  __dump_stack+0x20/0x28
> [   84.226828]  dump_stack+0xa0/0xfc
> [   84.226835]  print_trailer+0x1a8/0x1bc
> [   84.226842]  object_err+0x40/0x50
> [   84.226848]  check_object+0x214/0x2b8
> [   84.226854]  __free_slab+0x9c/0x31c
> [   84.226860]  discard_slab+0x78/0xa8
> [   84.226866]  kmem_cache_free+0x99c/0x9f0
> [   84.226873]  free_obj_work+0x92c/0xa44
> [   84.226879]  process_one_work+0x894/0x1280
> [   84.226885]  worker_thread+0x684/0xa1c
> [   84.226892]  kthread+0x2cc/0x2e8
> [   84.226898]  ret_from_fork+0x10/0x18
> [   84.229197]


* Re: [PATCH 5/5] kasan, slub: fix conflicts with CONFIG_SLAB_FREELIST_HARDENED
  2019-02-12 13:26     ` Andrey Konovalov
@ 2019-02-12 13:43       ` Qian Cai
  2019-02-12 14:42         ` Andrey Konovalov
  0 siblings, 1 reply; 17+ messages in thread
From: Qian Cai @ 2019-02-12 13:43 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, kasan-dev,
	Linux Memory Management List, LKML, Vincenzo Frascino,
	Kostya Serebryany, Evgeniy Stepanov



On 2/12/19 8:26 AM, Andrey Konovalov wrote:
> Hm, did you apply all 6 patches (the one that you sent and these five)
Yes.


* Re: [PATCH 5/5] kasan, slub: fix conflicts with CONFIG_SLAB_FREELIST_HARDENED
  2019-02-12 13:43       ` Qian Cai
@ 2019-02-12 14:42         ` Andrey Konovalov
  2019-02-12 16:07           ` Qian Cai
  0 siblings, 1 reply; 17+ messages in thread
From: Andrey Konovalov @ 2019-02-12 14:42 UTC (permalink / raw)
  To: Qian Cai
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, kasan-dev,
	Linux Memory Management List, LKML, Vincenzo Frascino,
	Kostya Serebryany, Evgeniy Stepanov

On Tue, Feb 12, 2019 at 2:43 PM Qian Cai <cai@lca.pw> wrote:
>
>
>
> On 2/12/19 8:26 AM, Andrey Konovalov wrote:
> > Hm, did you apply all 6 patches (the one that you sent and these five)
> Yes.

I'm failing to reproduce this in QEMU. You're still using the same
config, right? Could you share the whole dmesg up to the first BUG?


* Re: [PATCH 2/5] kasan, kmemleak: pass tagged pointers to kmemleak
  2019-02-11 21:59 ` [PATCH 2/5] kasan, kmemleak: pass tagged pointers to kmemleak Andrey Konovalov
@ 2019-02-12 15:56   ` Vincenzo Frascino
  2019-02-13 13:07     ` Andrey Konovalov
  0 siblings, 1 reply; 17+ messages in thread
From: Vincenzo Frascino @ 2019-02-12 15:56 UTC (permalink / raw)
  To: Andrey Konovalov, Andrey Ryabinin, Alexander Potapenko,
	Dmitry Vyukov, Catalin Marinas, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, kasan-dev, linux-mm,
	linux-kernel
  Cc: Qian Cai, Kostya Serebryany, Evgeniy Stepanov

On 11/02/2019 21:59, Andrey Konovalov wrote:
> Right now we call kmemleak hooks before assigning tags to pointers in
> KASAN hooks. As a result, when an object gets allocated, kmemleak sees
> a differently tagged pointer, compared to the one it sees when the object
> gets freed. Fix it by calling KASAN hooks before kmemleak's ones.
>

Nit: Could you please add comments to the code? That should prevent the
code from being refactored in the future in a way that reintroduces the
same issue.

> Reported-by: Qian Cai <cai@lca.pw>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
> ---
>  mm/slab.h        | 6 ++----
>  mm/slab_common.c | 2 +-
>  mm/slub.c        | 3 ++-
>  3 files changed, 5 insertions(+), 6 deletions(-)
> 
> diff --git a/mm/slab.h b/mm/slab.h
> index 4190c24ef0e9..638ea1b25d39 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -437,11 +437,9 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags,
>  
>  	flags &= gfp_allowed_mask;
>  	for (i = 0; i < size; i++) {
> -		void *object = p[i];
> -
> -		kmemleak_alloc_recursive(object, s->object_size, 1,
> +		p[i] = kasan_slab_alloc(s, p[i], flags);
> +		kmemleak_alloc_recursive(p[i], s->object_size, 1,
>  					 s->flags, flags);
> -		p[i] = kasan_slab_alloc(s, object, flags);
>  	}
>  
>  	if (memcg_kmem_enabled())
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index 81732d05e74a..fe524c8d0246 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -1228,8 +1228,8 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
>  	flags |= __GFP_COMP;
>  	page = alloc_pages(flags, order);
>  	ret = page ? page_address(page) : NULL;
> -	kmemleak_alloc(ret, size, 1, flags);
>  	ret = kasan_kmalloc_large(ret, size, flags);
> +	kmemleak_alloc(ret, size, 1, flags);
>  	return ret;
>  }
>  EXPORT_SYMBOL(kmalloc_order);
> diff --git a/mm/slub.c b/mm/slub.c
> index 1e3d0ec4e200..4a3d7686902f 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1374,8 +1374,9 @@ static inline void dec_slabs_node(struct kmem_cache *s, int node,
>   */
>  static inline void *kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
>  {
> +	ptr = kasan_kmalloc_large(ptr, size, flags);
>  	kmemleak_alloc(ptr, size, 1, flags);
> -	return kasan_kmalloc_large(ptr, size, flags);
> +	return ptr;
>  }
>  
>  static __always_inline void kfree_hook(void *x)
> 

-- 
Regards,
Vincenzo


* Re: [PATCH 5/5] kasan, slub: fix conflicts with CONFIG_SLAB_FREELIST_HARDENED
  2019-02-12 14:42         ` Andrey Konovalov
@ 2019-02-12 16:07           ` Qian Cai
  2019-02-13  2:15             ` Qian Cai
  0 siblings, 1 reply; 17+ messages in thread
From: Qian Cai @ 2019-02-12 16:07 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, kasan-dev,
	Linux Memory Management List, LKML, Vincenzo Frascino,
	Kostya Serebryany, Evgeniy Stepanov



On 2/12/19 9:42 AM, Andrey Konovalov wrote:
> On Tue, Feb 12, 2019 at 2:43 PM Qian Cai <cai@lca.pw> wrote:
>>
>>
>>
>> On 2/12/19 8:26 AM, Andrey Konovalov wrote:
>>> Hm, did you apply all 6 patches (the one that you sent and these five)
>> Yes.
> 
> I'm failing to reproduce this in QEMU. You're still using the same
> config, right? Could you share whole dmesg until the first BUG?
> 

Yes, same config and,

https://git.sr.ht/~cai/linux-debug/tree/master/dmesg


* Re: [PATCH 4/5] kasan, slub: move kasan_poison_slab hook before page_address
  2019-02-11 21:59 ` [PATCH 4/5] kasan, slub: move kasan_poison_slab hook before page_address Andrey Konovalov
@ 2019-02-12 21:12   ` Andrew Morton
  2019-02-13 13:25     ` Andrey Konovalov
  0 siblings, 1 reply; 17+ messages in thread
From: Andrew Morton @ 2019-02-12 21:12 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, kasan-dev, linux-mm, linux-kernel, Qian Cai,
	Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov

On Mon, 11 Feb 2019 22:59:53 +0100 Andrey Konovalov <andreyknvl@google.com> wrote:

> With tag based KASAN page_address() looks at the page flags to see
> whether the resulting pointer needs to have a tag set. Since we don't
> want to set a tag when page_address() is called on SLAB pages, we call
> page_kasan_tag_reset() in kasan_poison_slab(). However in allocate_slab()
> page_address() is called before kasan_poison_slab(). Fix it by changing
> the order.
> 
> ...
>
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1642,12 +1642,15 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
>  	if (page_is_pfmemalloc(page))
>  		SetPageSlabPfmemalloc(page);
>  
> +	kasan_poison_slab(page);
> +
>  	start = page_address(page);
>  
> -	if (unlikely(s->flags & SLAB_POISON))
> +	if (unlikely(s->flags & SLAB_POISON)) {
> +		metadata_access_enable();
>  		memset(start, POISON_INUSE, PAGE_SIZE << order);
> -
> -	kasan_poison_slab(page);
> +		metadata_access_disable();
> +	}
>  
>  	shuffle = shuffle_freelist(s, page);

This doesn't compile when CONFIG_SLUB_DEBUG=n.  Please review carefully:

--- a/mm/slub.c~kasan-slub-move-kasan_poison_slab-hook-before-page_address-fix
+++ a/mm/slub.c
@@ -1357,6 +1357,14 @@ slab_flags_t kmem_cache_flags(unsigned i
 
 #define disable_higher_order_debug 0
 
+static inline void metadata_access_enable(void)
+{
+}
+
+static inline void metadata_access_disable(void)
+{
+}
+
 static inline unsigned long slabs_node(struct kmem_cache *s, int node)
 							{ return 0; }
 static inline unsigned long node_nr_slabs(struct kmem_cache_node *n)
_



* Re: [PATCH 5/5] kasan, slub: fix conflicts with CONFIG_SLAB_FREELIST_HARDENED
  2019-02-12 16:07           ` Qian Cai
@ 2019-02-13  2:15             ` Qian Cai
  0 siblings, 0 replies; 17+ messages in thread
From: Qian Cai @ 2019-02-13  2:15 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, kasan-dev,
	Linux Memory Management List, LKML, Vincenzo Frascino,
	Kostya Serebryany, Evgeniy Stepanov



On 2/12/19 11:07 AM, Qian Cai wrote:
> https://git.sr.ht/~cai/linux-debug/tree/master/dmesg
> 

FYI, I just sent a patch to take care of this.
https://marc.info/?l=linux-mm&m=155002356527913&w=2


* Re: [PATCH 2/5] kasan, kmemleak: pass tagged pointers to kmemleak
  2019-02-12 15:56   ` Vincenzo Frascino
@ 2019-02-13 13:07     ` Andrey Konovalov
  0 siblings, 0 replies; 17+ messages in thread
From: Andrey Konovalov @ 2019-02-13 13:07 UTC (permalink / raw)
  To: Vincenzo Frascino
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, kasan-dev,
	Linux Memory Management List, LKML, Qian Cai, Kostya Serebryany,
	Evgeniy Stepanov

On Tue, Feb 12, 2019 at 4:57 PM Vincenzo Frascino
<vincenzo.frascino@arm.com> wrote:
>
> On 11/02/2019 21:59, Andrey Konovalov wrote:
> > Right now we call kmemleak hooks before assigning tags to pointers in
> > KASAN hooks. As a result, when an object gets allocated, kmemleak sees
> > a differently tagged pointer, compared to the one it sees when the object
> > gets freed. Fix it by calling KASAN hooks before kmemleak's ones.
> >
>
> Nit: Could you please add comments to the code? That should prevent the
> code from being refactored in the future in a way that reintroduces the
> same issue.

Sure, I'll send v2 with comments, thanks!

>
> > Reported-by: Qian Cai <cai@lca.pw>
> > Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
> > ---
> >  mm/slab.h        | 6 ++----
> >  mm/slab_common.c | 2 +-
> >  mm/slub.c        | 3 ++-
> >  3 files changed, 5 insertions(+), 6 deletions(-)
> >
> > diff --git a/mm/slab.h b/mm/slab.h
> > index 4190c24ef0e9..638ea1b25d39 100644
> > --- a/mm/slab.h
> > +++ b/mm/slab.h
> > @@ -437,11 +437,9 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags,
> >
> >       flags &= gfp_allowed_mask;
> >       for (i = 0; i < size; i++) {
> > -             void *object = p[i];
> > -
> > -             kmemleak_alloc_recursive(object, s->object_size, 1,
> > +             p[i] = kasan_slab_alloc(s, p[i], flags);
> > +             kmemleak_alloc_recursive(p[i], s->object_size, 1,
> >                                        s->flags, flags);
> > -             p[i] = kasan_slab_alloc(s, object, flags);
> >       }
> >
> >       if (memcg_kmem_enabled())
> > diff --git a/mm/slab_common.c b/mm/slab_common.c
> > index 81732d05e74a..fe524c8d0246 100644
> > --- a/mm/slab_common.c
> > +++ b/mm/slab_common.c
> > @@ -1228,8 +1228,8 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
> >       flags |= __GFP_COMP;
> >       page = alloc_pages(flags, order);
> >       ret = page ? page_address(page) : NULL;
> > -     kmemleak_alloc(ret, size, 1, flags);
> >       ret = kasan_kmalloc_large(ret, size, flags);
> > +     kmemleak_alloc(ret, size, 1, flags);
> >       return ret;
> >  }
> >  EXPORT_SYMBOL(kmalloc_order);
> > diff --git a/mm/slub.c b/mm/slub.c
> > index 1e3d0ec4e200..4a3d7686902f 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -1374,8 +1374,9 @@ static inline void dec_slabs_node(struct kmem_cache *s, int node,
> >   */
> >  static inline void *kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
> >  {
> > +     ptr = kasan_kmalloc_large(ptr, size, flags);
> >       kmemleak_alloc(ptr, size, 1, flags);
> > -     return kasan_kmalloc_large(ptr, size, flags);
> > +     return ptr;
> >  }
> >
> >  static __always_inline void kfree_hook(void *x)
> >
>
> --
> Regards,
> Vincenzo
>
> --
> You received this message because you are subscribed to the Google Groups "kasan-dev" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to kasan-dev+unsubscribe@googlegroups.com.
> To post to this group, send email to kasan-dev@googlegroups.com.
> To view this discussion on the web visit https://groups.google.com/d/msgid/kasan-dev/f57831be-c57a-4a9e-992e-1f193866467b%40arm.com.
> For more options, visit https://groups.google.com/d/optout.


* Re: [PATCH 4/5] kasan, slub: move kasan_poison_slab hook before page_address
  2019-02-12 21:12   ` Andrew Morton
@ 2019-02-13 13:25     ` Andrey Konovalov
  0 siblings, 0 replies; 17+ messages in thread
From: Andrey Konovalov @ 2019-02-13 13:25 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, kasan-dev, Linux Memory Management List, LKML,
	Qian Cai, Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov

On Tue, Feb 12, 2019 at 10:12 PM Andrew Morton
<akpm@linux-foundation.org> wrote:
>
> On Mon, 11 Feb 2019 22:59:53 +0100 Andrey Konovalov <andreyknvl@google.com> wrote:
>
> > With tag based KASAN page_address() looks at the page flags to see
> > whether the resulting pointer needs to have a tag set. Since we don't
> > want to set a tag when page_address() is called on SLAB pages, we call
> > page_kasan_tag_reset() in kasan_poison_slab(). However in allocate_slab()
> > page_address() is called before kasan_poison_slab(). Fix it by changing
> > the order.
> >
> > ...
> >
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -1642,12 +1642,15 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
> >       if (page_is_pfmemalloc(page))
> >               SetPageSlabPfmemalloc(page);
> >
> > +     kasan_poison_slab(page);
> > +
> >       start = page_address(page);
> >
> > -     if (unlikely(s->flags & SLAB_POISON))
> > +     if (unlikely(s->flags & SLAB_POISON)) {
> > +             metadata_access_enable();
> >               memset(start, POISON_INUSE, PAGE_SIZE << order);
> > -
> > -     kasan_poison_slab(page);
> > +             metadata_access_disable();
> > +     }
> >
> >       shuffle = shuffle_freelist(s, page);
>
> This doesn't compile when CONFIG_SLUB_DEBUG=n.  Please review carefully:

Sorry, I missed this. I think it makes more sense to move this memset
into a separate function under the CONFIG_SLUB_DEBUG ifdef, since all
the other poisoning code is also there. I'll send a v2.

>
> --- a/mm/slub.c~kasan-slub-move-kasan_poison_slab-hook-before-page_address-fix
> +++ a/mm/slub.c
> @@ -1357,6 +1357,14 @@ slab_flags_t kmem_cache_flags(unsigned i
>
>  #define disable_higher_order_debug 0
>
> +static inline void metadata_access_enable(void)
> +{
> +}
> +
> +static inline void metadata_access_disable(void)
> +{
> +}
> +
>  static inline unsigned long slabs_node(struct kmem_cache *s, int node)
>                                                         { return 0; }
>  static inline unsigned long node_nr_slabs(struct kmem_cache_node *n)
> _
>


* Re: [PATCH 3/5] kmemleak: account for tagged pointers when calculating pointer range
  2019-02-11 21:59 ` [PATCH 3/5] kmemleak: account for tagged pointers when calculating pointer range Andrey Konovalov
@ 2019-02-15 14:05   ` Catalin Marinas
  0 siblings, 0 replies; 17+ messages in thread
From: Catalin Marinas @ 2019-02-15 14:05 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, kasan-dev, linux-mm, linux-kernel, Qian Cai,
	Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov

On Mon, Feb 11, 2019 at 10:59:52PM +0100, Andrey Konovalov wrote:
> kmemleak keeps two global variables, min_addr and max_addr, which store
> the range of valid pointer values it has encountered; this range is
> later used to speed up pointer lookups when scanning blocks.
> 
> With tagged pointers this range will get bigger than it needs to be.
> This patch makes kmemleak untag pointers before saving them to min_addr
> and max_addr and when performing a lookup.
> 
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>

Acked-by: Catalin Marinas <catalin.marinas@arm.com>

