* [PATCH v2 0/5] kasan: more tag based mode fixes
@ 2019-02-13 13:58 Andrey Konovalov
  2019-02-13 13:58 ` [PATCH v2 1/5] kasan: fix assigning tags twice Andrey Konovalov
                   ` (5 more replies)
  0 siblings, 6 replies; 10+ messages in thread
From: Andrey Konovalov @ 2019-02-13 13:58 UTC (permalink / raw)
  To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, kasan-dev, linux-mm, linux-kernel
  Cc: Qian Cai, Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov,
	Andrey Konovalov

Changes in v2:
- Add comments about kmemleak vs KASAN hooks order.
- Fix compilation error when CONFIG_SLUB_DEBUG is not defined.

Andrey Konovalov (5):
  kasan: fix assigning tags twice
  kasan, kmemleak: pass tagged pointers to kmemleak
  kmemleak: account for tagged pointers when calculating pointer range
  kasan, slub: move kasan_poison_slab hook before page_address
  kasan, slub: fix conflicts with CONFIG_SLAB_FREELIST_HARDENED

 mm/kasan/common.c | 29 +++++++++++++++++------------
 mm/kmemleak.c     | 10 +++++++---
 mm/slab.h         |  7 +++----
 mm/slab_common.c  |  3 ++-
 mm/slub.c         | 43 +++++++++++++++++++++++++------------------
 5 files changed, 54 insertions(+), 38 deletions(-)

-- 
2.20.1.791.gb4d0f1c61a-goog


* [PATCH v2 1/5] kasan: fix assigning tags twice
  2019-02-13 13:58 [PATCH v2 0/5] kasan: more tag based mode fixes Andrey Konovalov
@ 2019-02-13 13:58 ` Andrey Konovalov
  2019-02-13 13:58 ` [PATCH v2 2/5] kasan, kmemleak: pass tagged pointers to kmemleak Andrey Konovalov
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Andrey Konovalov @ 2019-02-13 13:58 UTC (permalink / raw)
  To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, kasan-dev, linux-mm, linux-kernel
  Cc: Qian Cai, Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov,
	Andrey Konovalov

When an object is kmalloc()'ed, two hooks are called: kasan_slab_alloc()
and kasan_kmalloc(). Right now we assign a tag twice, once in each of
the hooks. Fix it by assigning a tag only in the former hook.
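
To illustrate the fix, here is a minimal userspace sketch (not the
kernel implementation; the 8-bit tag in the pointer's top byte and the
sequential tag generator are simplifying assumptions) of the keep_tag
logic: the first hook assigns a fresh tag, and a later hook running on
the same object reuses the tag already present in the pointer.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TAG_SHIFT	56
#define TAG_MASK	(0xffUL << TAG_SHIFT)

static uint8_t get_tag(const void *object)
{
	return (uint8_t)((uintptr_t)object >> TAG_SHIFT);
}

static void *set_tag(const void *object, uint8_t tag)
{
	return (void *)(((uintptr_t)object & ~TAG_MASK) |
			((uintptr_t)tag << TAG_SHIFT));
}

static uint8_t assign_tag(const void *object, bool keep_tag)
{
	static uint8_t next_tag = 1;

	/* A later hook on the same object: reuse the existing tag. */
	if (keep_tag)
		return get_tag(object);
	/* The first hook: generate a fresh tag. */
	return next_tag++;
}

int main(void)
{
	char buf[16];
	void *p = buf;

	p = set_tag(p, assign_tag(p, false));	/* kasan_slab_alloc() */
	p = set_tag(p, assign_tag(p, true));	/* kasan_kmalloc(): unchanged */
	printf("tag: 0x%02x\n", get_tag(p));	/* prints 0x01, not 0x02 */
	return 0;
}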

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/common.c | 29 +++++++++++++++++------------
 1 file changed, 17 insertions(+), 12 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 73c9cbfdedf4..09b534fbba17 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -361,10 +361,15 @@ void kasan_poison_object_data(struct kmem_cache *cache, void *object)
  *    get different tags.
  */
 static u8 assign_tag(struct kmem_cache *cache, const void *object,
-			bool init, bool krealloc)
+			bool init, bool keep_tag)
 {
-	/* Reuse the same tag for krealloc'ed objects. */
-	if (krealloc)
+	/*
+	 * 1. When an object is kmalloc()'ed, two hooks are called:
+	 *    kasan_slab_alloc() and kasan_kmalloc(). We assign the
+	 *    tag only in the first one.
+	 * 2. We reuse the same tag for krealloc'ed objects.
+	 */
+	if (keep_tag)
 		return get_tag(object);
 
 	/*
@@ -405,12 +410,6 @@ void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
 	return (void *)object;
 }
 
-void * __must_check kasan_slab_alloc(struct kmem_cache *cache, void *object,
-					gfp_t flags)
-{
-	return kasan_kmalloc(cache, object, cache->object_size, flags);
-}
-
 static inline bool shadow_invalid(u8 tag, s8 shadow_byte)
 {
 	if (IS_ENABLED(CONFIG_KASAN_GENERIC))
@@ -467,7 +466,7 @@ bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
 }
 
 static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
-				size_t size, gfp_t flags, bool krealloc)
+				size_t size, gfp_t flags, bool keep_tag)
 {
 	unsigned long redzone_start;
 	unsigned long redzone_end;
@@ -485,7 +484,7 @@ static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
 				KASAN_SHADOW_SCALE_SIZE);
 
 	if (IS_ENABLED(CONFIG_KASAN_SW_TAGS))
-		tag = assign_tag(cache, object, false, krealloc);
+		tag = assign_tag(cache, object, false, keep_tag);
 
 	/* Tag is ignored in set_tag without CONFIG_KASAN_SW_TAGS */
 	kasan_unpoison_shadow(set_tag(object, tag), size);
@@ -498,10 +497,16 @@ static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
 	return set_tag(object, tag);
 }
 
+void * __must_check kasan_slab_alloc(struct kmem_cache *cache, void *object,
+					gfp_t flags)
+{
+	return __kasan_kmalloc(cache, object, cache->object_size, flags, false);
+}
+
 void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object,
 				size_t size, gfp_t flags)
 {
-	return __kasan_kmalloc(cache, object, size, flags, false);
+	return __kasan_kmalloc(cache, object, size, flags, true);
 }
 EXPORT_SYMBOL(kasan_kmalloc);
 
-- 
2.20.1.791.gb4d0f1c61a-goog


* [PATCH v2 2/5] kasan, kmemleak: pass tagged pointers to kmemleak
  2019-02-13 13:58 [PATCH v2 0/5] kasan: more tag based mode fixes Andrey Konovalov
  2019-02-13 13:58 ` [PATCH v2 1/5] kasan: fix assigning tags twice Andrey Konovalov
@ 2019-02-13 13:58 ` Andrey Konovalov
  2019-02-13 13:58 ` [PATCH v2 3/5] kmemleak: account for tagged pointers when calculating pointer range Andrey Konovalov
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Andrey Konovalov @ 2019-02-13 13:58 UTC (permalink / raw)
  To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, kasan-dev, linux-mm, linux-kernel
  Cc: Qian Cai, Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov,
	Andrey Konovalov

Right now we call the kmemleak hooks before assigning tags to pointers
in the KASAN hooks. As a result, when an object gets allocated, kmemleak
sees a differently tagged pointer than the one it sees when the object
gets freed. Fix it by calling the KASAN hooks before the kmemleak ones.
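
A minimal sketch of the ordering requirement (the hook names below are
hypothetical stand-ins, not the kernel API): kmemleak looks up the freed
pointer among the pointers it recorded at allocation time, so both sides
must observe the same tag.

#include <assert.h>
#include <stdint.h>

#define TAG_SHIFT 56

static void *recorded;	/* toy one-slot "kmemleak" registry */

/* Stand-in for the KASAN alloc hook: may retag the pointer. */
static void *kasan_hook_alloc(void *object)
{
	return (void *)((uintptr_t)object | (0xabUL << TAG_SHIFT));
}

/* Stand-ins for the kmemleak hooks: record and look up by value. */
static void kmemleak_hook_alloc(void *object)
{
	recorded = object;
}

static void kmemleak_hook_free(void *object)
{
	assert(object == recorded);	/* lookup must match */
}

int main(void)
{
	char buf[16];
	void *p = buf;

	/* Correct order: KASAN first, so kmemleak records the tagged pointer. */
	p = kasan_hook_alloc(p);
	kmemleak_hook_alloc(p);

	/* On free, the pointer still carries the tag and the lookup matches. */
	kmemleak_hook_free(p);
	return 0;
}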

Reported-by: Qian Cai <cai@lca.pw>
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/slab.h        | 6 ++----
 mm/slab_common.c | 2 +-
 mm/slub.c        | 3 ++-
 3 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index 4190c24ef0e9..638ea1b25d39 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -437,11 +437,9 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags,
 
 	flags &= gfp_allowed_mask;
 	for (i = 0; i < size; i++) {
-		void *object = p[i];
-
-		kmemleak_alloc_recursive(object, s->object_size, 1,
+		p[i] = kasan_slab_alloc(s, p[i], flags);
+		kmemleak_alloc_recursive(p[i], s->object_size, 1,
 					 s->flags, flags);
-		p[i] = kasan_slab_alloc(s, object, flags);
 	}
 
 	if (memcg_kmem_enabled())
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 81732d05e74a..fe524c8d0246 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1228,8 +1228,8 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
 	flags |= __GFP_COMP;
 	page = alloc_pages(flags, order);
 	ret = page ? page_address(page) : NULL;
-	kmemleak_alloc(ret, size, 1, flags);
 	ret = kasan_kmalloc_large(ret, size, flags);
+	kmemleak_alloc(ret, size, 1, flags);
 	return ret;
 }
 EXPORT_SYMBOL(kmalloc_order);
diff --git a/mm/slub.c b/mm/slub.c
index 1e3d0ec4e200..4a3d7686902f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1374,8 +1374,9 @@ static inline void dec_slabs_node(struct kmem_cache *s, int node,
  */
 static inline void *kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
 {
+	ptr = kasan_kmalloc_large(ptr, size, flags);
 	kmemleak_alloc(ptr, size, 1, flags);
-	return kasan_kmalloc_large(ptr, size, flags);
+	return ptr;
 }
 
 static __always_inline void kfree_hook(void *x)
-- 
2.20.1.791.gb4d0f1c61a-goog


* [PATCH v2 3/5] kmemleak: account for tagged pointers when calculating pointer range
  2019-02-13 13:58 [PATCH v2 0/5] kasan: more tag based mode fixes Andrey Konovalov
  2019-02-13 13:58 ` [PATCH v2 1/5] kasan: fix assigning tags twice Andrey Konovalov
  2019-02-13 13:58 ` [PATCH v2 2/5] kasan, kmemleak: pass tagged pointers to kmemleak Andrey Konovalov
@ 2019-02-13 13:58 ` Andrey Konovalov
  2019-02-13 15:36   ` Qian Cai
  2019-02-15 14:07   ` Catalin Marinas
  2019-02-13 13:58 ` [PATCH v2 4/5] kasan, slub: move kasan_poison_slab hook before page_address Andrey Konovalov
                   ` (2 subsequent siblings)
  5 siblings, 2 replies; 10+ messages in thread
From: Andrey Konovalov @ 2019-02-13 13:58 UTC (permalink / raw)
  To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, kasan-dev, linux-mm, linux-kernel
  Cc: Qian Cai, Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov,
	Andrey Konovalov

kmemleak keeps two global variables, min_addr and max_addr, which store
the range of valid (i.e. encountered by kmemleak) pointer values and
which it later uses to speed up pointer lookups when scanning blocks.

With tagged pointers this range gets bigger than it needs to be. This
patch makes kmemleak untag pointers before saving them to min_addr and
max_addr and before performing a lookup.
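
A short sketch of why the untagging matters for the range check (the
8-bit tag in bits 56-63 matches the arm64 layout the series targets;
reset_tag() below is a simplified stand-in for kasan_reset_tag()):

#include <stdint.h>
#include <stdio.h>

#define TAG_SHIFT 56

/* Simplified stand-in for kasan_reset_tag(). */
static unsigned long reset_tag(unsigned long ptr)
{
	return ptr & ~(0xffUL << TAG_SHIFT);
}

int main(void)
{
	/* Two pointers into the same region, carrying different random tags. */
	unsigned long a = (0x12UL << TAG_SHIFT) | 0x100000UL;
	unsigned long b = (0xeeUL << TAG_SHIFT) | 0x101000UL;

	/* The raw values span a huge chunk of the address space... */
	printf("raw span:      %#lx\n", b - a);
	/* ...while the untagged values span a single page. */
	printf("untagged span: %#lx\n", reset_tag(b) - reset_tag(a));
	return 0;
}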

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kmemleak.c    | 10 +++++++---
 mm/slab.h        |  1 +
 mm/slab_common.c |  1 +
 mm/slub.c        |  1 +
 4 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index f9d9dc250428..707fa5579f66 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -574,6 +574,7 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
 	unsigned long flags;
 	struct kmemleak_object *object, *parent;
 	struct rb_node **link, *rb_parent;
+	unsigned long untagged_ptr;
 
 	object = kmem_cache_alloc(object_cache, gfp_kmemleak_mask(gfp));
 	if (!object) {
@@ -619,8 +620,9 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
 
 	write_lock_irqsave(&kmemleak_lock, flags);
 
-	min_addr = min(min_addr, ptr);
-	max_addr = max(max_addr, ptr + size);
+	untagged_ptr = (unsigned long)kasan_reset_tag((void *)ptr);
+	min_addr = min(min_addr, untagged_ptr);
+	max_addr = max(max_addr, untagged_ptr + size);
 	link = &object_tree_root.rb_node;
 	rb_parent = NULL;
 	while (*link) {
@@ -1333,6 +1335,7 @@ static void scan_block(void *_start, void *_end,
 	unsigned long *start = PTR_ALIGN(_start, BYTES_PER_POINTER);
 	unsigned long *end = _end - (BYTES_PER_POINTER - 1);
 	unsigned long flags;
+	unsigned long untagged_ptr;
 
 	read_lock_irqsave(&kmemleak_lock, flags);
 	for (ptr = start; ptr < end; ptr++) {
@@ -1347,7 +1350,8 @@ static void scan_block(void *_start, void *_end,
 		pointer = *ptr;
 		kasan_enable_current();
 
-		if (pointer < min_addr || pointer >= max_addr)
+		untagged_ptr = (unsigned long)kasan_reset_tag((void *)pointer);
+		if (untagged_ptr < min_addr || untagged_ptr >= max_addr)
 			continue;
 
 		/*
diff --git a/mm/slab.h b/mm/slab.h
index 638ea1b25d39..384105318779 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -438,6 +438,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags,
 	flags &= gfp_allowed_mask;
 	for (i = 0; i < size; i++) {
 		p[i] = kasan_slab_alloc(s, p[i], flags);
+		/* As p[i] might get tagged, call kmemleak hook after KASAN. */
 		kmemleak_alloc_recursive(p[i], s->object_size, 1,
 					 s->flags, flags);
 	}
diff --git a/mm/slab_common.c b/mm/slab_common.c
index fe524c8d0246..f9d89c1b5977 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1229,6 +1229,7 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
 	page = alloc_pages(flags, order);
 	ret = page ? page_address(page) : NULL;
 	ret = kasan_kmalloc_large(ret, size, flags);
+	/* As ret might get tagged, call kmemleak hook after KASAN. */
 	kmemleak_alloc(ret, size, 1, flags);
 	return ret;
 }
diff --git a/mm/slub.c b/mm/slub.c
index 4a3d7686902f..f5a451c49190 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1375,6 +1375,7 @@ static inline void dec_slabs_node(struct kmem_cache *s, int node,
 static inline void *kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
 {
 	ptr = kasan_kmalloc_large(ptr, size, flags);
+	/* As ptr might get tagged, call kmemleak hook after KASAN. */
 	kmemleak_alloc(ptr, size, 1, flags);
 	return ptr;
 }
-- 
2.20.1.791.gb4d0f1c61a-goog


* [PATCH v2 4/5] kasan, slub: move kasan_poison_slab hook before page_address
  2019-02-13 13:58 [PATCH v2 0/5] kasan: more tag based mode fixes Andrey Konovalov
                   ` (2 preceding siblings ...)
  2019-02-13 13:58 ` [PATCH v2 3/5] kmemleak: account for tagged pointers when calculating pointer range Andrey Konovalov
@ 2019-02-13 13:58 ` Andrey Konovalov
  2019-02-13 13:58 ` [PATCH v2 5/5] kasan, slub: fix conflicts with CONFIG_SLAB_FREELIST_HARDENED Andrey Konovalov
  2019-02-13 20:41 ` [PATCH v2 0/5] kasan: more tag based mode fixes Andrew Morton
  5 siblings, 0 replies; 10+ messages in thread
From: Andrey Konovalov @ 2019-02-13 13:58 UTC (permalink / raw)
  To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, kasan-dev, linux-mm, linux-kernel
  Cc: Qian Cai, Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov,
	Andrey Konovalov

With tag based KASAN, page_address() looks at the page flags to see
whether the resulting pointer needs to have a tag set. Since we don't
want to set a tag when page_address() is called on SLAB pages, we call
page_kasan_tag_reset() in kasan_poison_slab(). However, in
allocate_slab() page_address() is called before kasan_poison_slab().
Fix it by changing the order.
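
A toy model of the dependency (page_address(), kasan_poison_slab() and
the tag-in-page-flags mechanism are all simplified here, not the real
implementations): page_address() bakes the page's current KASAN tag into
the pointer it returns, and kasan_poison_slab() resets that tag, so
calling page_address() first caches a pointer with a stale tag.

#include <stdint.h>
#include <stdio.h>

#define TAG_SHIFT 56

/* Toy struct page carrying a KASAN tag, as the real page flags do. */
struct page {
	uint8_t kasan_tag;
	uintptr_t base;
};

/* Toy page_address(): bakes the page's current tag into the pointer. */
static void *page_address(const struct page *page)
{
	return (void *)(page->base |
			((uintptr_t)page->kasan_tag << TAG_SHIFT));
}

/* Toy kasan_poison_slab(): resets the tag to the 0xff match-all value,
 * as page_kasan_tag_reset() does in the real hook. */
static void kasan_poison_slab(struct page *page)
{
	page->kasan_tag = 0xff;
}

int main(void)
{
	struct page page = { .kasan_tag = 0x2a, .base = 0x100000UL };

	void *early = page_address(&page);	/* stale tag 0x2a baked in */
	kasan_poison_slab(&page);
	void *late = page_address(&page);	/* reset tag, as intended */

	printf("before hook: %p\nafter hook:  %p\n", early, late);
	return 0;
}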

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/slub.c | 19 +++++++++++++++----
 1 file changed, 15 insertions(+), 4 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index f5a451c49190..a7e7c7f719f9 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1075,6 +1075,16 @@ static void setup_object_debug(struct kmem_cache *s, struct page *page,
 	init_tracking(s, object);
 }
 
+static void setup_page_debug(struct kmem_cache *s, void *addr, int order)
+{
+	if (!(s->flags & SLAB_POISON))
+		return;
+
+	metadata_access_enable();
+	memset(addr, POISON_INUSE, PAGE_SIZE << order);
+	metadata_access_disable();
+}
+
 static inline int alloc_consistency_checks(struct kmem_cache *s,
 					struct page *page,
 					void *object, unsigned long addr)
@@ -1330,6 +1340,8 @@ slab_flags_t kmem_cache_flags(unsigned int object_size,
 #else /* !CONFIG_SLUB_DEBUG */
 static inline void setup_object_debug(struct kmem_cache *s,
 			struct page *page, void *object) {}
+static inline void setup_page_debug(struct kmem_cache *s,
+			void *addr, int order) {}
 
 static inline int alloc_debug_processing(struct kmem_cache *s,
 	struct page *page, void *object, unsigned long addr) { return 0; }
@@ -1643,12 +1655,11 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 	if (page_is_pfmemalloc(page))
 		SetPageSlabPfmemalloc(page);
 
-	start = page_address(page);
+	kasan_poison_slab(page);
 
-	if (unlikely(s->flags & SLAB_POISON))
-		memset(start, POISON_INUSE, PAGE_SIZE << order);
+	start = page_address(page);
 
-	kasan_poison_slab(page);
+	setup_page_debug(s, start, order);
 
 	shuffle = shuffle_freelist(s, page);
 
-- 
2.20.1.791.gb4d0f1c61a-goog


* [PATCH v2 5/5] kasan, slub: fix conflicts with CONFIG_SLAB_FREELIST_HARDENED
  2019-02-13 13:58 [PATCH v2 0/5] kasan: more tag based mode fixes Andrey Konovalov
                   ` (3 preceding siblings ...)
  2019-02-13 13:58 ` [PATCH v2 4/5] kasan, slub: move kasan_poison_slab hook before page_address Andrey Konovalov
@ 2019-02-13 13:58 ` Andrey Konovalov
  2019-02-13 20:41 ` [PATCH v2 0/5] kasan: more tag based mode fixes Andrew Morton
  5 siblings, 0 replies; 10+ messages in thread
From: Andrey Konovalov @ 2019-02-13 13:58 UTC (permalink / raw)
  To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, kasan-dev, linux-mm, linux-kernel
  Cc: Qian Cai, Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov,
	Andrey Konovalov

CONFIG_SLAB_FREELIST_HARDENED hashes the freelist pointer with the
address of the object where the pointer gets stored. With tag based
KASAN we don't account for that when building the freelist, as we call
set_freepointer() with the first argument untagged. This patch changes
the code to properly propagate tags throughout the loop.
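
A sketch of why the tag matters here (freelist_ptr() below is simplified
from the CONFIG_SLAB_FREELIST_HARDENED encoding in mm/slub.c): the
stored pointer is XORed with the address it is stored at, so encoding
through a tagged address and decoding through an untagged one, or vice
versa, yields a corrupted freelist pointer.

#include <assert.h>
#include <stdint.h>

#define TAG_SHIFT 56

/* Simplified from mm/slub.c: XOR the pointer with a per-cache random
 * value and with the address where the pointer is stored. */
static uintptr_t freelist_ptr(uintptr_t ptr, uintptr_t random,
			      uintptr_t ptr_addr)
{
	return ptr ^ random ^ ptr_addr;
}

int main(void)
{
	uintptr_t random = 0xdeadbeefUL;
	uintptr_t tagged = (0x5cUL << TAG_SHIFT) | 0x100000UL;
	uintptr_t untagged = tagged & ~(0xffUL << TAG_SHIFT);
	uintptr_t next = 0x101000UL;

	/* Encode and decode via the same (tagged) address: round-trips. */
	uintptr_t stored = freelist_ptr(next, random, tagged);
	assert(freelist_ptr(stored, random, tagged) == next);

	/* Decode via the untagged address: the tag bits leak into the
	 * result, corrupting the decoded pointer. */
	assert(freelist_ptr(stored, random, untagged) != next);
	return 0;
}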

Reported-by: Qian Cai <cai@lca.pw>
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/slub.c | 20 +++++++-------------
 1 file changed, 7 insertions(+), 13 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index a7e7c7f719f9..80da3a40b74d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -303,11 +303,6 @@ static inline void set_freepointer(struct kmem_cache *s, void *object, void *fp)
 		__p < (__addr) + (__objects) * (__s)->size; \
 		__p += (__s)->size)
 
-#define for_each_object_idx(__p, __idx, __s, __addr, __objects) \
-	for (__p = fixup_red_left(__s, __addr), __idx = 1; \
-		__idx <= __objects; \
-		__p += (__s)->size, __idx++)
-
 /* Determine object index from a given position */
 static inline unsigned int slab_index(void *p, struct kmem_cache *s, void *addr)
 {
@@ -1664,17 +1659,16 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 	shuffle = shuffle_freelist(s, page);
 
 	if (!shuffle) {
-		for_each_object_idx(p, idx, s, start, page->objects) {
-			if (likely(idx < page->objects)) {
-				next = p + s->size;
-				next = setup_object(s, page, next);
-				set_freepointer(s, p, next);
-			} else
-				set_freepointer(s, p, NULL);
-		}
 		start = fixup_red_left(s, start);
 		start = setup_object(s, page, start);
 		page->freelist = start;
+		for (idx = 0, p = start; idx < page->objects - 1; idx++) {
+			next = p + s->size;
+			next = setup_object(s, page, next);
+			set_freepointer(s, p, next);
+			p = next;
+		}
+		set_freepointer(s, p, NULL);
 	}
 
 	page->inuse = page->objects;
-- 
2.20.1.791.gb4d0f1c61a-goog


* Re: [PATCH v2 3/5] kmemleak: account for tagged pointers when calculating pointer range
  2019-02-13 13:58 ` [PATCH v2 3/5] kmemleak: account for tagged pointers when calculating pointer range Andrey Konovalov
@ 2019-02-13 15:36   ` Qian Cai
  2019-02-15 14:07   ` Catalin Marinas
  1 sibling, 0 replies; 10+ messages in thread
From: Qian Cai @ 2019-02-13 15:36 UTC (permalink / raw)
  To: Andrey Konovalov, Andrey Ryabinin, Alexander Potapenko,
	Dmitry Vyukov, Catalin Marinas, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, kasan-dev, linux-mm,
	linux-kernel
  Cc: Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov

On Wed, 2019-02-13 at 14:58 +0100, Andrey Konovalov wrote:
> kmemleak keeps two global variables, min_addr and max_addr, which store
> the range of valid (i.e. encountered by kmemleak) pointer values and
> which it later uses to speed up pointer lookups when scanning blocks.
> 
> With tagged pointers this range gets bigger than it needs to be. This
> patch makes kmemleak untag pointers before saving them to min_addr and
> max_addr and before performing a lookup.
> 
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>

Tested-by: Qian Cai <cai@lca.pw>

* Re: [PATCH v2 0/5] kasan: more tag based mode fixes
  2019-02-13 13:58 [PATCH v2 0/5] kasan: more tag based mode fixes Andrey Konovalov
                   ` (4 preceding siblings ...)
  2019-02-13 13:58 ` [PATCH v2 5/5] kasan, slub: fix conflicts with CONFIG_SLAB_FREELIST_HARDENED Andrey Konovalov
@ 2019-02-13 20:41 ` Andrew Morton
  2019-02-13 21:28   ` Andrey Konovalov
  5 siblings, 1 reply; 10+ messages in thread
From: Andrew Morton @ 2019-02-13 20:41 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, kasan-dev, linux-mm, linux-kernel, Qian Cai,
	Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov

On Wed, 13 Feb 2019 14:58:25 +0100 Andrey Konovalov <andreyknvl@google.com> wrote:

> Changes in v2:
> - Add comments about kmemleak vs KASAN hooks order.

I assume this refers to Vincenzo's review of "kasan, kmemleak: pass
tagged pointers to kmemleak".  But v2 of that patch is unchanged.

> - Fix compilation error when CONFIG_SLUB_DEBUG is not defined.

* Re: [PATCH v2 0/5] kasan: more tag based mode fixes
  2019-02-13 20:41 ` [PATCH v2 0/5] kasan: more tag based mode fixes Andrew Morton
@ 2019-02-13 21:28   ` Andrey Konovalov
  0 siblings, 0 replies; 10+ messages in thread
From: Andrey Konovalov @ 2019-02-13 21:28 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, kasan-dev, Linux Memory Management List, LKML,
	Qian Cai, Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov

On Wed, Feb 13, 2019 at 9:42 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Wed, 13 Feb 2019 14:58:25 +0100 Andrey Konovalov <andreyknvl@google.com> wrote:
>
> > Changes in v2:
> > - Add comments about kmemleak vs KASAN hooks order.
>
> I assume this refers to Vincenzo's review of "kasan, kmemleak: pass
> tagged pointers to kmemleak".  But v2 of that patch is unchanged.

I've accidentally squashed this change into commit #3 instead of #2 :(

>
> > - Fix compilation error when CONFIG_SLUB_DEBUG is not defined.

* Re: [PATCH v2 3/5] kmemleak: account for tagged pointers when calculating pointer range
  2019-02-13 13:58 ` [PATCH v2 3/5] kmemleak: account for tagged pointers when calculating pointer range Andrey Konovalov
  2019-02-13 15:36   ` Qian Cai
@ 2019-02-15 14:07   ` Catalin Marinas
  1 sibling, 0 replies; 10+ messages in thread
From: Catalin Marinas @ 2019-02-15 14:07 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, kasan-dev, linux-mm, linux-kernel, Qian Cai,
	Vincenzo Frascino, Kostya Serebryany, Evgeniy Stepanov

On Wed, Feb 13, 2019 at 02:58:28PM +0100, Andrey Konovalov wrote:
> kmemleak keeps two global variables, min_addr and max_addr, which store
> the range of valid (i.e. encountered by kmemleak) pointer values and
> which it later uses to speed up pointer lookups when scanning blocks.
> 
> With tagged pointers this range gets bigger than it needs to be. This
> patch makes kmemleak untag pointers before saving them to min_addr and
> max_addr and before performing a lookup.
> 
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>

I reviewed the old series. This patch also looks fine:

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
