* [PATCH v2 0/3] kasan: tag-based mode fixes
@ 2019-01-02 17:36 Andrey Konovalov
  2019-01-02 17:36 ` [PATCH v2 1/3] kasan, arm64: use ARCH_SLAB_MINALIGN instead of manual aligning Andrey Konovalov
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Andrey Konovalov @ 2019-01-02 17:36 UTC (permalink / raw)
  To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton,
	Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin,
	Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence,
	Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov,
	Greg Kroah-Hartman, Kate Stewart, Mike Rapoport,
	Vincenzo Frascino, kasan-dev, linux-doc, linux-kernel,
	linux-arm-kernel, linux-sparse, linux-mm, linux-kbuild
  Cc: Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Jann Horn,
	Mark Brand, Chintan Pandya, Vishwath Mohan, Andrey Konovalov

Hi Andrew,

This patchset includes an updated "kasan, arm64: use ARCH_SLAB_MINALIGN
instead of manual aligning" patch and fixes for two more issues that
were uncovered while testing with a variety of different config options
enabled.

Thanks!

Andrey Konovalov (3):
  kasan, arm64: use ARCH_SLAB_MINALIGN instead of manual aligning
  kasan: make tag based mode work with CONFIG_HARDENED_USERCOPY
  kasan: fix krealloc handling for tag-based mode

 arch/arm64/include/asm/kasan.h |  4 ++++
 include/linux/kasan.h          | 14 +++++---------
 include/linux/slab.h           |  5 +++--
 mm/kasan/common.c              | 22 ++++++++++++----------
 mm/slab.c                      |  8 ++++----
 mm/slab_common.c               |  2 +-
 mm/slub.c                      | 12 +++++++-----
 7 files changed, 36 insertions(+), 31 deletions(-)

-- 
2.20.1.415.g653613c723-goog



* [PATCH v2 1/3] kasan, arm64: use ARCH_SLAB_MINALIGN instead of manual aligning
  2019-01-02 17:36 [PATCH v2 0/3] kasan: tag-based mode fixes Andrey Konovalov
@ 2019-01-02 17:36 ` Andrey Konovalov
  2019-01-02 20:14   ` Andrew Morton
  2019-01-02 17:36 ` [PATCH v2 2/3] kasan: make tag based mode work with CONFIG_HARDENED_USERCOPY Andrey Konovalov
  2019-01-02 17:36 ` [PATCH v2 3/3] kasan: fix krealloc handling for tag-based mode Andrey Konovalov
  2 siblings, 1 reply; 8+ messages in thread
From: Andrey Konovalov @ 2019-01-02 17:36 UTC (permalink / raw)
  To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton,
	Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin,
	Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence,
	Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov,
	Greg Kroah-Hartman, Kate Stewart, Mike Rapoport,
	Vincenzo Frascino, kasan-dev, linux-doc, linux-kernel,
	linux-arm-kernel, linux-sparse, linux-mm, linux-kbuild
  Cc: Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Jann Horn,
	Mark Brand, Chintan Pandya, Vishwath Mohan, Andrey Konovalov

Instead of changing cache->align to be aligned to KASAN_SHADOW_SCALE_SIZE
in kasan_cache_create() we can reuse the ARCH_SLAB_MINALIGN macro.
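
For reference, a minimal userspace sketch of why the two approaches are
equivalent (KASAN_SHADOW_SCALE_SHIFT == 4 is assumed here, the arm64
value under CONFIG_KASAN_SW_TAGS; the slab allocator already rounds
cache->align up to ARCH_SLAB_MINALIGN when a cache is created):

#include <stdio.h>

#define KASAN_SHADOW_SCALE_SHIFT 4	/* assumed arm64 SW-tags value */
#define KASAN_SHADOW_SCALE_SIZE	(1UL << KASAN_SHADOW_SCALE_SHIFT)
#define ARCH_SLAB_MINALIGN	(1UL << KASAN_SHADOW_SCALE_SHIFT)

/* What the removed kasan_cache_create() line computed. */
static unsigned long round_up_pow2(unsigned long x, unsigned long align)
{
	return (x + align - 1) & ~(align - 1);
}

int main(void)
{
	unsigned long align = 8;	/* a cache asking for 8-byte alignment */

	/* Old path: KASAN bumped cache->align itself. */
	printf("old: %lu\n", round_up_pow2(align, KASAN_SHADOW_SCALE_SIZE));
	/* New path: the slab allocator honors ARCH_SLAB_MINALIGN instead. */
	printf("new: %lu\n", round_up_pow2(align, ARCH_SLAB_MINALIGN));
	return 0;	/* both print 16 */
}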

Suggested-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 arch/arm64/include/asm/kasan.h | 4 ++++
 include/linux/slab.h           | 1 +
 mm/kasan/common.c              | 2 --
 3 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h
index b52aacd2c526..ba26150d578d 100644
--- a/arch/arm64/include/asm/kasan.h
+++ b/arch/arm64/include/asm/kasan.h
@@ -36,6 +36,10 @@
 #define KASAN_SHADOW_OFFSET     (KASAN_SHADOW_END - (1ULL << \
 					(64 - KASAN_SHADOW_SCALE_SHIFT)))
 
+#ifdef CONFIG_KASAN_SW_TAGS
+#define ARCH_SLAB_MINALIGN	(1ULL << KASAN_SHADOW_SCALE_SHIFT)
+#endif
+
 void kasan_init(void);
 void kasan_copy_shadow(pgd_t *pgdir);
 asmlinkage void kasan_early_init(void);
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 11b45f7ae405..d87f913ab4e8 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -16,6 +16,7 @@
 #include <linux/overflow.h>
 #include <linux/types.h>
 #include <linux/workqueue.h>
+#include <linux/kasan.h>
 
 
 /*
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 03d5d1374ca7..44390392d4c9 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -298,8 +298,6 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
 		return;
 	}
 
-	cache->align = round_up(cache->align, KASAN_SHADOW_SCALE_SIZE);
-
 	*flags |= SLAB_KASAN;
 }
 
-- 
2.20.1.415.g653613c723-goog



* [PATCH v2 2/3] kasan: make tag based mode work with CONFIG_HARDENED_USERCOPY
  2019-01-02 17:36 [PATCH v2 0/3] kasan: tag-based mode fixes Andrey Konovalov
  2019-01-02 17:36 ` [PATCH v2 1/3] kasan, arm64: use ARCH_SLAB_MINALIGN instead of manual aligning Andrey Konovalov
@ 2019-01-02 17:36 ` Andrey Konovalov
  2019-01-02 17:36 ` [PATCH v2 3/3] kasan: fix krealloc handling for tag-based mode Andrey Konovalov
  2 siblings, 0 replies; 8+ messages in thread
From: Andrey Konovalov @ 2019-01-02 17:36 UTC (permalink / raw)
  To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton,
	Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin,
	Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence,
	Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov,
	Greg Kroah-Hartman, Kate Stewart, Mike Rapoport,
	Vincenzo Frascino, kasan-dev, linux-doc, linux-kernel,
	linux-arm-kernel, linux-sparse, linux-mm, linux-kbuild
  Cc: Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Jann Horn,
	Mark Brand, Chintan Pandya, Vishwath Mohan, Andrey Konovalov

With CONFIG_HARDENED_USERCOPY enabled, __check_heap_object() compares a
potentially tagged pointer with the non-tagged address of the page that
the pointer belongs to, and then subtracts that address from the
pointer, which leads to unexpected behavior.

Untag the pointer in __check_heap_object() before performing either of
these operations.
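
To illustrate the failure mode, here is a hedged userspace model (not
the kernel implementation; the tag lives in the pointer's top byte as
on arm64 with TBI, and kasan_reset_tag() is modeled as clearing that
byte, whereas the kernel restores the 0xff match-all tag):

#include <stdio.h>
#include <stdint.h>

#define TAG_SHIFT	56
#define TAG_MASK	(0xFFUL << TAG_SHIFT)

/* Model of kasan_reset_tag(): drop the tag from the top byte. */
static uintptr_t reset_tag(uintptr_t ptr)
{
	return ptr & ~TAG_MASK;
}

int main(void)
{
	char page[64];				/* stands in for a slab page */
	uintptr_t page_addr = (uintptr_t)page;	/* non-tagged address */
	uintptr_t ptr = (page_addr + 8) | (0xABUL << TAG_SHIFT);

	/* Subtracting a non-tagged address from a tagged pointer gives
	 * a nonsense offset, so the usercopy bounds check misfires... */
	printf("tagged offset:   %#lx\n", (unsigned long)(ptr - page_addr));
	/* ...but the offset is correct once the tag is reset first. */
	printf("untagged offset: %#lx\n",
	       (unsigned long)(reset_tag(ptr) - page_addr));
	return 0;
}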

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/slub.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/mm/slub.c b/mm/slub.c
index 36c0befeebd8..1e3d0ec4e200 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3846,6 +3846,8 @@ void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
 	unsigned int offset;
 	size_t object_size;
 
+	ptr = kasan_reset_tag(ptr);
+
 	/* Find object and usable object size. */
 	s = page->slab_cache;
 
-- 
2.20.1.415.g653613c723-goog



* [PATCH v2 3/3] kasan: fix krealloc handling for tag-based mode
  2019-01-02 17:36 [PATCH v2 0/3] kasan: tag-based mode fixes Andrey Konovalov
  2019-01-02 17:36 ` [PATCH v2 1/3] kasan, arm64: use ARCH_SLAB_MINALIGN instead of manual aligning Andrey Konovalov
  2019-01-02 17:36 ` [PATCH v2 2/3] kasan: make tag based mode work with CONFIG_HARDENED_USERCOPY Andrey Konovalov
@ 2019-01-02 17:36 ` Andrey Konovalov
  2019-01-02 22:14   ` kbuild test robot
  2 siblings, 1 reply; 8+ messages in thread
From: Andrey Konovalov @ 2019-01-02 17:36 UTC (permalink / raw)
  To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton,
	Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin,
	Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence,
	Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov,
	Greg Kroah-Hartman, Kate Stewart, Mike Rapoport,
	Vincenzo Frascino, kasan-dev, linux-doc, linux-kernel,
	linux-arm-kernel, linux-sparse, linux-mm, linux-kbuild
  Cc: Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Jann Horn,
	Mark Brand, Chintan Pandya, Vishwath Mohan, Andrey Konovalov

Right now tag-based KASAN can retag the memory that is reallocated via
krealloc and return a differently tagged pointer even if the same slab
object gets used and no reallocation technically happens.

There are a few issues with this approach. One is that krealloc callers
can't rely on comparing the return value with the passed argument to
check whether reallocation happened. Another is that if a caller knows
that no reallocation happened, it can access object memory through the
old pointer, which leads to false positives. See nf_ct_ext_add() for an
example.

Fix this by keeping the same tag if the memory doesn't actually get
reallocated during krealloc.
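
As a hedged illustration (a userspace model with hypothetical helper
names; nf_ct_ext_add() follows this general shape), this is the caller
pattern that the old retagging behavior breaks:

#include <stdio.h>
#include <stdint.h>

#define TAG_SHIFT	56
#define TAG_MASK	(0xFFUL << TAG_SHIFT)

/* Put a KASAN-style tag into a pointer's top byte. */
static void *retag(void *ptr, uint8_t tag)
{
	return (void *)(((uintptr_t)ptr & ~TAG_MASK) |
			((uintptr_t)tag << TAG_SHIFT));
}

int main(void)
{
	char object[32];			/* stands in for a slab object */
	void *old = retag(object, 0xAB);	/* pointer from kmalloc() */
	/* Old behavior: krealloc() reuses the same object but assigns a
	 * fresh tag, so the same address comes back as an unequal
	 * pointer; in the kernel, later accesses through 'old' would be
	 * reported as tag mismatches (false positives). */
	void *new = retag(old, 0xCD);

	printf("pointers equal: %s\n", old == new ? "yes" : "no"); /* no */
	return 0;
}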

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 include/linux/kasan.h | 14 +++++---------
 include/linux/slab.h  |  4 ++--
 mm/kasan/common.c     | 20 ++++++++++++--------
 mm/slab.c             |  8 ++++----
 mm/slab_common.c      |  2 +-
 mm/slub.c             | 10 +++++-----
 6 files changed, 29 insertions(+), 29 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index b40ea104dd36..7576fff90923 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -57,9 +57,8 @@ void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
 void kasan_kfree_large(void *ptr, unsigned long ip);
 void kasan_poison_kfree(void *ptr, unsigned long ip);
 void * __must_check kasan_kmalloc(struct kmem_cache *s, const void *object,
-					size_t size, gfp_t flags);
-void * __must_check kasan_krealloc(const void *object, size_t new_size,
-					gfp_t flags);
+				size_t size, gfp_t flags, bool krealloc);
+void kasan_krealloc(const void *object, size_t new_size, gfp_t flags);
 
 void * __must_check kasan_slab_alloc(struct kmem_cache *s, void *object,
 					gfp_t flags);
@@ -118,15 +117,12 @@ static inline void *kasan_kmalloc_large(void *ptr, size_t size, gfp_t flags)
 static inline void kasan_kfree_large(void *ptr, unsigned long ip) {}
 static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {}
 static inline void *kasan_kmalloc(struct kmem_cache *s, const void *object,
-				size_t size, gfp_t flags)
-{
-	return (void *)object;
-}
-static inline void *kasan_krealloc(const void *object, size_t new_size,
-				 gfp_t flags)
+				size_t size, gfp_t flags, bool krealloc)
 {
 	return (void *)object;
 }
+static inline void kasan_krealloc(const void *object, size_t new_size,
+				 gfp_t flags) {}
 
 static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
 				   gfp_t flags)
diff --git a/include/linux/slab.h b/include/linux/slab.h
index d87f913ab4e8..1cd168758c05 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -445,7 +445,7 @@ static __always_inline void *kmem_cache_alloc_trace(struct kmem_cache *s,
 {
 	void *ret = kmem_cache_alloc(s, flags);
 
-	ret = kasan_kmalloc(s, ret, size, flags);
+	ret = kasan_kmalloc(s, ret, size, flags, false);
 	return ret;
 }
 
@@ -456,7 +456,7 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
 {
 	void *ret = kmem_cache_alloc_node(s, gfpflags, node);
 
-	ret = kasan_kmalloc(s, ret, size, gfpflags);
+	ret = kasan_kmalloc(s, ret, size, gfpflags, false);
 	return ret;
 }
 #endif /* CONFIG_TRACING */
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 44390392d4c9..b6633ab86160 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -392,7 +392,7 @@ void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
 void * __must_check kasan_slab_alloc(struct kmem_cache *cache, void *object,
 					gfp_t flags)
 {
-	return kasan_kmalloc(cache, object, cache->object_size, flags);
+	return kasan_kmalloc(cache, object, cache->object_size, flags, false);
 }
 
 static inline bool shadow_invalid(u8 tag, s8 shadow_byte)
@@ -451,7 +451,7 @@ bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
 }
 
 void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object,
-					size_t size, gfp_t flags)
+				size_t size, gfp_t flags, bool krealloc)
 {
 	unsigned long redzone_start;
 	unsigned long redzone_end;
@@ -468,8 +468,12 @@ void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object,
 	redzone_end = round_up((unsigned long)object + cache->object_size,
 				KASAN_SHADOW_SCALE_SIZE);
 
-	if (IS_ENABLED(CONFIG_KASAN_SW_TAGS))
-		tag = assign_tag(cache, object, false);
+	if (IS_ENABLED(CONFIG_KASAN_SW_TAGS)) {
+		if (krealloc)
+			tag = get_tag(object);
+		else
+			tag = assign_tag(cache, object, false);
+	}
 
 	/* Tag is ignored in set_tag without CONFIG_KASAN_SW_TAGS */
 	kasan_unpoison_shadow(set_tag(object, tag), size);
@@ -508,19 +512,19 @@ void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
 	return (void *)ptr;
 }
 
-void * __must_check kasan_krealloc(const void *object, size_t size, gfp_t flags)
+void kasan_krealloc(const void *object, size_t size, gfp_t flags)
 {
 	struct page *page;
 
 	if (unlikely(object == ZERO_SIZE_PTR))
-		return (void *)object;
+		return;
 
 	page = virt_to_head_page(object);
 
 	if (unlikely(!PageSlab(page)))
-		return kasan_kmalloc_large(object, size, flags);
+		kasan_kmalloc_large(object, size, flags);
 	else
-		return kasan_kmalloc(page->slab_cache, object, size, flags);
+		kasan_kmalloc(page->slab_cache, object, size, flags, true);
 }
 
 void kasan_poison_kfree(void *ptr, unsigned long ip)
diff --git a/mm/slab.c b/mm/slab.c
index 73fe23e649c9..09b54386cf67 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3604,7 +3604,7 @@ kmem_cache_alloc_trace(struct kmem_cache *cachep, gfp_t flags, size_t size)
 
 	ret = slab_alloc(cachep, flags, _RET_IP_);
 
-	ret = kasan_kmalloc(cachep, ret, size, flags);
+	ret = kasan_kmalloc(cachep, ret, size, flags, false);
 	trace_kmalloc(_RET_IP_, ret,
 		      size, cachep->size, flags);
 	return ret;
@@ -3647,7 +3647,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep,
 
 	ret = slab_alloc_node(cachep, flags, nodeid, _RET_IP_);
 
-	ret = kasan_kmalloc(cachep, ret, size, flags);
+	ret = kasan_kmalloc(cachep, ret, size, flags, false);
 	trace_kmalloc_node(_RET_IP_, ret,
 			   size, cachep->size,
 			   flags, nodeid);
@@ -3668,7 +3668,7 @@ __do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller)
 	if (unlikely(ZERO_OR_NULL_PTR(cachep)))
 		return cachep;
 	ret = kmem_cache_alloc_node_trace(cachep, flags, node, size);
-	ret = kasan_kmalloc(cachep, ret, size, flags);
+	ret = kasan_kmalloc(cachep, ret, size, flags, false);
 
 	return ret;
 }
@@ -3706,7 +3706,7 @@ static __always_inline void *__do_kmalloc(size_t size, gfp_t flags,
 		return cachep;
 	ret = slab_alloc(cachep, flags, caller);
 
-	ret = kasan_kmalloc(cachep, ret, size, flags);
+	ret = kasan_kmalloc(cachep, ret, size, flags, false);
 	trace_kmalloc(caller, ret,
 		      size, cachep->size, flags);
 
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 81732d05e74a..b55c58178f83 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1507,7 +1507,7 @@ static __always_inline void *__do_krealloc(const void *p, size_t new_size,
 		ks = ksize(p);
 
 	if (ks >= new_size) {
-		p = kasan_krealloc((void *)p, new_size, flags);
+		kasan_krealloc((void *)p, new_size, flags);
 		return (void *)p;
 	}
 
diff --git a/mm/slub.c b/mm/slub.c
index 1e3d0ec4e200..20aa0547acbf 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2763,7 +2763,7 @@ void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 {
 	void *ret = slab_alloc(s, gfpflags, _RET_IP_);
 	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags);
-	ret = kasan_kmalloc(s, ret, size, gfpflags);
+	ret = kasan_kmalloc(s, ret, size, gfpflags, false);
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_trace);
@@ -2791,7 +2791,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 	trace_kmalloc_node(_RET_IP_, ret,
 			   size, s->size, gfpflags, node);
 
-	ret = kasan_kmalloc(s, ret, size, gfpflags);
+	ret = kasan_kmalloc(s, ret, size, gfpflags, false);
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
@@ -3364,7 +3364,7 @@ static void early_kmem_cache_node_alloc(int node)
 	init_tracking(kmem_cache_node, n);
 #endif
 	n = kasan_kmalloc(kmem_cache_node, n, sizeof(struct kmem_cache_node),
-		      GFP_KERNEL);
+		      GFP_KERNEL, false);
 	page->freelist = get_freepointer(kmem_cache_node, n);
 	page->inuse = 1;
 	page->frozen = 0;
@@ -3779,7 +3779,7 @@ void *__kmalloc(size_t size, gfp_t flags)
 
 	trace_kmalloc(_RET_IP_, ret, size, s->size, flags);
 
-	ret = kasan_kmalloc(s, ret, size, flags);
+	ret = kasan_kmalloc(s, ret, size, flags, false);
 
 	return ret;
 }
@@ -3823,7 +3823,7 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 
 	trace_kmalloc_node(_RET_IP_, ret, size, s->size, flags, node);
 
-	ret = kasan_kmalloc(s, ret, size, flags);
+	ret = kasan_kmalloc(s, ret, size, flags, false);
 
 	return ret;
 }
-- 
2.20.1.415.g653613c723-goog



* Re: [PATCH v2 1/3] kasan, arm64: use ARCH_SLAB_MINALIGN instead of manual aligning
  2019-01-02 17:36 ` [PATCH v2 1/3] kasan, arm64: use ARCH_SLAB_MINALIGN instead of manual aligning Andrey Konovalov
@ 2019-01-02 20:14   ` Andrew Morton
  2019-01-03 15:52     ` Will Deacon
  2019-01-03 18:46     ` Andrey Konovalov
  0 siblings, 2 replies; 8+ messages in thread
From: Andrew Morton @ 2019-01-02 20:14 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Will Deacon, Christoph Lameter, Mark Rutland,
	Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel,
	Eric W . Biederman, Ingo Molnar, Paul Lawrence,
	Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov,
	Greg Kroah-Hartman, Kate Stewart, Mike Rapoport,
	Vincenzo Frascino, kasan-dev, linux-doc, linux-kernel,
	linux-arm-kernel, linux-sparse, linux-mm, linux-kbuild,
	Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Jann Horn,
	Mark Brand, Chintan Pandya, Vishwath Mohan

On Wed,  2 Jan 2019 18:36:06 +0100 Andrey Konovalov <andreyknvl@google.com> wrote:

> Instead of changing cache->align to be aligned to KASAN_SHADOW_SCALE_SIZE
> in kasan_cache_create() we can reuse the ARCH_SLAB_MINALIGN macro.
> 
> ...
>
> --- a/arch/arm64/include/asm/kasan.h
> +++ b/arch/arm64/include/asm/kasan.h
> @@ -36,6 +36,10 @@
>  #define KASAN_SHADOW_OFFSET     (KASAN_SHADOW_END - (1ULL << \
>  					(64 - KASAN_SHADOW_SCALE_SHIFT)))
>  
> +#ifdef CONFIG_KASAN_SW_TAGS
> +#define ARCH_SLAB_MINALIGN	(1ULL << KASAN_SHADOW_SCALE_SHIFT)
> +#endif
> +
>  void kasan_init(void);
>  void kasan_copy_shadow(pgd_t *pgdir);
>  asmlinkage void kasan_early_init(void);
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index 11b45f7ae405..d87f913ab4e8 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -16,6 +16,7 @@
>  #include <linux/overflow.h>
>  #include <linux/types.h>
>  #include <linux/workqueue.h>
> +#include <linux/kasan.h>
>  

This still seems unadvisable.  Like other architectures, arm defines
ARCH_SLAB_MINALIGN in arch/arm/include/asm/cache.h. 
arch/arm64/include/asm/cache.h doesn't define ARCH_SLAB_MINALIGN
afaict.

If arch/arm64/include/asm/cache.h later gets a definition of
ARCH_SLAB_MINALIGN then we again face the risk that different .c files
will see different values of ARCH_SLAB_MINALIGN depending on which
headers they include.

So what to say about this?  The architecture's ARCH_SLAB_MINALIGN
should be defined in the architecture's cache.h, end of story.  Not in
slab.h, not in kasan.h.
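
For reference, the arrangement being asked for would presumably look
something like this (a sketch only; the cache.h placement is inferred
from the review, and leans on the generic fallback slab.h already has):

/* arch/arm64/include/asm/cache.h (assumed placement) */
#ifdef CONFIG_KASAN_SW_TAGS
#define ARCH_SLAB_MINALIGN	(1ULL << KASAN_SHADOW_SCALE_SHIFT)
#endif

/* include/linux/slab.h keeps its existing generic fallback: */
#ifndef ARCH_SLAB_MINALIGN
#define ARCH_SLAB_MINALIGN	__alignof__(unsigned long long)
#endif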




* Re: [PATCH v2 3/3] kasan: fix krealloc handling for tag-based mode
  2019-01-02 17:36 ` [PATCH v2 3/3] kasan: fix krealloc handling for tag-based mode Andrey Konovalov
@ 2019-01-02 22:14   ` kbuild test robot
  0 siblings, 0 replies; 8+ messages in thread
From: kbuild test robot @ 2019-01-02 22:14 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: kbuild-all, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton,
	Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin,
	Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence,
	Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov,
	Greg Kroah-Hartman, Kate Stewart, Mike Rapoport,
	Vincenzo Frascino, kasan-dev, linux-doc, linux-kernel,
	linux-arm-kernel, linux-sparse, linux-mm, linux-kbuild,
	Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Jann Horn,
	Mark Brand, Chintan Pandya, Vishwath Mohan, Andrey Konovalov


Hi Andrey,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on linus/master]
[also build test WARNING on next-20190102]
[cannot apply to v4.20]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Andrey-Konovalov/kasan-tag-based-mode-fixes/20190103-050707
config: x86_64-randconfig-x016-201900 (attached as .config)
compiler: gcc-7 (Debian 7.3.0-1) 7.3.0
reproduce:
        # save the attached .config to linux build tree
        make ARCH=x86_64 

All warnings (new ones prefixed by >>):

   mm/kasan/common.c: In function 'kasan_krealloc':
>> mm/kasan/common.c:525:3: warning: ignoring return value of 'kasan_kmalloc_large', declared with attribute warn_unused_result [-Wunused-result]
      kasan_kmalloc_large(object, size, flags);
      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> mm/kasan/common.c:527:3: warning: ignoring return value of 'kasan_kmalloc', declared with attribute warn_unused_result [-Wunused-result]
      kasan_kmalloc(page->slab_cache, object, size, flags, true);
      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

vim +/kasan_kmalloc_large +525 mm/kasan/common.c

   514	
   515	void kasan_krealloc(const void *object, size_t size, gfp_t flags)
   516	{
   517		struct page *page;
   518	
   519		if (unlikely(object == ZERO_SIZE_PTR))
   520			return;
   521	
   522		page = virt_to_head_page(object);
   523	
   524		if (unlikely(!PageSlab(page)))
 > 525			kasan_kmalloc_large(object, size, flags);
   526		else
 > 527			kasan_kmalloc(page->slab_cache, object, size, flags, true);
   528	}
   529	
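
One possible way to address this (illustrative only, not necessarily
how a v3 should fix it) is to capture the __must_check return values,
which both callees use to hand back a possibly retagged pointer, and
discard them explicitly:

void kasan_krealloc(const void *object, size_t size, gfp_t flags)
{
	struct page *page;
	void *ret;	/* consumed only to satisfy __must_check */

	if (unlikely(object == ZERO_SIZE_PTR))
		return;

	page = virt_to_head_page(object);

	if (unlikely(!PageSlab(page)))
		ret = kasan_kmalloc_large(object, size, flags);
	else
		ret = kasan_kmalloc(page->slab_cache, object, size, flags, true);
	(void)ret;
}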

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 29147 bytes --]


* Re: [PATCH v2 1/3] kasan, arm64: use ARCH_SLAB_MINALIGN instead of manual aligning
  2019-01-02 20:14   ` Andrew Morton
@ 2019-01-03 15:52     ` Will Deacon
  2019-01-03 18:46     ` Andrey Konovalov
  1 sibling, 0 replies; 8+ messages in thread
From: Will Deacon @ 2019-01-03 15:52 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Konovalov, Andrey Ryabinin, Alexander Potapenko,
	Dmitry Vyukov, Catalin Marinas, Christoph Lameter, Mark Rutland,
	Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel,
	Eric W . Biederman, Ingo Molnar, Paul Lawrence,
	Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov,
	Greg Kroah-Hartman, Kate Stewart, Mike Rapoport,
	Vincenzo Frascino, kasan-dev, linux-doc, linux-kernel,
	linux-arm-kernel, linux-sparse, linux-mm, linux-kbuild,
	Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Jann Horn,
	Mark Brand, Chintan Pandya, Vishwath Mohan

On Wed, Jan 02, 2019 at 12:14:36PM -0800, Andrew Morton wrote:
> On Wed,  2 Jan 2019 18:36:06 +0100 Andrey Konovalov <andreyknvl@google.com> wrote:
> 
> > Instead of changing cache->align to be aligned to KASAN_SHADOW_SCALE_SIZE
> > in kasan_cache_create() we can reuse the ARCH_SLAB_MINALIGN macro.
> > 
> > ...
> >
> > --- a/arch/arm64/include/asm/kasan.h
> > +++ b/arch/arm64/include/asm/kasan.h
> > @@ -36,6 +36,10 @@
> >  #define KASAN_SHADOW_OFFSET     (KASAN_SHADOW_END - (1ULL << \
> >  					(64 - KASAN_SHADOW_SCALE_SHIFT)))
> >  
> > +#ifdef CONFIG_KASAN_SW_TAGS
> > +#define ARCH_SLAB_MINALIGN	(1ULL << KASAN_SHADOW_SCALE_SHIFT)
> > +#endif
> > +
> >  void kasan_init(void);
> >  void kasan_copy_shadow(pgd_t *pgdir);
> >  asmlinkage void kasan_early_init(void);
> > diff --git a/include/linux/slab.h b/include/linux/slab.h
> > index 11b45f7ae405..d87f913ab4e8 100644
> > --- a/include/linux/slab.h
> > +++ b/include/linux/slab.h
> > @@ -16,6 +16,7 @@
> >  #include <linux/overflow.h>
> >  #include <linux/types.h>
> >  #include <linux/workqueue.h>
> > +#include <linux/kasan.h>
> >  
> 
> This still seems unadvisable.  Like other architectures, arm defines
> ARCH_SLAB_MINALIGN in arch/arm/include/asm/cache.h. 
> arch/arm64/include/asm/cache.h doesn't define ARCH_SLAB_MINALIGN
> afaict.
> 
> If arch/arm64/include/asm/cache.h later gets a definition of
> ARCH_SLAB_MINALIGN then we again face the risk that different .c files
> will see different values of ARCH_SLAB_MINALIGN depending on which
> headers they include.
> 
> So what to say about this?  The architecture's ARCH_SLAB_MINALIGN
> should be defined in the architecture's cache.h, end of story.  Not in
> slab.h, not in kasan.h.

Agreed. Also, as far as I can tell, this patch isn't actually a fix
(unlike the other two in this series), so it should be harmless to drop
it for now.

Will


* Re: [PATCH v2 1/3] kasan, arm64: use ARCH_SLAB_MINALIGN instead of manual aligning
  2019-01-02 20:14   ` Andrew Morton
  2019-01-03 15:52     ` Will Deacon
@ 2019-01-03 18:46     ` Andrey Konovalov
  1 sibling, 0 replies; 8+ messages in thread
From: Andrey Konovalov @ 2019-01-03 18:46 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Will Deacon, Christoph Lameter, Mark Rutland,
	Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel,
	Eric W . Biederman, Ingo Molnar, Paul Lawrence,
	Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov,
	Greg Kroah-Hartman, Kate Stewart, Mike Rapoport,
	Vincenzo Frascino, kasan-dev, open list:DOCUMENTATION, LKML,
	Linux ARM, linux-sparse, Linux Memory Management List,
	Linux Kbuild mailing list, Kostya Serebryany, Evgeniy Stepanov,
	Lee Smith, Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan,
	Jann Horn, Mark Brand, Chintan Pandya, Vishwath Mohan

On Wed, Jan 2, 2019 at 9:14 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Wed,  2 Jan 2019 18:36:06 +0100 Andrey Konovalov <andreyknvl@google.com> wrote:
>
> > Instead of changing cache->align to be aligned to KASAN_SHADOW_SCALE_SIZE
> > in kasan_cache_create() we can reuse the ARCH_SLAB_MINALIGN macro.
> >
> > ...
> >
> > --- a/arch/arm64/include/asm/kasan.h
> > +++ b/arch/arm64/include/asm/kasan.h
> > @@ -36,6 +36,10 @@
> >  #define KASAN_SHADOW_OFFSET     (KASAN_SHADOW_END - (1ULL << \
> >                                       (64 - KASAN_SHADOW_SCALE_SHIFT)))
> >
> > +#ifdef CONFIG_KASAN_SW_TAGS
> > +#define ARCH_SLAB_MINALIGN   (1ULL << KASAN_SHADOW_SCALE_SHIFT)
> > +#endif
> > +
> >  void kasan_init(void);
> >  void kasan_copy_shadow(pgd_t *pgdir);
> >  asmlinkage void kasan_early_init(void);
> > diff --git a/include/linux/slab.h b/include/linux/slab.h
> > index 11b45f7ae405..d87f913ab4e8 100644
> > --- a/include/linux/slab.h
> > +++ b/include/linux/slab.h
> > @@ -16,6 +16,7 @@
> >  #include <linux/overflow.h>
> >  #include <linux/types.h>
> >  #include <linux/workqueue.h>
> > +#include <linux/kasan.h>
> >
>
> This still seems unadvisable.  Like other architectures, arm defines
> ARCH_SLAB_MINALIGN in arch/arm/include/asm/cache.h.
> arch/arm64/include/asm/cache.h doesn't define ARCH_SLAB_MINALIGN
> afaict.
>
> If arch/arm64/include/asm/cache.h later gets a definition of
> ARCH_SLAB_MINALIGN then we again face the risk that different .c files
> will see different values of ARCH_SLAB_MINALIGN depending on which
> headers they include.
>
> So what to say about this?  The architecture's ARCH_SLAB_MINALIGN
> should be defined in the architecture's cache.h, end of story.  Not in
> slab.h, not in kasan.h.

Done in v3, thanks!

