Linux-ARM-Kernel Archive on lore.kernel.org
* [PATCH v3 0/3] kasan: tag-based mode fixes
@ 2019-01-03 18:45 Andrey Konovalov
  2019-01-03 18:45 ` [PATCH v3 1/3] kasan, arm64: use ARCH_SLAB_MINALIGN instead of manual aligning Andrey Konovalov
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Andrey Konovalov @ 2019-01-03 18:45 UTC (permalink / raw)
  To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton,
	Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin,
	Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence,
	Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov,
	Greg Kroah-Hartman, Kate Stewart, Mike Rapoport,
	Vincenzo Frascino, kasan-dev, linux-doc, linux-kernel,
	linux-arm-kernel, linux-sparse, linux-mm, linux-kbuild
  Cc: Vishwath Mohan, Chintan Pandya, Jacob Bramley, Jann Horn,
	Ruben Ayrapetyan, Andrey Konovalov, Lee Smith, Kostya Serebryany,
	Mark Brand, Ramana Radhakrishnan, Evgeniy Stepanov

Changes in v3:
- Fixed krealloc tag assigning without changing kasan_kmalloc hook and
  added more comments to the assign_tag function while at it.
- Moved ARCH_SLAB_MINALIGN definition to arch/arm64/include/asm/cache.h.

Changes in v2:
- Added "kasan: make tag based mode work with CONFIG_HARDENED_USERCOPY"
  patch.
- Added "kasan: fix krealloc handling for tag-based mode" patch.

Andrey Konovalov (3):
  kasan, arm64: use ARCH_SLAB_MINALIGN instead of manual aligning
  kasan: make tag based mode work with CONFIG_HARDENED_USERCOPY
  kasan: fix krealloc handling for tag-based mode

 arch/arm64/include/asm/cache.h |  6 ++++
 mm/kasan/common.c              | 65 ++++++++++++++++++++++------------
 mm/slub.c                      |  2 ++
 3 files changed, 51 insertions(+), 22 deletions(-)

-- 
2.20.1.415.g653613c723-goog


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel


* [PATCH v3 1/3] kasan, arm64: use ARCH_SLAB_MINALIGN instead of manual aligning
  2019-01-03 18:45 [PATCH v3 0/3] kasan: tag-based mode fixes Andrey Konovalov
@ 2019-01-03 18:45 ` Andrey Konovalov
  2019-01-09 10:10   ` Vincenzo Frascino
  2019-01-03 18:45 ` [PATCH v3 2/3] kasan: make tag based mode work with CONFIG_HARDENED_USERCOPY Andrey Konovalov
  2019-01-03 18:45 ` [PATCH v3 3/3] kasan: fix krealloc handling for tag-based mode Andrey Konovalov
  2 siblings, 1 reply; 6+ messages in thread
From: Andrey Konovalov @ 2019-01-03 18:45 UTC (permalink / raw)
  To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton,
	Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin,
	Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence,
	Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov,
	Greg Kroah-Hartman, Kate Stewart, Mike Rapoport,
	Vincenzo Frascino, kasan-dev, linux-doc, linux-kernel,
	linux-arm-kernel, linux-sparse, linux-mm, linux-kbuild
  Cc: Vishwath Mohan, Chintan Pandya, Jacob Bramley, Jann Horn,
	Ruben Ayrapetyan, Andrey Konovalov, Lee Smith, Kostya Serebryany,
	Mark Brand, Ramana Radhakrishnan, Evgeniy Stepanov

Instead of changing cache->align to be aligned to KASAN_SHADOW_SCALE_SIZE
in kasan_cache_create() we can reuse the ARCH_SLAB_MINALIGN macro.

Suggested-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 arch/arm64/include/asm/cache.h | 6 ++++++
 mm/kasan/common.c              | 2 --
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/cache.h b/arch/arm64/include/asm/cache.h
index 13dd42c3ad4e..eb43e09c1980 100644
--- a/arch/arm64/include/asm/cache.h
+++ b/arch/arm64/include/asm/cache.h
@@ -58,6 +58,12 @@
  */
 #define ARCH_DMA_MINALIGN	(128)
 
+#ifdef CONFIG_KASAN_SW_TAGS
+#define ARCH_SLAB_MINALIGN	(1ULL << KASAN_SHADOW_SCALE_SHIFT)
+#else
+#define ARCH_SLAB_MINALIGN	__alignof__(unsigned long long)
+#endif
+
 #ifndef __ASSEMBLY__
 
 #include <linux/bitops.h>
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 03d5d1374ca7..44390392d4c9 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -298,8 +298,6 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
 		return;
 	}
 
-	cache->align = round_up(cache->align, KASAN_SHADOW_SCALE_SIZE);
-
 	*flags |= SLAB_KASAN;
 }
 
-- 
2.20.1.415.g653613c723-goog




* [PATCH v3 2/3] kasan: make tag based mode work with CONFIG_HARDENED_USERCOPY
  2019-01-03 18:45 [PATCH v3 0/3] kasan: tag-based mode fixes Andrey Konovalov
  2019-01-03 18:45 ` [PATCH v3 1/3] kasan, arm64: use ARCH_SLAB_MINALIGN instead of manual aligning Andrey Konovalov
@ 2019-01-03 18:45 ` Andrey Konovalov
  2019-01-03 18:45 ` [PATCH v3 3/3] kasan: fix krealloc handling for tag-based mode Andrey Konovalov
  2 siblings, 0 replies; 6+ messages in thread
From: Andrey Konovalov @ 2019-01-03 18:45 UTC (permalink / raw)
  To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton,
	Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin,
	Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence,
	Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov,
	Greg Kroah-Hartman, Kate Stewart, Mike Rapoport,
	Vincenzo Frascino, kasan-dev, linux-doc, linux-kernel,
	linux-arm-kernel, linux-sparse, linux-mm, linux-kbuild
  Cc: Vishwath Mohan, Chintan Pandya, Jacob Bramley, Jann Horn,
	Ruben Ayrapetyan, Andrey Konovalov, Lee Smith, Kostya Serebryany,
	Mark Brand, Ramana Radhakrishnan, Evgeniy Stepanov

With CONFIG_HARDENED_USERCOPY enabled, __check_heap_object() compares and
then subtracts a potentially tagged pointer against the untagged address of
the page that the pointer belongs to, which leads to unexpected behavior.

Untag the pointer in __check_heap_object() before doing any of these
operations.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/slub.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/mm/slub.c b/mm/slub.c
index 36c0befeebd8..1e3d0ec4e200 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3846,6 +3846,8 @@ void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
 	unsigned int offset;
 	size_t object_size;
 
+	ptr = kasan_reset_tag(ptr);
+
 	/* Find object and usable object size. */
 	s = page->slab_cache;
 
-- 
2.20.1.415.g653613c723-goog




* [PATCH v3 3/3] kasan: fix krealloc handling for tag-based mode
  2019-01-03 18:45 [PATCH v3 0/3] kasan: tag-based mode fixes Andrey Konovalov
  2019-01-03 18:45 ` [PATCH v3 1/3] kasan, arm64: use ARCH_SLAB_MINALIGN instead of manual aligning Andrey Konovalov
  2019-01-03 18:45 ` [PATCH v3 2/3] kasan: make tag based mode work with CONFIG_HARDENED_USERCOPY Andrey Konovalov
@ 2019-01-03 18:45 ` Andrey Konovalov
  2 siblings, 0 replies; 6+ messages in thread
From: Andrey Konovalov @ 2019-01-03 18:45 UTC (permalink / raw)
  To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton,
	Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin,
	Ard Biesheuvel, Eric W . Biederman, Ingo Molnar, Paul Lawrence,
	Geert Uytterhoeven, Arnd Bergmann, Kirill A . Shutemov,
	Greg Kroah-Hartman, Kate Stewart, Mike Rapoport,
	Vincenzo Frascino, kasan-dev, linux-doc, linux-kernel,
	linux-arm-kernel, linux-sparse, linux-mm, linux-kbuild
  Cc: Vishwath Mohan, Chintan Pandya, Jacob Bramley, Jann Horn,
	Ruben Ayrapetyan, Andrey Konovalov, Lee Smith, Kostya Serebryany,
	Mark Brand, Ramana Radhakrishnan, Evgeniy Stepanov

Right now tag-based KASAN can retag the memory that is reallocated via
krealloc and return a differently tagged pointer even if the same slab
object gets used and no reallocation technically happens.

There are a few issues with this approach. One is that krealloc callers
can't rely on comparing the return value with the passed argument to
check whether reallocation happened. Another is that if a caller knows
that no reallocation happened, it can access object memory through the
old pointer, which leads to false positives. See nf_ct_ext_add() for an
example.

Fix this by keeping the same tag if the memory doesn't actually get
reallocated during krealloc.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/common.c | 63 ++++++++++++++++++++++++++++++++---------------
 1 file changed, 43 insertions(+), 20 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 44390392d4c9..73c9cbfdedf4 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -347,28 +347,43 @@ void kasan_poison_object_data(struct kmem_cache *cache, void *object)
 }
 
 /*
- * Since it's desirable to only call object contructors once during slab
- * allocation, we preassign tags to all such objects. Also preassign tags for
- * SLAB_TYPESAFE_BY_RCU slabs to avoid use-after-free reports.
- * For SLAB allocator we can't preassign tags randomly since the freelist is
- * stored as an array of indexes instead of a linked list. Assign tags based
- * on objects indexes, so that objects that are next to each other get
- * different tags.
- * After a tag is assigned, the object always gets allocated with the same tag.
- * The reason is that we can't change tags for objects with constructors on
- * reallocation (even for non-SLAB_TYPESAFE_BY_RCU), because the constructor
- * code can save the pointer to the object somewhere (e.g. in the object
- * itself). Then if we retag it, the old saved pointer will become invalid.
+ * This function assigns a tag to an object considering the following:
+ * 1. A cache might have a constructor, which might save a pointer to a slab
+ *    object somewhere (e.g. in the object itself). We preassign a tag for
+ *    each object in caches with constructors during slab creation and reuse
+ *    the same tag each time a particular object is allocated.
+ * 2. A cache might be SLAB_TYPESAFE_BY_RCU, which means objects can be
+ *    accessed after being freed. We preassign tags for objects in these
+ *    caches as well.
+ * 3. For SLAB allocator we can't preassign tags randomly since the freelist
+ *    is stored as an array of indexes instead of a linked list. Assign tags
+ *    based on objects indexes, so that objects that are next to each other
+ *    get different tags.
  */
-static u8 assign_tag(struct kmem_cache *cache, const void *object, bool new)
+static u8 assign_tag(struct kmem_cache *cache, const void *object,
+			bool init, bool krealloc)
 {
+	/* Reuse the same tag for krealloc'ed objects. */
+	if (krealloc)
+		return get_tag(object);
+
+	/*
+	 * If the cache neither has a constructor nor has SLAB_TYPESAFE_BY_RCU
+	 * set, assign a tag when the object is being allocated (init == false).
+	 */
 	if (!cache->ctor && !(cache->flags & SLAB_TYPESAFE_BY_RCU))
-		return new ? KASAN_TAG_KERNEL : random_tag();
+		return init ? KASAN_TAG_KERNEL : random_tag();
 
+	/* For caches that either have a constructor or SLAB_TYPESAFE_BY_RCU: */
 #ifdef CONFIG_SLAB
+	/* For SLAB assign tags based on the object index in the freelist. */
 	return (u8)obj_to_index(cache, virt_to_page(object), (void *)object);
 #else
-	return new ? random_tag() : get_tag(object);
+	/*
+	 * For SLUB assign a random tag during slab creation, otherwise reuse
+	 * the already assigned tag.
+	 */
+	return init ? random_tag() : get_tag(object);
 #endif
 }
 
@@ -384,7 +399,8 @@ void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
 	__memset(alloc_info, 0, sizeof(*alloc_info));
 
 	if (IS_ENABLED(CONFIG_KASAN_SW_TAGS))
-		object = set_tag(object, assign_tag(cache, object, true));
+		object = set_tag(object,
+				assign_tag(cache, object, true, false));
 
 	return (void *)object;
 }
@@ -450,8 +466,8 @@ bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
 	return __kasan_slab_free(cache, object, ip, true);
 }
 
-void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object,
-					size_t size, gfp_t flags)
+static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
+				size_t size, gfp_t flags, bool krealloc)
 {
 	unsigned long redzone_start;
 	unsigned long redzone_end;
@@ -469,7 +485,7 @@ void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object,
 				KASAN_SHADOW_SCALE_SIZE);
 
 	if (IS_ENABLED(CONFIG_KASAN_SW_TAGS))
-		tag = assign_tag(cache, object, false);
+		tag = assign_tag(cache, object, false, krealloc);
 
 	/* Tag is ignored in set_tag without CONFIG_KASAN_SW_TAGS */
 	kasan_unpoison_shadow(set_tag(object, tag), size);
@@ -481,6 +497,12 @@ void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object,
 
 	return set_tag(object, tag);
 }
+
+void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object,
+				size_t size, gfp_t flags)
+{
+	return __kasan_kmalloc(cache, object, size, flags, false);
+}
 EXPORT_SYMBOL(kasan_kmalloc);
 
 void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
@@ -520,7 +542,8 @@ void * __must_check kasan_krealloc(const void *object, size_t size, gfp_t flags)
 	if (unlikely(!PageSlab(page)))
 		return kasan_kmalloc_large(object, size, flags);
 	else
-		return kasan_kmalloc(page->slab_cache, object, size, flags);
+		return __kasan_kmalloc(page->slab_cache, object, size,
+						flags, true);
 }
 
 void kasan_poison_kfree(void *ptr, unsigned long ip)
-- 
2.20.1.415.g653613c723-goog




* Re: [PATCH v3 1/3] kasan, arm64: use ARCH_SLAB_MINALIGN instead of manual aligning
  2019-01-03 18:45 ` [PATCH v3 1/3] kasan, arm64: use ARCH_SLAB_MINALIGN instead of manual aligning Andrey Konovalov
@ 2019-01-09 10:10   ` Vincenzo Frascino
  2019-01-11 13:48     ` Andrey Konovalov
  0 siblings, 1 reply; 6+ messages in thread
From: Vincenzo Frascino @ 2019-01-09 10:10 UTC (permalink / raw)
  To: Andrey Konovalov, Andrey Ryabinin, Alexander Potapenko,
	Dmitry Vyukov, Catalin Marinas, Will Deacon, Christoph Lameter,
	Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier,
	Dave Martin, Ard Biesheuvel, Eric W . Biederman, Ingo Molnar,
	Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann,
	Kirill A . Shutemov, Greg Kroah-Hartman, Kate Stewart,
	Mike Rapoport, kasan-dev, linux-doc, linux-kernel,
	linux-arm-kernel, linux-sparse, linux-mm, linux-kbuild
  Cc: Vishwath Mohan, Chintan Pandya, Jacob Bramley, Jann Horn,
	Ruben Ayrapetyan, Lee Smith, Kostya Serebryany, Mark Brand,
	Ramana Radhakrishnan, Evgeniy Stepanov

On 03/01/2019 18:45, Andrey Konovalov wrote:
> Instead of changing cache->align to be aligned to KASAN_SHADOW_SCALE_SIZE
> in kasan_cache_create() we can reuse the ARCH_SLAB_MINALIGN macro.
> 
> Suggested-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
> ---
>  arch/arm64/include/asm/cache.h | 6 ++++++
>  mm/kasan/common.c              | 2 --
>  2 files changed, 6 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/cache.h b/arch/arm64/include/asm/cache.h
> index 13dd42c3ad4e..eb43e09c1980 100644
> --- a/arch/arm64/include/asm/cache.h
> +++ b/arch/arm64/include/asm/cache.h
> @@ -58,6 +58,12 @@
>   */
>  #define ARCH_DMA_MINALIGN	(128)
>  
> +#ifdef CONFIG_KASAN_SW_TAGS
> +#define ARCH_SLAB_MINALIGN	(1ULL << KASAN_SHADOW_SCALE_SHIFT)
> +#else
> +#define ARCH_SLAB_MINALIGN	__alignof__(unsigned long long)
> +#endif
> +

Could you please remove the "#else" case here? It is redundant (linux/slab.h
already defines it under #ifndef) and could be misleading in the future.

>  #ifndef __ASSEMBLY__
>  
>  #include <linux/bitops.h>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 03d5d1374ca7..44390392d4c9 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -298,8 +298,6 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
>  		return;
>  	}
>  
> -	cache->align = round_up(cache->align, KASAN_SHADOW_SCALE_SIZE);
> -
>  	*flags |= SLAB_KASAN;
>  }
>  
> 

-- 
Regards,
Vincenzo



* Re: [PATCH v3 1/3] kasan, arm64: use ARCH_SLAB_MINALIGN instead of manual aligning
  2019-01-09 10:10   ` Vincenzo Frascino
@ 2019-01-11 13:48     ` Andrey Konovalov
  0 siblings, 0 replies; 6+ messages in thread
From: Andrey Konovalov @ 2019-01-11 13:48 UTC (permalink / raw)
  To: Vincenzo Frascino
  Cc: Mark Rutland, Kate Stewart, open list:DOCUMENTATION,
	Catalin Marinas, Will Deacon, Paul Lawrence,
	Linux Memory Management List, Alexander Potapenko,
	Chintan Pandya, Christoph Lameter, Ingo Molnar, Jacob Bramley,
	Ruben Ayrapetyan, Mark Brand, kasan-dev, linux-sparse,
	Geert Uytterhoeven, Linux ARM, Andrey Ryabinin, Dave Martin,
	Evgeniy Stepanov, Vishwath Mohan, Arnd Bergmann,
	Linux Kbuild mailing list, Marc Zyngier, Ramana Radhakrishnan,
	Mike Rapoport, Dmitry Vyukov, Kostya Serebryany, Jann Horn,
	Ard Biesheuvel, Greg Kroah-Hartman, Nick Desaulniers, LKML,
	Eric W . Biederman, Lee Smith, Andrew Morton,
	Kirill A . Shutemov

On Wed, Jan 9, 2019 at 11:10 AM Vincenzo Frascino
<vincenzo.frascino@arm.com> wrote:
>
> On 03/01/2019 18:45, Andrey Konovalov wrote:
> > Instead of changing cache->align to be aligned to KASAN_SHADOW_SCALE_SIZE
> > in kasan_cache_create() we can reuse the ARCH_SLAB_MINALIGN macro.
> >
> > Suggested-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
> > Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
> > ---
> >  arch/arm64/include/asm/cache.h | 6 ++++++
> >  mm/kasan/common.c              | 2 --
> >  2 files changed, 6 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/cache.h b/arch/arm64/include/asm/cache.h
> > index 13dd42c3ad4e..eb43e09c1980 100644
> > --- a/arch/arm64/include/asm/cache.h
> > +++ b/arch/arm64/include/asm/cache.h
> > @@ -58,6 +58,12 @@
> >   */
> >  #define ARCH_DMA_MINALIGN    (128)
> >
> > +#ifdef CONFIG_KASAN_SW_TAGS
> > +#define ARCH_SLAB_MINALIGN   (1ULL << KASAN_SHADOW_SCALE_SHIFT)
> > +#else
> > +#define ARCH_SLAB_MINALIGN   __alignof__(unsigned long long)
> > +#endif
> > +
>
> Could you please remove the "#else" case here, because it is redundant (it is
> defined in linux/slab.h as ifndef) and could be misleading in future?

Sure, sent a patch. Thanks!

>
> >  #ifndef __ASSEMBLY__
> >
> >  #include <linux/bitops.h>
> > diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> > index 03d5d1374ca7..44390392d4c9 100644
> > --- a/mm/kasan/common.c
> > +++ b/mm/kasan/common.c
> > @@ -298,8 +298,6 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> >               return;
> >       }
> >
> > -     cache->align = round_up(cache->align, KASAN_SHADOW_SCALE_SIZE);
> > -
> >       *flags |= SLAB_KASAN;
> >  }
> >
> >
>
> --
> Regards,
> Vincenzo
>



end of thread, back to index

Thread overview: 6+ messages
2019-01-03 18:45 [PATCH v3 0/3] kasan: tag-based mode fixes Andrey Konovalov
2019-01-03 18:45 ` [PATCH v3 1/3] kasan, arm64: use ARCH_SLAB_MINALIGN instead of manual aligning Andrey Konovalov
2019-01-09 10:10   ` Vincenzo Frascino
2019-01-11 13:48     ` Andrey Konovalov
2019-01-03 18:45 ` [PATCH v3 2/3] kasan: make tag based mode work with CONFIG_HARDENED_USERCOPY Andrey Konovalov
2019-01-03 18:45 ` [PATCH v3 3/3] kasan: fix krealloc handling for tag-based mode Andrey Konovalov
