linux-kernel.vger.kernel.org archive mirror
* Re: [PATCH] mm, kasan: switch SLUB to stackdepot, enable memory quarantine for SLUB
  2016-06-08 18:40 [PATCH] mm, kasan: switch SLUB to stackdepot, enable memory quarantine for SLUB Alexander Potapenko
@ 2016-06-08 18:08 ` kbuild test robot
  2016-06-08 19:02 ` kbuild test robot
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 8+ messages in thread
From: kbuild test robot @ 2016-06-08 18:08 UTC (permalink / raw)
  To: Alexander Potapenko
  Cc: kbuild-all, adech.fo, cl, dvyukov, akpm, rostedt, iamjoonsoo.kim,
	js1304, kcc, aryabinin, kuthonuzo.luruo, kasan-dev, linux-mm,
	linux-kernel

[-- Attachment #1: Type: text/plain, Size: 2277 bytes --]

Hi,

[auto build test WARNING on v4.7-rc2]
[cannot apply to next-20160608]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Alexander-Potapenko/mm-kasan-switch-SLUB-to-stackdepot-enable-memory-quarantine-for-SLUB/20160609-024216
config: m68k-m5475evb_defconfig (attached as .config)
compiler: m68k-linux-gcc (GCC) 4.9.0
reproduce:
        wget https://git.kernel.org/cgit/linux/kernel/git/wfg/lkp-tests.git/plain/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        make.cross ARCH=m68k 

All warnings (new ones prefixed by >>):

   mm/slub.c: In function 'calculate_sizes':
>> mm/slub.c:3357:2: warning: passing argument 2 of 'kasan_cache_create' from incompatible pointer type
     kasan_cache_create(s, &size, &s->flags);
     ^
   In file included from include/linux/slab.h:127:0,
                    from mm/slub.c:18:
   include/linux/kasan.h:91:20: note: expected 'size_t *' but argument is of type 'long unsigned int *'
    static inline void kasan_cache_create(struct kmem_cache *cache,
                       ^

vim +/kasan_cache_create +3357 mm/slub.c

  3341		if (flags & SLAB_RED_ZONE) {
  3342			/*
  3343			 * Add some empty padding so that we can catch
  3344			 * overwrites from earlier objects rather than let
  3345			 * tracking information or the free pointer be
  3346			 * corrupted if a user writes before the start
  3347			 * of the object.
  3348			 */
  3349			size += sizeof(void *);
  3350	
  3351			s->red_left_pad = sizeof(void *);
  3352			s->red_left_pad = ALIGN(s->red_left_pad, s->align);
  3353			size += s->red_left_pad;
  3354		}
  3355	#endif
  3356	
> 3357		kasan_cache_create(s, &size, &s->flags);
  3358	
  3359		/*
  3360		 * SLUB stores one object immediately after another beginning from
  3361		 * offset 0. In order to align the objects we have to simply size
  3362		 * each object to conform to the alignment.
  3363		 */
  3364		size = ALIGN(size, s->align);
  3365		s->size = size;

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/octet-stream, Size: 6141 bytes --]

^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH] mm, kasan: switch SLUB to stackdepot, enable memory quarantine for SLUB
@ 2016-06-08 18:40 Alexander Potapenko
  2016-06-08 18:08 ` kbuild test robot
                   ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: Alexander Potapenko @ 2016-06-08 18:40 UTC (permalink / raw)
  To: adech.fo, cl, dvyukov, akpm, rostedt, iamjoonsoo.kim, js1304,
	kcc, aryabinin, kuthonuzo.luruo
  Cc: kasan-dev, linux-mm, linux-kernel

For KASAN builds:
 - switch SLUB allocator to using stackdepot instead of storing the
   allocation/deallocation stacks in the objects;
 - define SLAB_RED_ZONE, SLAB_POISON, SLAB_STORE_USER to zero,
   effectively disabling these debug features, as they're redundant in
   the presence of KASAN;
 - refactor the slab freelist hook, put freed memory into the quarantine.

Signed-off-by: Alexander Potapenko <glider@google.com>
---
 include/linux/slab.h     |  9 ++++++
 include/linux/slub_def.h |  4 +++
 lib/Kconfig.kasan        |  4 +--
 mm/kasan/Makefile        |  3 +-
 mm/kasan/kasan.c         | 78 +++++++++++++++++++++++++++++++++---------------
 mm/kasan/kasan.h         |  2 +-
 mm/kasan/quarantine.c    |  5 ++++
 mm/kasan/report.c        |  8 ++---
 mm/slab.h                | 10 +++++++
 mm/slub.c                | 56 +++++++++++++++++++++++++---------
 10 files changed, 131 insertions(+), 48 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index aeb3e6d..fe91eef 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -21,11 +21,20 @@
  * The ones marked DEBUG are only valid if CONFIG_DEBUG_SLAB is set.
  */
 #define SLAB_CONSISTENCY_CHECKS	0x00000100UL	/* DEBUG: Perform (expensive) checks on alloc/free */
+#ifndef CONFIG_KASAN
 #define SLAB_RED_ZONE		0x00000400UL	/* DEBUG: Red zone objs in a cache */
 #define SLAB_POISON		0x00000800UL	/* DEBUG: Poison objects */
+#else
+#define SLAB_RED_ZONE		0x00000000UL	/* KASAN has its own redzones */
+#define SLAB_POISON		0x00000000UL	/* No extra poisoning */
+#endif
 #define SLAB_HWCACHE_ALIGN	0x00002000UL	/* Align objs on cache lines */
 #define SLAB_CACHE_DMA		0x00004000UL	/* Use GFP_DMA memory */
+#ifndef CONFIG_KASAN
 #define SLAB_STORE_USER		0x00010000UL	/* DEBUG: Store the last owner for bug hunting */
+#else
+#define SLAB_STORE_USER		0x00000000UL	/* KASAN uses stack depot */
+#endif
 #define SLAB_PANIC		0x00040000UL	/* Panic if kmem_cache_create() fails */
 /*
  * SLAB_DESTROY_BY_RCU - **WARNING** READ THIS!
diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index d1faa01..5585598 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -99,6 +99,10 @@ struct kmem_cache {
 	 */
 	int remote_node_defrag_ratio;
 #endif
+#ifdef CONFIG_KASAN
+	struct kasan_cache kasan_info;
+#endif
+
 	struct kmem_cache_node *node[MAX_NUMNODES];
 };
 
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 67d8c68..bd38aab 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -5,9 +5,9 @@ if HAVE_ARCH_KASAN
 
 config KASAN
 	bool "KASan: runtime memory debugger"
-	depends on SLUB_DEBUG || (SLAB && !DEBUG_SLAB)
+	depends on SLUB || (SLAB && !DEBUG_SLAB)
 	select CONSTRUCTORS
-	select STACKDEPOT if SLAB
+	select STACKDEPOT
 	help
 	  Enables kernel address sanitizer - runtime memory debugger,
 	  designed to find out-of-bounds accesses and use-after-free bugs.
diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
index 1548749..2976a9e 100644
--- a/mm/kasan/Makefile
+++ b/mm/kasan/Makefile
@@ -7,5 +7,4 @@ CFLAGS_REMOVE_kasan.o = -pg
 # see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533
 CFLAGS_kasan.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector)
 
-obj-y := kasan.o report.o kasan_init.o
-obj-$(CONFIG_SLAB) += quarantine.o
+obj-y := kasan.o report.o kasan_init.o quarantine.o
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 18b6a2b..69d7718 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -351,7 +351,6 @@ void kasan_free_pages(struct page *page, unsigned int order)
 				KASAN_FREE_PAGE);
 }
 
-#ifdef CONFIG_SLAB
 /*
  * Adaptive redzone policy taken from the userspace AddressSanitizer runtime.
  * For larger allocations larger redzones are used.
@@ -372,17 +371,21 @@ static size_t optimal_redzone(size_t object_size)
 void kasan_cache_create(struct kmem_cache *cache, size_t *size,
 			unsigned long *flags)
 {
-	int redzone_adjust;
-	/* Make sure the adjusted size is still less than
-	 * KMALLOC_MAX_CACHE_SIZE.
-	 * TODO: this check is only useful for SLAB, but not SLUB. We'll need
-	 * to skip it for SLUB when it starts using kasan_cache_create().
+	int redzone_adjust, orig_size = *size;
+
+#ifdef CONFIG_SLAB
+	/*
+	 * Make sure the adjusted size is still less than
+	 * KMALLOC_MAX_CACHE_SIZE, i.e. we don't use the page allocator.
 	 */
+
 	if (*size > KMALLOC_MAX_CACHE_SIZE -
 	    sizeof(struct kasan_alloc_meta) -
 	    sizeof(struct kasan_free_meta))
 		return;
+#endif
 	*flags |= SLAB_KASAN;
+
 	/* Add alloc meta. */
 	cache->kasan_info.alloc_meta_offset = *size;
 	*size += sizeof(struct kasan_alloc_meta);
@@ -392,17 +395,37 @@ void kasan_cache_create(struct kmem_cache *cache, size_t *size,
 	    cache->object_size < sizeof(struct kasan_free_meta)) {
 		cache->kasan_info.free_meta_offset = *size;
 		*size += sizeof(struct kasan_free_meta);
+	} else {
+		cache->kasan_info.free_meta_offset = 0;
 	}
 	redzone_adjust = optimal_redzone(cache->object_size) -
 		(*size - cache->object_size);
+
 	if (redzone_adjust > 0)
 		*size += redzone_adjust;
+
+#ifdef CONFIG_SLAB
 	*size = min(KMALLOC_MAX_CACHE_SIZE,
 		    max(*size,
 			cache->object_size +
 			optimal_redzone(cache->object_size)));
-}
+	/*
+	 * If the metadata doesn't fit, disable KASAN at all.
+	 */
+	if (*size <= cache->kasan_info.alloc_meta_offset ||
+			*size <= cache->kasan_info.free_meta_offset) {
+		*flags &= ~SLAB_KASAN;
+		*size = orig_size;
+		cache->kasan_info.alloc_meta_offset = -1;
+		cache->kasan_info.free_meta_offset = -1;
+	}
+#else
+	*size = max(*size,
+			cache->object_size +
+			optimal_redzone(cache->object_size));
+
 #endif
+}
 
 void kasan_cache_shrink(struct kmem_cache *cache)
 {
@@ -431,16 +454,14 @@ void kasan_poison_object_data(struct kmem_cache *cache, void *object)
 	kasan_poison_shadow(object,
 			round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE),
 			KASAN_KMALLOC_REDZONE);
-#ifdef CONFIG_SLAB
 	if (cache->flags & SLAB_KASAN) {
 		struct kasan_alloc_meta *alloc_info =
 			get_alloc_info(cache, object);
-		alloc_info->state = KASAN_STATE_INIT;
+		if (alloc_info)
+			alloc_info->state = KASAN_STATE_INIT;
 	}
-#endif
 }
 
-#ifdef CONFIG_SLAB
 static inline int in_irqentry_text(unsigned long ptr)
 {
 	return (ptr >= (unsigned long)&__irqentry_text_start &&
@@ -492,6 +513,8 @@ struct kasan_alloc_meta *get_alloc_info(struct kmem_cache *cache,
 					const void *object)
 {
 	BUILD_BUG_ON(sizeof(struct kasan_alloc_meta) > 32);
+	if (cache->kasan_info.alloc_meta_offset == -1)
+		return NULL;
 	return (void *)object + cache->kasan_info.alloc_meta_offset;
 }
 
@@ -499,9 +522,10 @@ struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
 				      const void *object)
 {
 	BUILD_BUG_ON(sizeof(struct kasan_free_meta) > 32);
+	if (cache->kasan_info.free_meta_offset == -1)
+		return NULL;
 	return (void *)object + cache->kasan_info.free_meta_offset;
 }
-#endif
 
 void kasan_slab_alloc(struct kmem_cache *cache, void *object, gfp_t flags)
 {
@@ -522,7 +546,6 @@ void kasan_poison_slab_free(struct kmem_cache *cache, void *object)
 
 bool kasan_slab_free(struct kmem_cache *cache, void *object)
 {
-#ifdef CONFIG_SLAB
 	/* RCU slabs could be legally used after free within the RCU period */
 	if (unlikely(cache->flags & SLAB_DESTROY_BY_RCU))
 		return false;
@@ -532,7 +555,10 @@ bool kasan_slab_free(struct kmem_cache *cache, void *object)
 			get_alloc_info(cache, object);
 		struct kasan_free_meta *free_info =
 			get_free_info(cache, object);
-
+		WARN_ON(!alloc_info);
+		WARN_ON(!free_info);
+		if (!alloc_info || !free_info)
+			return;
 		switch (alloc_info->state) {
 		case KASAN_STATE_ALLOC:
 			alloc_info->state = KASAN_STATE_QUARANTINE;
@@ -550,10 +576,6 @@ bool kasan_slab_free(struct kmem_cache *cache, void *object)
 		}
 	}
 	return false;
-#else
-	kasan_poison_slab_free(cache, object);
-	return false;
-#endif
 }
 
 void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
@@ -568,24 +590,29 @@ void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
 	if (unlikely(object == NULL))
 		return;
 
+	if (!(cache->flags & SLAB_KASAN))
+		return;
+
 	redzone_start = round_up((unsigned long)(object + size),
 				KASAN_SHADOW_SCALE_SIZE);
 	redzone_end = round_up((unsigned long)object + cache->object_size,
 				KASAN_SHADOW_SCALE_SIZE);
 
 	kasan_unpoison_shadow(object, size);
+	WARN_ON(redzone_start > redzone_end);
+	if (redzone_start > redzone_end)
+		return;
 	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
 		KASAN_KMALLOC_REDZONE);
-#ifdef CONFIG_SLAB
 	if (cache->flags & SLAB_KASAN) {
 		struct kasan_alloc_meta *alloc_info =
 			get_alloc_info(cache, object);
-
-		alloc_info->state = KASAN_STATE_ALLOC;
-		alloc_info->alloc_size = size;
-		set_track(&alloc_info->track, flags);
+		if (alloc_info) {
+			alloc_info->state = KASAN_STATE_ALLOC;
+			alloc_info->alloc_size = size;
+			set_track(&alloc_info->track, flags);
+		}
 	}
-#endif
 }
 EXPORT_SYMBOL(kasan_kmalloc);
 
@@ -607,6 +634,9 @@ void kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
 	redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page));
 
 	kasan_unpoison_shadow(ptr, size);
+	WARN_ON(redzone_start > redzone_end);
+	if (redzone_start > redzone_end)
+		return;
 	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
 		KASAN_PAGE_REDZONE);
 }
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index fb87923..8c75953 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -110,7 +110,7 @@ static inline bool kasan_report_enabled(void)
 void kasan_report(unsigned long addr, size_t size,
 		bool is_write, unsigned long ip);
 
-#ifdef CONFIG_SLAB
+#if defined(CONFIG_SLAB) || defined(CONFIG_SLUB)
 void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache);
 void quarantine_reduce(void);
 void quarantine_remove_cache(struct kmem_cache *cache);
diff --git a/mm/kasan/quarantine.c b/mm/kasan/quarantine.c
index 4973505..89259c2 100644
--- a/mm/kasan/quarantine.c
+++ b/mm/kasan/quarantine.c
@@ -149,7 +149,12 @@ static void qlink_free(struct qlist_node *qlink, struct kmem_cache *cache)
 
 	local_irq_save(flags);
 	alloc_info->state = KASAN_STATE_FREE;
+#ifdef CONFIG_SLAB
 	___cache_free(cache, object, _THIS_IP_);
+#elif defined(CONFIG_SLUB)
+	do_slab_free(cache, virt_to_head_page(object), object, NULL, 1,
+		_RET_IP_);
+#endif
 	local_irq_restore(flags);
 }
 
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index b3c122d..861b977 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -116,7 +116,6 @@ static inline bool init_task_stack_addr(const void *addr)
 			sizeof(init_thread_union.stack));
 }
 
-#ifdef CONFIG_SLAB
 static void print_track(struct kasan_track *track)
 {
 	pr_err("PID = %u\n", track->pid);
@@ -130,8 +129,8 @@ static void print_track(struct kasan_track *track)
 	}
 }
 
-static void object_err(struct kmem_cache *cache, struct page *page,
-			void *object, char *unused_reason)
+static void kasan_object_err(struct kmem_cache *cache, struct page *page,
+				void *object, char *unused_reason)
 {
 	struct kasan_alloc_meta *alloc_info = get_alloc_info(cache, object);
 	struct kasan_free_meta *free_info;
@@ -162,7 +161,6 @@ static void object_err(struct kmem_cache *cache, struct page *page,
 		break;
 	}
 }
-#endif
 
 static void print_address_description(struct kasan_access_info *info)
 {
@@ -177,7 +175,7 @@ static void print_address_description(struct kasan_access_info *info)
 			struct kmem_cache *cache = page->slab_cache;
 			object = nearest_obj(cache, page,
 						(void *)info->access_addr);
-			object_err(cache, page, object,
+			kasan_object_err(cache, page, object,
 					"kasan: bad access detected");
 			return;
 		}
diff --git a/mm/slab.h b/mm/slab.h
index dedb1a9..fde1fea 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -366,6 +366,10 @@ static inline size_t slab_ksize(const struct kmem_cache *s)
 	if (s->flags & (SLAB_RED_ZONE | SLAB_POISON))
 		return s->object_size;
 # endif
+# ifdef CONFIG_KASAN
+	if (s->flags & SLAB_KASAN)
+		return s->object_size;
+# endif
 	/*
 	 * If we have the need to store the freelist pointer
 	 * back there or track user information then we can
@@ -462,6 +466,12 @@ void *slab_next(struct seq_file *m, void *p, loff_t *pos);
 void slab_stop(struct seq_file *m, void *p);
 int memcg_slab_show(struct seq_file *m, void *p);
 
+#if defined(CONFIG_SLAB)
 void ___cache_free(struct kmem_cache *cache, void *x, unsigned long addr);
+#elif defined(CONFIG_SLUB)
+void do_slab_free(struct kmem_cache *s,
+		struct page *page, void *head, void *tail,
+		int cnt, unsigned long addr);
+#endif
 
 #endif /* MM_SLAB_H */
diff --git a/mm/slub.c b/mm/slub.c
index 825ff45..74966c2 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -191,7 +191,11 @@ static inline bool kmem_cache_has_cpu_partial(struct kmem_cache *s)
 #define MAX_OBJS_PER_PAGE	32767 /* since page.objects is u15 */
 
 /* Internal SLUB flags */
+#ifndef CONFIG_KASAN
 #define __OBJECT_POISON		0x80000000UL /* Poison object */
+#else
+#define __OBJECT_POISON		0x00000000UL /* Disable object poisoning */
+#endif
 #define __CMPXCHG_DOUBLE	0x40000000UL /* Use cmpxchg_double */
 
 #ifdef CONFIG_SMP
@@ -454,10 +458,8 @@ static inline void *restore_red_left(struct kmem_cache *s, void *p)
  */
 #if defined(CONFIG_SLUB_DEBUG_ON)
 static int slub_debug = DEBUG_DEFAULT_FLAGS;
-#elif defined(CONFIG_KASAN)
-static int slub_debug = SLAB_STORE_USER;
 #else
-static int slub_debug;
+static int slub_debug = SLAB_STORE_USER;
 #endif
 
 static char *slub_debug_slabs;
@@ -1322,7 +1324,7 @@ static inline void kfree_hook(const void *x)
 	kasan_kfree_large(x);
 }
 
-static inline void slab_free_hook(struct kmem_cache *s, void *x)
+static inline bool slab_free_hook(struct kmem_cache *s, void *x)
 {
 	kmemleak_free_recursive(x, s->flags);
 
@@ -1344,11 +1346,11 @@ static inline void slab_free_hook(struct kmem_cache *s, void *x)
 	if (!(s->flags & SLAB_DEBUG_OBJECTS))
 		debug_check_no_obj_freed(x, s->object_size);
 
-	kasan_slab_free(s, x);
+	return kasan_slab_free(s, x);
 }
 
 static inline void slab_free_freelist_hook(struct kmem_cache *s,
-					   void *head, void *tail)
+					   void **head, void **tail, int *cnt)
 {
 /*
  * Compiler cannot detect this function can be removed if slab_free_hook()
@@ -1360,13 +1362,27 @@ static inline void slab_free_freelist_hook(struct kmem_cache *s,
 	defined(CONFIG_DEBUG_OBJECTS_FREE) ||	\
 	defined(CONFIG_KASAN)
 
-	void *object = head;
-	void *tail_obj = tail ? : head;
+	void *object = *head, *prev = NULL, *next = NULL;
+	void *tail_obj = *tail ? : *head;
+	bool skip = false;
 
 	do {
-		slab_free_hook(s, object);
-	} while ((object != tail_obj) &&
-		 (object = get_freepointer(s, object)));
+		skip = slab_free_hook(s, object);
+		next = (object != tail_obj) ?
+			get_freepointer(s, object) : NULL;
+		if (skip) {
+			if (!prev)
+				*head = next;
+			else
+				set_freepointer(s, prev, next);
+			if (object == tail_obj)
+				*tail = prev;
+			(*cnt)--;
+		} else {
+			prev = object;
+		}
+		object = next;
+	} while (next);
 #endif
 }
 
@@ -2772,12 +2788,22 @@ static __always_inline void slab_free(struct kmem_cache *s, struct page *page,
 				      void *head, void *tail, int cnt,
 				      unsigned long addr)
 {
+	void *free_head = head, *free_tail = tail;
+
+	slab_free_freelist_hook(s, &free_head, &free_tail, &cnt);
+	/* slab_free_freelist_hook() could have emptied the freelist. */
+	if (cnt == 0)
+		return;
+	do_slab_free(s, page, free_head, free_tail, cnt, addr);
+}
+
+__always_inline void do_slab_free(struct kmem_cache *s,
+				struct page *page, void *head, void *tail,
+				int cnt, unsigned long addr)
+{
 	void *tail_obj = tail ? : head;
 	struct kmem_cache_cpu *c;
 	unsigned long tid;
-
-	slab_free_freelist_hook(s, head, tail);
-
 redo:
 	/*
 	 * Determine the currently cpus per cpu slab.
@@ -3328,6 +3354,8 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 	}
 #endif
 
+	kasan_cache_create(s, &size, &s->flags);
+
 	/*
 	 * SLUB stores one object immediately after another beginning from
 	 * offset 0. In order to align the objects we have to simply size
-- 
2.8.0.rc3.226.g39d4020

^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [PATCH] mm, kasan: switch SLUB to stackdepot, enable memory quarantine for SLUB
  2016-06-08 18:40 [PATCH] mm, kasan: switch SLUB to stackdepot, enable memory quarantine for SLUB Alexander Potapenko
  2016-06-08 18:08 ` kbuild test robot
@ 2016-06-08 19:02 ` kbuild test robot
  2016-06-08 19:23 ` kbuild test robot
  2016-06-09 16:45 ` Andrey Ryabinin
  3 siblings, 0 replies; 8+ messages in thread
From: kbuild test robot @ 2016-06-08 19:02 UTC (permalink / raw)
  To: Alexander Potapenko
  Cc: kbuild-all, adech.fo, cl, dvyukov, akpm, rostedt, iamjoonsoo.kim,
	js1304, kcc, aryabinin, kuthonuzo.luruo, kasan-dev, linux-mm,
	linux-kernel

[-- Attachment #1: Type: text/plain, Size: 7835 bytes --]

Hi,

[auto build test WARNING on v4.7-rc2]
[cannot apply to next-20160608]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Alexander-Potapenko/mm-kasan-switch-SLUB-to-stackdepot-enable-memory-quarantine-for-SLUB/20160609-024216
config: x86_64-allmodconfig (attached as .config)
compiler: gcc-6 (Debian 6.1.1-1) 6.1.1 20160430
reproduce:
        # save the attached .config to linux build tree
        make ARCH=x86_64 

All warnings (new ones prefixed by >>):

   mm/kasan/kasan.c: In function 'kasan_cache_create':
>> mm/kasan/kasan.c:374:22: warning: unused variable 'orig_size' [-Wunused-variable]
     int redzone_adjust, orig_size = *size;
                         ^~~~~~~~~
   mm/kasan/kasan.c: In function 'kasan_slab_free':
>> mm/kasan/kasan.c:561:4: warning: 'return' with no value, in function returning non-void [-Wreturn-type]
       return;
       ^~~~~~
   mm/kasan/kasan.c:547:6: note: declared here
    bool kasan_slab_free(struct kmem_cache *cache, void *object)
         ^~~~~~~~~~~~~~~

vim +/orig_size +374 mm/kasan/kasan.c

   368		return rz;
   369	}
   370	
   371	void kasan_cache_create(struct kmem_cache *cache, size_t *size,
   372				unsigned long *flags)
   373	{
 > 374		int redzone_adjust, orig_size = *size;
   375	
   376	#ifdef CONFIG_SLAB
   377		/*
   378		 * Make sure the adjusted size is still less than
   379		 * KMALLOC_MAX_CACHE_SIZE, i.e. we don't use the page allocator.
   380		 */
   381	
   382		if (*size > KMALLOC_MAX_CACHE_SIZE -
   383		    sizeof(struct kasan_alloc_meta) -
   384		    sizeof(struct kasan_free_meta))
   385			return;
   386	#endif
   387		*flags |= SLAB_KASAN;
   388	
   389		/* Add alloc meta. */
   390		cache->kasan_info.alloc_meta_offset = *size;
   391		*size += sizeof(struct kasan_alloc_meta);
   392	
   393		/* Add free meta. */
   394		if (cache->flags & SLAB_DESTROY_BY_RCU || cache->ctor ||
   395		    cache->object_size < sizeof(struct kasan_free_meta)) {
   396			cache->kasan_info.free_meta_offset = *size;
   397			*size += sizeof(struct kasan_free_meta);
   398		} else {
   399			cache->kasan_info.free_meta_offset = 0;
   400		}
   401		redzone_adjust = optimal_redzone(cache->object_size) -
   402			(*size - cache->object_size);
   403	
   404		if (redzone_adjust > 0)
   405			*size += redzone_adjust;
   406	
   407	#ifdef CONFIG_SLAB
   408		*size = min(KMALLOC_MAX_CACHE_SIZE,
   409			    max(*size,
   410				cache->object_size +
   411				optimal_redzone(cache->object_size)));
   412		/*
   413		 * If the metadata doesn't fit, disable KASAN at all.
   414		 */
   415		if (*size <= cache->kasan_info.alloc_meta_offset ||
   416				*size <= cache->kasan_info.free_meta_offset) {
   417			*flags &= ~SLAB_KASAN;
   418			*size = orig_size;
   419			cache->kasan_info.alloc_meta_offset = -1;
   420			cache->kasan_info.free_meta_offset = -1;
   421		}
   422	#else
   423		*size = max(*size,
   424				cache->object_size +
   425				optimal_redzone(cache->object_size));
   426	
   427	#endif
   428	}
   429	
   430	void kasan_cache_shrink(struct kmem_cache *cache)
   431	{
   432		quarantine_remove_cache(cache);
   433	}
   434	
   435	void kasan_cache_destroy(struct kmem_cache *cache)
   436	{
   437		quarantine_remove_cache(cache);
   438	}
   439	
   440	void kasan_poison_slab(struct page *page)
   441	{
   442		kasan_poison_shadow(page_address(page),
   443				PAGE_SIZE << compound_order(page),
   444				KASAN_KMALLOC_REDZONE);
   445	}
   446	
   447	void kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
   448	{
   449		kasan_unpoison_shadow(object, cache->object_size);
   450	}
   451	
   452	void kasan_poison_object_data(struct kmem_cache *cache, void *object)
   453	{
   454		kasan_poison_shadow(object,
   455				round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE),
   456				KASAN_KMALLOC_REDZONE);
   457		if (cache->flags & SLAB_KASAN) {
   458			struct kasan_alloc_meta *alloc_info =
   459				get_alloc_info(cache, object);
   460			if (alloc_info)
   461				alloc_info->state = KASAN_STATE_INIT;
   462		}
   463	}
   464	
   465	static inline int in_irqentry_text(unsigned long ptr)
   466	{
   467		return (ptr >= (unsigned long)&__irqentry_text_start &&
   468			ptr < (unsigned long)&__irqentry_text_end) ||
   469			(ptr >= (unsigned long)&__softirqentry_text_start &&
   470			 ptr < (unsigned long)&__softirqentry_text_end);
   471	}
   472	
   473	static inline void filter_irq_stacks(struct stack_trace *trace)
   474	{
   475		int i;
   476	
   477		if (!trace->nr_entries)
   478			return;
   479		for (i = 0; i < trace->nr_entries; i++)
   480			if (in_irqentry_text(trace->entries[i])) {
   481				/* Include the irqentry function into the stack. */
   482				trace->nr_entries = i + 1;
   483				break;
   484			}
   485	}
   486	
   487	static inline depot_stack_handle_t save_stack(gfp_t flags)
   488	{
   489		unsigned long entries[KASAN_STACK_DEPTH];
   490		struct stack_trace trace = {
   491			.nr_entries = 0,
   492			.entries = entries,
   493			.max_entries = KASAN_STACK_DEPTH,
   494			.skip = 0
   495		};
   496	
   497		save_stack_trace(&trace);
   498		filter_irq_stacks(&trace);
   499		if (trace.nr_entries != 0 &&
   500		    trace.entries[trace.nr_entries-1] == ULONG_MAX)
   501			trace.nr_entries--;
   502	
   503		return depot_save_stack(&trace, flags);
   504	}
   505	
   506	static inline void set_track(struct kasan_track *track, gfp_t flags)
   507	{
   508		track->pid = current->pid;
   509		track->stack = save_stack(flags);
   510	}
   511	
   512	struct kasan_alloc_meta *get_alloc_info(struct kmem_cache *cache,
   513						const void *object)
   514	{
   515		BUILD_BUG_ON(sizeof(struct kasan_alloc_meta) > 32);
   516		if (cache->kasan_info.alloc_meta_offset == -1)
   517			return NULL;
   518		return (void *)object + cache->kasan_info.alloc_meta_offset;
   519	}
   520	
   521	struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
   522					      const void *object)
   523	{
   524		BUILD_BUG_ON(sizeof(struct kasan_free_meta) > 32);
   525		if (cache->kasan_info.free_meta_offset == -1)
   526			return NULL;
   527		return (void *)object + cache->kasan_info.free_meta_offset;
   528	}
   529	
   530	void kasan_slab_alloc(struct kmem_cache *cache, void *object, gfp_t flags)
   531	{
   532		kasan_kmalloc(cache, object, cache->object_size, flags);
   533	}
   534	
   535	void kasan_poison_slab_free(struct kmem_cache *cache, void *object)
   536	{
   537		unsigned long size = cache->object_size;
   538		unsigned long rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
   539	
   540		/* RCU slabs could be legally used after free within the RCU period */
   541		if (unlikely(cache->flags & SLAB_DESTROY_BY_RCU))
   542			return;
   543	
   544		kasan_poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
   545	}
   546	
   547	bool kasan_slab_free(struct kmem_cache *cache, void *object)
   548	{
   549		/* RCU slabs could be legally used after free within the RCU period */
   550		if (unlikely(cache->flags & SLAB_DESTROY_BY_RCU))
   551			return false;
   552	
   553		if (likely(cache->flags & SLAB_KASAN)) {
   554			struct kasan_alloc_meta *alloc_info =
   555				get_alloc_info(cache, object);
   556			struct kasan_free_meta *free_info =
   557				get_free_info(cache, object);
   558			WARN_ON(!alloc_info);
   559			WARN_ON(!free_info);
   560			if (!alloc_info || !free_info)
 > 561				return;
   562			switch (alloc_info->state) {
   563			case KASAN_STATE_ALLOC:
   564				alloc_info->state = KASAN_STATE_QUARANTINE;
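
Both warnings point at mechanical issues in the new code: kasan_slab_free()
is declared bool but has a bare 'return', and 'orig_size' is only read inside
the #ifdef CONFIG_SLAB block, so it is unused on SLUB configs like this one.
A possible fixup (sketch only, not the actual follow-up patch):

    bool kasan_slab_free(struct kmem_cache *cache, void *object)
    {
            /* ... */
            if (!alloc_info || !free_info)
                    return false;   /* was a bare 'return' */
            /* ... */
    }

    void kasan_cache_create(struct kmem_cache *cache, size_t *size,
                            unsigned long *flags)
    {
            int redzone_adjust;
            /* only used when CONFIG_SLAB restores the original size */
            int orig_size __maybe_unused = *size;
            /* ... */
    }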

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/octet-stream, Size: 54758 bytes --]

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH] mm, kasan: switch SLUB to stackdepot, enable memory quarantine for SLUB
  2016-06-08 18:40 [PATCH] mm, kasan: switch SLUB to stackdepot, enable memory quarantine for SLUB Alexander Potapenko
  2016-06-08 18:08 ` kbuild test robot
  2016-06-08 19:02 ` kbuild test robot
@ 2016-06-08 19:23 ` kbuild test robot
  2016-06-09 16:45 ` Andrey Ryabinin
  3 siblings, 0 replies; 8+ messages in thread
From: kbuild test robot @ 2016-06-08 19:23 UTC (permalink / raw)
  To: Alexander Potapenko
  Cc: kbuild-all, adech.fo, cl, dvyukov, akpm, rostedt, iamjoonsoo.kim,
	js1304, kcc, aryabinin, kuthonuzo.luruo, kasan-dev, linux-mm,
	linux-kernel

[-- Attachment #1: Type: text/plain, Size: 1865 bytes --]

Hi,

[auto build test ERROR on v4.7-rc2]
[cannot apply to next-20160608]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Alexander-Potapenko/mm-kasan-switch-SLUB-to-stackdepot-enable-memory-quarantine-for-SLUB/20160609-024216
config: parisc-c3000_defconfig (attached as .config)
compiler: hppa-linux-gnu-gcc (Debian 5.3.1-8) 5.3.1 20160205
reproduce:
        wget https://git.kernel.org/cgit/linux/kernel/git/wfg/lkp-tests.git/plain/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        make.cross ARCH=parisc 

All errors (new ones prefixed by >>):

   mm/slub.c: In function 'calculate_sizes':
>> mm/slub.c:3357:24: error: passing argument 2 of 'kasan_cache_create' from incompatible pointer type [-Werror=incompatible-pointer-types]
     kasan_cache_create(s, &size, &s->flags);
                           ^
   In file included from include/linux/slab.h:127:0,
                    from mm/slub.c:18:
   include/linux/kasan.h:91:20: note: expected 'size_t * {aka unsigned int *}' but argument is of type 'long unsigned int *'
    static inline void kasan_cache_create(struct kmem_cache *cache,
                       ^
   cc1: some warnings being treated as errors

vim +/kasan_cache_create +3357 mm/slub.c

  3351			s->red_left_pad = sizeof(void *);
  3352			s->red_left_pad = ALIGN(s->red_left_pad, s->align);
  3353			size += s->red_left_pad;
  3354		}
  3355	#endif
  3356	
> 3357		kasan_cache_create(s, &size, &s->flags);
  3358	
  3359		/*
  3360		 * SLUB stores one object immediately after another beginning from

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/octet-stream, Size: 6322 bytes --]

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH] mm, kasan: switch SLUB to stackdepot, enable memory quarantine for SLUB
  2016-06-08 18:40 [PATCH] mm, kasan: switch SLUB to stackdepot, enable memory quarantine for SLUB Alexander Potapenko
                   ` (2 preceding siblings ...)
  2016-06-08 19:23 ` kbuild test robot
@ 2016-06-09 16:45 ` Andrey Ryabinin
  2016-06-09 18:22   ` Alexander Potapenko
  3 siblings, 1 reply; 8+ messages in thread
From: Andrey Ryabinin @ 2016-06-09 16:45 UTC (permalink / raw)
  To: Alexander Potapenko, adech.fo, cl, dvyukov, akpm, rostedt,
	iamjoonsoo.kim, js1304, kcc, kuthonuzo.luruo
  Cc: kasan-dev, linux-mm, linux-kernel



On 06/08/2016 09:40 PM, Alexander Potapenko wrote:
> For KASAN builds:
>  - switch SLUB allocator to using stackdepot instead of storing the
>    allocation/deallocation stacks in the objects;
>  - define SLAB_RED_ZONE, SLAB_POISON, SLAB_STORE_USER to zero,
>    effectively disabling these debug features, as they're redundant in
>    the presence of KASAN;

Instead of having duplicated functionality, I think it might be better to
switch SLAB_STORE_USER to stackdepot, because right now we have two piles of
code which do basically the same thing, but do it differently.
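
Roughly what I have in mind -- let SLUB's struct track just hold a stackdepot
handle, the way mm/kasan/kasan.c already does in save_stack()/set_track().
This is only a sketch of the direction, not a tested patch; the field layout
is made up and the real set_track() has no gfp context, so the GFP_NOWAIT
below would need more thought:

    #include <linux/stackdepot.h>
    #include <linux/stacktrace.h>

    struct track {
            unsigned long addr;             /* called from */
            depot_stack_handle_t handle;    /* replaces addrs[TRACK_ADDRS_COUNT] */
            int cpu;
            int pid;
            unsigned long when;
    };

    static void set_track(struct kmem_cache *s, void *object,
                          enum track_item alloc, unsigned long addr)
    {
            struct track *p = get_track(s, object, alloc);
            unsigned long entries[TRACK_ADDRS_COUNT];
            struct stack_trace trace = {
                    .entries = entries,
                    .max_entries = TRACK_ADDRS_COUNT,
                    .skip = 3,
            };

            save_stack_trace(&trace);
            p->handle = depot_save_stack(&trace, GFP_NOWAIT);
            p->addr = addr;
            p->cpu = smp_processor_id();
            p->pid = current->pid;
            p->when = jiffies;
    }

print_track() would then fetch the trace back with depot_fetch_stack() instead
of walking addrs[].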

>  - refactor the slab freelist hook, put freed memory into the quarantine.
> 

What you did with slab_freelist_hook() is not refactoring, it's an obfuscation.


>  }
>  
> -#ifdef CONFIG_SLAB
>  /*
>   * Adaptive redzone policy taken from the userspace AddressSanitizer runtime.
>   * For larger allocations larger redzones are used.
> @@ -372,17 +371,21 @@ static size_t optimal_redzone(size_t object_size)
>  void kasan_cache_create(struct kmem_cache *cache, size_t *size,
>  			unsigned long *flags)
>  {
> -	int redzone_adjust;
> -	/* Make sure the adjusted size is still less than
> -	 * KMALLOC_MAX_CACHE_SIZE.
> -	 * TODO: this check is only useful for SLAB, but not SLUB. We'll need
> -	 * to skip it for SLUB when it starts using kasan_cache_create().
> +	int redzone_adjust, orig_size = *size;
> +
> +#ifdef CONFIG_SLAB
> +	/*
> +	 * Make sure the adjusted size is still less than
> +	 * KMALLOC_MAX_CACHE_SIZE, i.e. we don't use the page allocator.
>  	 */
> +
>  	if (*size > KMALLOC_MAX_CACHE_SIZE -

This is wrong. You probably wanted KMALLOC_MAX_SIZE here.

However, we should get rid of SLAB_KASAN altogether. It's absolutely useless, and only complicates
the code. And if we don't fit in KMALLOC_MAX_SIZE, just don't create the cache.

>  	    sizeof(struct kasan_alloc_meta) -
>  	    sizeof(struct kasan_free_meta))
>  		return;
> +#endif
>  	*flags |= SLAB_KASAN;
> +
>  	/* Add alloc meta. */
>  	cache->kasan_info.alloc_meta_offset = *size;
>  	*size += sizeof(struct kasan_alloc_meta);
> @@ -392,17 +395,37 @@ void kasan_cache_create(struct kmem_cache *cache, size_t *size,
>  	    cache->object_size < sizeof(struct kasan_free_meta)) {
>  		cache->kasan_info.free_meta_offset = *size;
>  		*size += sizeof(struct kasan_free_meta);
> +	} else {
> +		cache->kasan_info.free_meta_offset = 0;
>  	}
>  	redzone_adjust = optimal_redzone(cache->object_size) -
>  		(*size - cache->object_size);
> +
>  	if (redzone_adjust > 0)
>  		*size += redzone_adjust;
> +
> +#ifdef CONFIG_SLAB
>  	*size = min(KMALLOC_MAX_CACHE_SIZE,
>  		    max(*size,
>  			cache->object_size +
>  			optimal_redzone(cache->object_size)));
> -}
> +	/*
> +	 * If the metadata doesn't fit, disable KASAN at all.
> +	 */
> +	if (*size <= cache->kasan_info.alloc_meta_offset ||
> +			*size <= cache->kasan_info.free_meta_offset) {
> +		*flags &= ~SLAB_KASAN;
> +		*size = orig_size;
> +		cache->kasan_info.alloc_meta_offset = -1;
> +		cache->kasan_info.free_meta_offset = -1;
> +	}
> +#else
> +	*size = max(*size,
> +			cache->object_size +
> +			optimal_redzone(cache->object_size));
> +
>  #endif
> +}
>  
>  void kasan_cache_shrink(struct kmem_cache *cache)
>  {
> @@ -431,16 +454,14 @@ void kasan_poison_object_data(struct kmem_cache *cache, void *object)
>  	kasan_poison_shadow(object,
>  			round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE),
>  			KASAN_KMALLOC_REDZONE);
> -#ifdef CONFIG_SLAB
>  	if (cache->flags & SLAB_KASAN) {
>  		struct kasan_alloc_meta *alloc_info =
>  			get_alloc_info(cache, object);
> -		alloc_info->state = KASAN_STATE_INIT;
> +		if (alloc_info)

If I read the code right, alloc_info can be NULL only if SLAB_KASAN is not
set, so this check is redundant.


> +			alloc_info->state = KASAN_STATE_INIT;
>  	}
> -#endif
>  }
>  
> -#ifdef CONFIG_SLAB
>  static inline int in_irqentry_text(unsigned long ptr)
>  {
>  	return (ptr >= (unsigned long)&__irqentry_text_start &&
> @@ -492,6 +513,8 @@ struct kasan_alloc_meta *get_alloc_info(struct kmem_cache *cache,
>  					const void *object)
>  {
>  	BUILD_BUG_ON(sizeof(struct kasan_alloc_meta) > 32);
> +	if (cache->kasan_info.alloc_meta_offset == -1)
> +		return NULL;

What's the point of this? This should always be false.

>  	return (void *)object + cache->kasan_info.alloc_meta_offset;
>  }
>  
> @@ -499,9 +522,10 @@ struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
>  				      const void *object)
>  {
>  	BUILD_BUG_ON(sizeof(struct kasan_free_meta) > 32);
> +	if (cache->kasan_info.free_meta_offset == -1)
> +		return NULL;
>  	return (void *)object + cache->kasan_info.free_meta_offset;
>  }
> -#endif
>  
>  void kasan_slab_alloc(struct kmem_cache *cache, void *object, gfp_t flags)
>  {
> @@ -522,7 +546,6 @@ void kasan_poison_slab_free(struct kmem_cache *cache, void *object)
>  
>  bool kasan_slab_free(struct kmem_cache *cache, void *object)
>  {
> -#ifdef CONFIG_SLAB
>  	/* RCU slabs could be legally used after free within the RCU period */
>  	if (unlikely(cache->flags & SLAB_DESTROY_BY_RCU))
>  		return false;
> @@ -532,7 +555,10 @@ bool kasan_slab_free(struct kmem_cache *cache, void *object)
>  			get_alloc_info(cache, object);
>  		struct kasan_free_meta *free_info =
>  			get_free_info(cache, object);
> -
> +		WARN_ON(!alloc_info);
> +		WARN_ON(!free_info);
> +		if (!alloc_info || !free_info)
> +			return;

Again, never possible.


>  		switch (alloc_info->state) {
>  		case KASAN_STATE_ALLOC:
>  			alloc_info->state = KASAN_STATE_QUARANTINE;
> @@ -550,10 +576,6 @@ bool kasan_slab_free(struct kmem_cache *cache, void *object)
>  		}
>  	}
>  	return false;
> -#else
> -	kasan_poison_slab_free(cache, object);
> -	return false;
> -#endif
>  }
>  
>  void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
> @@ -568,24 +590,29 @@ void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
>  	if (unlikely(object == NULL))
>  		return;
>  
> +	if (!(cache->flags & SLAB_KASAN))
> +		return;
> +
>  	redzone_start = round_up((unsigned long)(object + size),
>  				KASAN_SHADOW_SCALE_SIZE);
>  	redzone_end = round_up((unsigned long)object + cache->object_size,
>  				KASAN_SHADOW_SCALE_SIZE);
>  
>  	kasan_unpoison_shadow(object, size);
> +	WARN_ON(redzone_start > redzone_end);
> +	if (redzone_start > redzone_end)

How can that happen?

> +		return;
>  	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
>  		KASAN_KMALLOC_REDZONE);
> -#ifdef CONFIG_SLAB
>  	if (cache->flags & SLAB_KASAN) {
>  		struct kasan_alloc_meta *alloc_info =
>  			get_alloc_info(cache, object);
> -
> -		alloc_info->state = KASAN_STATE_ALLOC;
> -		alloc_info->alloc_size = size;
> -		set_track(&alloc_info->track, flags);
> +		if (alloc_info) {

And again...


> +			alloc_info->state = KASAN_STATE_ALLOC;
> +			alloc_info->alloc_size = size;
> +			set_track(&alloc_info->track, flags);
> +		}
>  	}
> -#endif
>  }
>  EXPORT_SYMBOL(kasan_kmalloc);
>  


[..]

> diff --git a/mm/slab.h b/mm/slab.h
> index dedb1a9..fde1fea 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -366,6 +366,10 @@ static inline size_t slab_ksize(const struct kmem_cache *s)
>  	if (s->flags & (SLAB_RED_ZONE | SLAB_POISON))
>  		return s->object_size;
>  # endif
> +# ifdef CONFIG_KASAN

Gosh, you love ifdefs, don't you? Hint: it's redundant here.

> +	if (s->flags & SLAB_KASAN)
> +		return s->object_size;
> +# endif
>  	/*
>  	 * If we have the need to store the freelist pointer
...

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH] mm, kasan: switch SLUB to stackdepot, enable memory quarantine for SLUB
  2016-06-09 16:45 ` Andrey Ryabinin
@ 2016-06-09 18:22   ` Alexander Potapenko
  2016-06-15 13:37     ` Alexander Potapenko
  2016-06-15 15:29     ` Alexander Potapenko
  0 siblings, 2 replies; 8+ messages in thread
From: Alexander Potapenko @ 2016-06-09 18:22 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Andrey Konovalov, Christoph Lameter, Dmitriy Vyukov,
	Andrew Morton, Steven Rostedt, Joonsoo Kim, Joonsoo Kim,
	Kostya Serebryany, Kuthonuzo Luruo, kasan-dev,
	Linux Memory Management List, LKML

On Thu, Jun 9, 2016 at 6:45 PM, Andrey Ryabinin <aryabinin@virtuozzo.com> wrote:
>
>
> On 06/08/2016 09:40 PM, Alexander Potapenko wrote:
>> For KASAN builds:
>>  - switch SLUB allocator to using stackdepot instead of storing the
>>    allocation/deallocation stacks in the objects;
>>  - define SLAB_RED_ZONE, SLAB_POISON, SLAB_STORE_USER to zero,
>>    effectively disabling these debug features, as they're redundant in
>>    the presence of KASAN;
>
> Instead of having duplicated functionality, I think it might be better to
> switch SLAB_STORE_USER to stackdepot, because right now we have two piles of
> code which do basically the same thing, but do it differently.
Fine, I'll try that out.
>>  - refactor the slab freelist hook, put freed memory into the quarantine.
>>
>
> What you did with slab_freelist_hook() is not refactoring, it's an obfuscation.
Whatever you call it.
The problem is that if a list of heterogeneous objects is passed into
slab_free_freelist_hook(), some of them may end up in the quarantine,
while others will not.
Therefore we need to filter that list and remove the objects that
don't need to be freed from it.
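
To spell out the resulting flow (simplified; the function names are the ones
from the patch, the rest is paraphrased):

    slab_free(s, page, head, tail, cnt, addr)
      -> slab_free_freelist_hook(s, &head, &tail, &cnt)
           kasan_slab_free() returns true for every object it moves into the
           quarantine; such objects are unlinked from the detached freelist
           and cnt is decremented
      -> if cnt == 0: return              (everything went into the quarantine)
      -> do_slab_free(s, page, head, tail, cnt, addr)

    later, when the quarantine is shrunk or the cache is destroyed:
    qlink_free(qlink, cache)
      -> do_slab_free(cache, virt_to_head_page(object), object, NULL, 1,
                      _RET_IP_)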
>
>>  }
>>
>> -#ifdef CONFIG_SLAB
>>  /*
>>   * Adaptive redzone policy taken from the userspace AddressSanitizer runtime.
>>   * For larger allocations larger redzones are used.
>> @@ -372,17 +371,21 @@ static size_t optimal_redzone(size_t object_size)
>>  void kasan_cache_create(struct kmem_cache *cache, size_t *size,
>>                       unsigned long *flags)
>>  {
>> -     int redzone_adjust;
>> -     /* Make sure the adjusted size is still less than
>> -      * KMALLOC_MAX_CACHE_SIZE.
>> -      * TODO: this check is only useful for SLAB, but not SLUB. We'll need
>> -      * to skip it for SLUB when it starts using kasan_cache_create().
>> +     int redzone_adjust, orig_size = *size;
>> +
>> +#ifdef CONFIG_SLAB
>> +     /*
>> +      * Make sure the adjusted size is still less than
>> +      * KMALLOC_MAX_CACHE_SIZE, i.e. we don't use the page allocator.
>>        */
>> +
>>       if (*size > KMALLOC_MAX_CACHE_SIZE -
>
> This is wrong. You probably wanted KMALLOC_MAX_SIZE here.
Yeah, sounds right.
> However, we should get rid of SLAB_KASAN altogether. It's absolutely useless, and only complicates
> the code. And if we don't fit in KMALLOC_MAX_SIZE, just don't create the cache.
Thanks, I'll look into this. Looks like you are right: once we remove
this check, every existing cache will have SLAB_KASAN set.
It's handy for debugging, but not really needed.
>
>>           sizeof(struct kasan_alloc_meta) -
>>           sizeof(struct kasan_free_meta))
>>               return;
>> +#endif
>>       *flags |= SLAB_KASAN;
>> +
>>       /* Add alloc meta. */
>>       cache->kasan_info.alloc_meta_offset = *size;
>>       *size += sizeof(struct kasan_alloc_meta);
>> @@ -392,17 +395,37 @@ void kasan_cache_create(struct kmem_cache *cache, size_t *size,
>>           cache->object_size < sizeof(struct kasan_free_meta)) {
>>               cache->kasan_info.free_meta_offset = *size;
>>               *size += sizeof(struct kasan_free_meta);
>> +     } else {
>> +             cache->kasan_info.free_meta_offset = 0;
>>       }
>>       redzone_adjust = optimal_redzone(cache->object_size) -
>>               (*size - cache->object_size);
>> +
>>       if (redzone_adjust > 0)
>>               *size += redzone_adjust;
>> +
>> +#ifdef CONFIG_SLAB
>>       *size = min(KMALLOC_MAX_CACHE_SIZE,
>>                   max(*size,
>>                       cache->object_size +
>>                       optimal_redzone(cache->object_size)));
>> -}
>> +     /*
>> +      * If the metadata doesn't fit, disable KASAN at all.
>> +      */
>> +     if (*size <= cache->kasan_info.alloc_meta_offset ||
>> +                     *size <= cache->kasan_info.free_meta_offset) {
>> +             *flags &= ~SLAB_KASAN;
>> +             *size = orig_size;
>> +             cache->kasan_info.alloc_meta_offset = -1;
>> +             cache->kasan_info.free_meta_offset = -1;
>> +     }
>> +#else
>> +     *size = max(*size,
>> +                     cache->object_size +
>> +                     optimal_redzone(cache->object_size));
>> +
>>  #endif
>> +}
>>
>>  void kasan_cache_shrink(struct kmem_cache *cache)
>>  {
>> @@ -431,16 +454,14 @@ void kasan_poison_object_data(struct kmem_cache *cache, void *object)
>>       kasan_poison_shadow(object,
>>                       round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE),
>>                       KASAN_KMALLOC_REDZONE);
>> -#ifdef CONFIG_SLAB
>>       if (cache->flags & SLAB_KASAN) {
>>               struct kasan_alloc_meta *alloc_info =
>>                       get_alloc_info(cache, object);
>> -             alloc_info->state = KASAN_STATE_INIT;
>> +             if (alloc_info)
>
> If I read the code right, alloc_info can be NULL only if SLAB_KASAN is not
> set, so this check is redundant.
This has been left over from tracking down some nasty bugs, but, yes,
we can assume alloc_info and free_info are always valid.
>
>> +                     alloc_info->state = KASAN_STATE_INIT;
>>       }
>> -#endif
>>  }
>>
>> -#ifdef CONFIG_SLAB
>>  static inline int in_irqentry_text(unsigned long ptr)
>>  {
>>       return (ptr >= (unsigned long)&__irqentry_text_start &&
>> @@ -492,6 +513,8 @@ struct kasan_alloc_meta *get_alloc_info(struct kmem_cache *cache,
>>                                       const void *object)
>>  {
>>       BUILD_BUG_ON(sizeof(struct kasan_alloc_meta) > 32);
>> +     if (cache->kasan_info.alloc_meta_offset == -1)
>> +             return NULL;
>
> What's the point of this? This should always be false.
Agreed, will remove this (and other similar cases).
>>       return (void *)object + cache->kasan_info.alloc_meta_offset;
>>  }
>>
>> @@ -499,9 +522,10 @@ struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
>>                                     const void *object)
>>  {
>>       BUILD_BUG_ON(sizeof(struct kasan_free_meta) > 32);
>> +     if (cache->kasan_info.free_meta_offset == -1)
>> +             return NULL;
>>       return (void *)object + cache->kasan_info.free_meta_offset;
>>  }
>> -#endif
>>
>>  void kasan_slab_alloc(struct kmem_cache *cache, void *object, gfp_t flags)
>>  {
>> @@ -522,7 +546,6 @@ void kasan_poison_slab_free(struct kmem_cache *cache, void *object)
>>
>>  bool kasan_slab_free(struct kmem_cache *cache, void *object)
>>  {
>> -#ifdef CONFIG_SLAB
>>       /* RCU slabs could be legally used after free within the RCU period */
>>       if (unlikely(cache->flags & SLAB_DESTROY_BY_RCU))
>>               return false;
>> @@ -532,7 +555,10 @@ bool kasan_slab_free(struct kmem_cache *cache, void *object)
>>                       get_alloc_info(cache, object);
>>               struct kasan_free_meta *free_info =
>>                       get_free_info(cache, object);
>> -
>> +             WARN_ON(!alloc_info);
>> +             WARN_ON(!free_info);
>> +             if (!alloc_info || !free_info)
>> +                     return;
>
> Again, never possible.
>
>
>>               switch (alloc_info->state) {
>>               case KASAN_STATE_ALLOC:
>>                       alloc_info->state = KASAN_STATE_QUARANTINE;
>> @@ -550,10 +576,6 @@ bool kasan_slab_free(struct kmem_cache *cache, void *object)
>>               }
>>       }
>>       return false;
>> -#else
>> -     kasan_poison_slab_free(cache, object);
>> -     return false;
>> -#endif
>>  }
>>
>>  void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
>> @@ -568,24 +590,29 @@ void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
>>       if (unlikely(object == NULL))
>>               return;
>>
>> +     if (!(cache->flags & SLAB_KASAN))
>> +             return;
>> +
>>       redzone_start = round_up((unsigned long)(object + size),
>>                               KASAN_SHADOW_SCALE_SIZE);
>>       redzone_end = round_up((unsigned long)object + cache->object_size,
>>                               KASAN_SHADOW_SCALE_SIZE);
>>
>>       kasan_unpoison_shadow(object, size);
>> +     WARN_ON(redzone_start > redzone_end);
>> +     if (redzone_start > redzone_end)
>
> How can that happen?
This was possible because of an incorrect ksize() implementation; it should
be fine now. Removed.
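(For the record: slab_ksize() could return s->size -- which now includes the
KASAN metadata -- rather than object_size, so a size bigger than object_size
could reach kasan_kmalloc() via ksize()/kasan_krealloc(); then
round_up(object + size) lands past round_up(object + object_size). With
slab_ksize() returning object_size for SLAB_KASAN caches that cannot happen
anymore.)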
>> +             return;
>>       kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
>>               KASAN_KMALLOC_REDZONE);
>> -#ifdef CONFIG_SLAB
>>       if (cache->flags & SLAB_KASAN) {
>>               struct kasan_alloc_meta *alloc_info =
>>                       get_alloc_info(cache, object);
>> -
>> -             alloc_info->state = KASAN_STATE_ALLOC;
>> -             alloc_info->alloc_size = size;
>> -             set_track(&alloc_info->track, flags);
>> +             if (alloc_info) {
>
> And again...
>
>
>> +                     alloc_info->state = KASAN_STATE_ALLOC;
>> +                     alloc_info->alloc_size = size;
>> +                     set_track(&alloc_info->track, flags);
>> +             }
>>       }
>> -#endif
>>  }
>>  EXPORT_SYMBOL(kasan_kmalloc);
>>
>
>
> [..]
>
>> diff --git a/mm/slab.h b/mm/slab.h
>> index dedb1a9..fde1fea 100644
>> --- a/mm/slab.h
>> +++ b/mm/slab.h
>> @@ -366,6 +366,10 @@ static inline size_t slab_ksize(const struct kmem_cache *s)
>>       if (s->flags & (SLAB_RED_ZONE | SLAB_POISON))
>>               return s->object_size;
>>  # endif
>> +# ifdef CONFIG_KASAN
>
> Gosh, you love ifdefs, don't you? Hint: it's redundant here.
>
>> +     if (s->flags & SLAB_KASAN)
>> +             return s->object_size;
>> +# endif
>>       /*
>>        * If we have the need to store the freelist pointer
> ...



-- 
Alexander Potapenko
Software Engineer

Google Germany GmbH
Erika-Mann-Straße, 33
80636 München

Geschäftsführer: Matthew Scott Sucherman, Paul Terence Manicle
Registergericht und -nummer: Hamburg, HRB 86891
Sitz der Gesellschaft: Hamburg

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH] mm, kasan: switch SLUB to stackdepot, enable memory quarantine for SLUB
  2016-06-09 18:22   ` Alexander Potapenko
@ 2016-06-15 13:37     ` Alexander Potapenko
  2016-06-15 15:29     ` Alexander Potapenko
  1 sibling, 0 replies; 8+ messages in thread
From: Alexander Potapenko @ 2016-06-15 13:37 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Andrey Konovalov, Christoph Lameter, Dmitriy Vyukov,
	Andrew Morton, Steven Rostedt, Joonsoo Kim, Joonsoo Kim,
	Kostya Serebryany, Kuthonuzo Luruo, kasan-dev,
	Linux Memory Management List, LKML

On Thu, Jun 9, 2016 at 8:22 PM, Alexander Potapenko <glider@google.com> wrote:
> On Thu, Jun 9, 2016 at 6:45 PM, Andrey Ryabinin <aryabinin@virtuozzo.com> wrote:
>>
>>
>> On 06/08/2016 09:40 PM, Alexander Potapenko wrote:
>>> For KASAN builds:
>>>  - switch SLUB allocator to using stackdepot instead of storing the
>>>    allocation/deallocation stacks in the objects;
>>>  - define SLAB_RED_ZONE, SLAB_POISON, SLAB_STORE_USER to zero,
>>>    effectively disabling these debug features, as they're redundant in
>>>    the presence of KASAN;
>>
>> Instead of having duplicated functionality, I think it might be better to
>> switch SLAB_STORE_USER to stackdepot, because right now we have two piles of
>> code which do basically the same thing, but do it differently.
> Fine, I'll try that out.
>>>  - refactor the slab freelist hook, put freed memory into the quarantine.
>>>
>>
>> What you did with slab_freelist_hook() is not refactoring, it's an obfuscation.
> Whatever you call it.
> The problem is that if a list of heterogeneous objects is passed into
> slab_free_freelist_hook(), some of them may end up in the quarantine,
> while others will not.
> Therefore we need to filter that list and remove the objects that
> don't need to be freed from it.
>>
>>>  }
>>>
>>> -#ifdef CONFIG_SLAB
>>>  /*
>>>   * Adaptive redzone policy taken from the userspace AddressSanitizer runtime.
>>>   * For larger allocations larger redzones are used.
>>> @@ -372,17 +371,21 @@ static size_t optimal_redzone(size_t object_size)
>>>  void kasan_cache_create(struct kmem_cache *cache, size_t *size,
>>>                       unsigned long *flags)
>>>  {
>>> -     int redzone_adjust;
>>> -     /* Make sure the adjusted size is still less than
>>> -      * KMALLOC_MAX_CACHE_SIZE.
>>> -      * TODO: this check is only useful for SLAB, but not SLUB. We'll need
>>> -      * to skip it for SLUB when it starts using kasan_cache_create().
>>> +     int redzone_adjust, orig_size = *size;
>>> +
>>> +#ifdef CONFIG_SLAB
>>> +     /*
>>> +      * Make sure the adjusted size is still less than
>>> +      * KMALLOC_MAX_CACHE_SIZE, i.e. we don't use the page allocator.
>>>        */
>>> +
>>>       if (*size > KMALLOC_MAX_CACHE_SIZE -
>>
>> This is wrong. You probably wanted KMALLOC_MAX_SIZE here.
> Yeah, sounds right.
>> However, we should get rid of SLAB_KASAN altogether. It's absolutely useless, and only complicates
>> the code. And if we don't fit in KMALLOC_MAX_SIZE, just don't create the cache.
> Thanks, I'll look into this. Looks like you are right: once we remove
> this check, every existing cache will have SLAB_KASAN set.
> It's handy for debugging, but not really needed.
Turns out we cannot just skip cache creation if we don't fit in
KMALLOC_MAX_SIZE, because SLAB, among others, creates kmalloc-4194304.
So we must keep an attribute that denotes that this is a KASAN-enabled
slab (i.e. the SLAB_KASAN flag), or set alloc_meta_offset and
free_meta_offset to -1, which isn't so convenient.
I suggest we stick with SLAB_KASAN.
>>
>>>           sizeof(struct kasan_alloc_meta) -
>>>           sizeof(struct kasan_free_meta))
>>>               return;
>>> +#endif
>>>       *flags |= SLAB_KASAN;
>>> +
>>>       /* Add alloc meta. */
>>>       cache->kasan_info.alloc_meta_offset = *size;
>>>       *size += sizeof(struct kasan_alloc_meta);
>>> @@ -392,17 +395,37 @@ void kasan_cache_create(struct kmem_cache *cache, size_t *size,
>>>           cache->object_size < sizeof(struct kasan_free_meta)) {
>>>               cache->kasan_info.free_meta_offset = *size;
>>>               *size += sizeof(struct kasan_free_meta);
>>> +     } else {
>>> +             cache->kasan_info.free_meta_offset = 0;
>>>       }
>>>       redzone_adjust = optimal_redzone(cache->object_size) -
>>>               (*size - cache->object_size);
>>> +
>>>       if (redzone_adjust > 0)
>>>               *size += redzone_adjust;
>>> +
>>> +#ifdef CONFIG_SLAB
>>>       *size = min(KMALLOC_MAX_CACHE_SIZE,
>>>                   max(*size,
>>>                       cache->object_size +
>>>                       optimal_redzone(cache->object_size)));
>>> -}
>>> +     /*
>>> +      * If the metadata doesn't fit, disable KASAN at all.
>>> +      */
>>> +     if (*size <= cache->kasan_info.alloc_meta_offset ||
>>> +                     *size <= cache->kasan_info.free_meta_offset) {
>>> +             *flags &= ~SLAB_KASAN;
>>> +             *size = orig_size;
>>> +             cache->kasan_info.alloc_meta_offset = -1;
>>> +             cache->kasan_info.free_meta_offset = -1;
>>> +     }
>>> +#else
>>> +     *size = max(*size,
>>> +                     cache->object_size +
>>> +                     optimal_redzone(cache->object_size));
>>> +
>>>  #endif
>>> +}
>>>
>>>  void kasan_cache_shrink(struct kmem_cache *cache)
>>>  {
>>> @@ -431,16 +454,14 @@ void kasan_poison_object_data(struct kmem_cache *cache, void *object)
>>>       kasan_poison_shadow(object,
>>>                       round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE),
>>>                       KASAN_KMALLOC_REDZONE);
>>> -#ifdef CONFIG_SLAB
>>>       if (cache->flags & SLAB_KASAN) {
>>>               struct kasan_alloc_meta *alloc_info =
>>>                       get_alloc_info(cache, object);
>>> -             alloc_info->state = KASAN_STATE_INIT;
>>> +             if (alloc_info)
>>
>> If I read the code right, alloc_info can be NULL only if SLAB_KASAN is not
>> set, so this check is redundant.
> This has been left over from tracking down some nasty bugs, but, yes,
> we can assume alloc_info and free_info are always valid.
>>
>>> +                     alloc_info->state = KASAN_STATE_INIT;
>>>       }
>>> -#endif
>>>  }
>>>
>>> -#ifdef CONFIG_SLAB
>>>  static inline int in_irqentry_text(unsigned long ptr)
>>>  {
>>>       return (ptr >= (unsigned long)&__irqentry_text_start &&
>>> @@ -492,6 +513,8 @@ struct kasan_alloc_meta *get_alloc_info(struct kmem_cache *cache,
>>>                                       const void *object)
>>>  {
>>>       BUILD_BUG_ON(sizeof(struct kasan_alloc_meta) > 32);
>>> +     if (cache->kasan_info.alloc_meta_offset == -1)
>>> +             return NULL;
>>
>> What's the point of this? This should always be false.
> Agreed, will remove this (and other similar cases).
>>>       return (void *)object + cache->kasan_info.alloc_meta_offset;
>>>  }
>>>
>>> @@ -499,9 +522,10 @@ struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
>>>                                     const void *object)
>>>  {
>>>       BUILD_BUG_ON(sizeof(struct kasan_free_meta) > 32);
>>> +     if (cache->kasan_info.free_meta_offset == -1)
>>> +             return NULL;
>>>       return (void *)object + cache->kasan_info.free_meta_offset;
>>>  }
>>> -#endif
>>>
>>>  void kasan_slab_alloc(struct kmem_cache *cache, void *object, gfp_t flags)
>>>  {
>>> @@ -522,7 +546,6 @@ void kasan_poison_slab_free(struct kmem_cache *cache, void *object)
>>>
>>>  bool kasan_slab_free(struct kmem_cache *cache, void *object)
>>>  {
>>> -#ifdef CONFIG_SLAB
>>>       /* RCU slabs could be legally used after free within the RCU period */
>>>       if (unlikely(cache->flags & SLAB_DESTROY_BY_RCU))
>>>               return false;
>>> @@ -532,7 +555,10 @@ bool kasan_slab_free(struct kmem_cache *cache, void *object)
>>>                       get_alloc_info(cache, object);
>>>               struct kasan_free_meta *free_info =
>>>                       get_free_info(cache, object);
>>> -
>>> +             WARN_ON(!alloc_info);
>>> +             WARN_ON(!free_info);
>>> +             if (!alloc_info || !free_info)
>>> +                     return;
>>
>> Again, never possible.
>>
>>
>>>               switch (alloc_info->state) {
>>>               case KASAN_STATE_ALLOC:
>>>                       alloc_info->state = KASAN_STATE_QUARANTINE;
>>> @@ -550,10 +576,6 @@ bool kasan_slab_free(struct kmem_cache *cache, void *object)
>>>               }
>>>       }
>>>       return false;
>>> -#else
>>> -     kasan_poison_slab_free(cache, object);
>>> -     return false;
>>> -#endif
>>>  }
>>>
>>>  void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
>>> @@ -568,24 +590,29 @@ void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
>>>       if (unlikely(object == NULL))
>>>               return;
>>>
>>> +     if (!(cache->flags & SLAB_KASAN))
>>> +             return;
>>> +
>>>       redzone_start = round_up((unsigned long)(object + size),
>>>                               KASAN_SHADOW_SCALE_SIZE);
>>>       redzone_end = round_up((unsigned long)object + cache->object_size,
>>>                               KASAN_SHADOW_SCALE_SIZE);
>>>
>>>       kasan_unpoison_shadow(object, size);
>>> +     WARN_ON(redzone_start > redzone_end);
>>> +     if (redzone_start > redzone_end)
>>
>> How can that happen?
> This was possible because of an incorrect ksize() implementation; it
> should be fine now. Removed.
>>> +             return;
>>>       kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
>>>               KASAN_KMALLOC_REDZONE);
>>> -#ifdef CONFIG_SLAB
>>>       if (cache->flags & SLAB_KASAN) {
>>>               struct kasan_alloc_meta *alloc_info =
>>>                       get_alloc_info(cache, object);
>>> -
>>> -             alloc_info->state = KASAN_STATE_ALLOC;
>>> -             alloc_info->alloc_size = size;
>>> -             set_track(&alloc_info->track, flags);
>>> +             if (alloc_info) {
>>
>> And again...
>>
>>
>>> +                     alloc_info->state = KASAN_STATE_ALLOC;
>>> +                     alloc_info->alloc_size = size;
>>> +                     set_track(&alloc_info->track, flags);
>>> +             }
>>>       }
>>> -#endif
>>>  }
>>>  EXPORT_SYMBOL(kasan_kmalloc);
>>>
>>
>>
>> [..]
>>
>>> diff --git a/mm/slab.h b/mm/slab.h
>>> index dedb1a9..fde1fea 100644
>>> --- a/mm/slab.h
>>> +++ b/mm/slab.h
>>> @@ -366,6 +366,10 @@ static inline size_t slab_ksize(const struct kmem_cache *s)
>>>       if (s->flags & (SLAB_RED_ZONE | SLAB_POISON))
>>>               return s->object_size;
>>>  # endif
>>> +# ifdef CONFIG_KASAN
>>
>> Gosh, you love ifdefs, don't you? Hint: it's redundant here.
>>
>>> +     if (s->flags & SLAB_KASAN)
>>> +             return s->object_size;
>>> +# endif
>>>       /*
>>>        * If we have the need to store the freelist pointer
>> ...
>
>
>



-- 
Alexander Potapenko
Software Engineer

Google Germany GmbH
Erika-Mann-Straße, 33
80636 München

Managing Directors: Matthew Scott Sucherman, Paul Terence Manicle
Registration court and number: Hamburg, HRB 86891
Registered office: Hamburg

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH] mm, kasan: switch SLUB to stackdepot, enable memory quarantine for SLUB
  2016-06-09 18:22   ` Alexander Potapenko
  2016-06-15 13:37     ` Alexander Potapenko
@ 2016-06-15 15:29     ` Alexander Potapenko
  1 sibling, 0 replies; 8+ messages in thread
From: Alexander Potapenko @ 2016-06-15 15:29 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Andrey Konovalov, Christoph Lameter, Dmitriy Vyukov,
	Andrew Morton, Steven Rostedt, Joonsoo Kim, Joonsoo Kim,
	Kostya Serebryany, Kuthonuzo Luruo, kasan-dev,
	Linux Memory Management List, LKML

On Thu, Jun 9, 2016 at 8:22 PM, Alexander Potapenko <glider@google.com> wrote:
> On Thu, Jun 9, 2016 at 6:45 PM, Andrey Ryabinin <aryabinin@virtuozzo.com> wrote:
>>
>>
>> On 06/08/2016 09:40 PM, Alexander Potapenko wrote:
>>> For KASAN builds:
>>>  - switch SLUB allocator to using stackdepot instead of storing the
>>>    allocation/deallocation stacks in the objects;
>>>  - define SLAB_RED_ZONE, SLAB_POISON, SLAB_STORE_USER to zero,
>>>    effectively disabling these debug features, as they're redundant in
>>>    the presence of KASAN;
>>
>> Instead of having duplicated functionality, I think it might be better to switch SLAB_STORE_USER to stackdepot.
>> Right now we have two piles of code which do basically the same thing, but
>> do it differently.
> Fine, I'll try that out.
After thinking about it for a while, I agree that switching
SLAB_STORE_USER to stackdepot is worth doing, but it is orthogonal to
this CL. Since it may also affect the SLAB code, I'd prefer to make a
separate change for that.
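
Roughly what I have in mind for that separate change (a sketch only; the
field name and the config symbol are tentative, and set_track() would
then store the stack via stackdepot instead of the open-coded array):

struct track {
        unsigned long addr;             /* called from */
#ifdef CONFIG_STACKDEPOT
        depot_stack_handle_t handle;    /* full stack kept in stackdepot */
#else
        unsigned long addrs[TRACK_ADDRS_COUNT];
#endif
        int cpu;                        /* was running on cpu */
        int pid;                        /* pid context */
        unsigned long when;             /* when did the operation occur */
};
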
>>>  - refactor the slab freelist hook, put freed memory into the quarantine.
>>>
>>
>> What you did with slab_freelist_hook() is not refactoring, it's an obfuscation.
> Whatever you call it.
> The problem is that if a list of heterogeneous objects is passed into
> slab_free_freelist_hook(), some of them may end up in the quarantine,
> while others will not.
> Therefore we need to filter that list and remove from it the objects
> that must not be freed immediately, i.e. the ones that go to the
> quarantine.
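
To make this a bit more concrete, the filtering boils down to roughly the
following (a sketch only: s is the kmem_cache, get_next()/set_next() are
placeholders for the real freelist-pointer helpers, and I'm assuming
kasan_slab_free() returns true when it puts the object into the
quarantine):

void *object = head, *filtered = NULL;

while (object) {
        void *next = get_next(s, object);       /* placeholder helper */

        if (!kasan_slab_free(s, object)) {
                /* Not quarantined: keep the object on the list that
                 * gets freed right away. */
                set_next(s, object, filtered);  /* placeholder helper */
                filtered = object;
        }
        /* else: the quarantine now owns the object, drop it from the list. */
        object = next;
}
head = filtered;        /* the real hook also has to maintain the tail */
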
>>
>>>  }
>>>
>>> -#ifdef CONFIG_SLAB
>>>  /*
>>>   * Adaptive redzone policy taken from the userspace AddressSanitizer runtime.
>>>   * For larger allocations larger redzones are used.
>>> @@ -372,17 +371,21 @@ static size_t optimal_redzone(size_t object_size)
>>>  void kasan_cache_create(struct kmem_cache *cache, size_t *size,
>>>                       unsigned long *flags)
>>>  {
>>> -     int redzone_adjust;
>>> -     /* Make sure the adjusted size is still less than
>>> -      * KMALLOC_MAX_CACHE_SIZE.
>>> -      * TODO: this check is only useful for SLAB, but not SLUB. We'll need
>>> -      * to skip it for SLUB when it starts using kasan_cache_create().
>>> +     int redzone_adjust, orig_size = *size;
>>> +
>>> +#ifdef CONFIG_SLAB
>>> +     /*
>>> +      * Make sure the adjusted size is still less than
>>> +      * KMALLOC_MAX_CACHE_SIZE, i.e. we don't use the page allocator.
>>>        */
>>> +
>>>       if (*size > KMALLOC_MAX_CACHE_SIZE -
>>
>> This is wrong. You probably wanted KMALLOC_MAX_SIZE here.
> Yeah, sounds right.
>> However, we should get rid of SLAB_KASAN altogether. It's absolutely useless and only complicates
>> the code. And if we don't fit in KMALLOC_MAX_SIZE, just don't create the cache.
> Thanks, I'll look into this. Looks like you are right: once we remove
> this check, every existing cache will have SLAB_KASAN set.
> It's handy for debugging, but not really needed.
>>
>>>           sizeof(struct kasan_alloc_meta) -
>>>           sizeof(struct kasan_free_meta))
>>>               return;
>>> +#endif
>>>       *flags |= SLAB_KASAN;
>>> +
>>>       /* Add alloc meta. */
>>>       cache->kasan_info.alloc_meta_offset = *size;
>>>       *size += sizeof(struct kasan_alloc_meta);
>>> @@ -392,17 +395,37 @@ void kasan_cache_create(struct kmem_cache *cache, size_t *size,
>>>           cache->object_size < sizeof(struct kasan_free_meta)) {
>>>               cache->kasan_info.free_meta_offset = *size;
>>>               *size += sizeof(struct kasan_free_meta);
>>> +     } else {
>>> +             cache->kasan_info.free_meta_offset = 0;
>>>       }
>>>       redzone_adjust = optimal_redzone(cache->object_size) -
>>>               (*size - cache->object_size);
>>> +
>>>       if (redzone_adjust > 0)
>>>               *size += redzone_adjust;
>>> +
>>> +#ifdef CONFIG_SLAB
>>>       *size = min(KMALLOC_MAX_CACHE_SIZE,
>>>                   max(*size,
>>>                       cache->object_size +
>>>                       optimal_redzone(cache->object_size)));
>>> -}
>>> +     /*
>>> +      * If the metadata doesn't fit, disable KASAN at all.
>>> +      */
>>> +     if (*size <= cache->kasan_info.alloc_meta_offset ||
>>> +                     *size <= cache->kasan_info.free_meta_offset) {
>>> +             *flags &= ~SLAB_KASAN;
>>> +             *size = orig_size;
>>> +             cache->kasan_info.alloc_meta_offset = -1;
>>> +             cache->kasan_info.free_meta_offset = -1;
>>> +     }
>>> +#else
>>> +     *size = max(*size,
>>> +                     cache->object_size +
>>> +                     optimal_redzone(cache->object_size));
>>> +
>>>  #endif
>>> +}
>>>
>>>  void kasan_cache_shrink(struct kmem_cache *cache)
>>>  {
>>> @@ -431,16 +454,14 @@ void kasan_poison_object_data(struct kmem_cache *cache, void *object)
>>>       kasan_poison_shadow(object,
>>>                       round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE),
>>>                       KASAN_KMALLOC_REDZONE);
>>> -#ifdef CONFIG_SLAB
>>>       if (cache->flags & SLAB_KASAN) {
>>>               struct kasan_alloc_meta *alloc_info =
>>>                       get_alloc_info(cache, object);
>>> -             alloc_info->state = KASAN_STATE_INIT;
>>> +             if (alloc_info)
>>
>> If I read the code right, alloc_info can be NULL only if SLAB_KASAN is not set.
> This has been left over from tracking down some nasty bugs, but, yes,
> we can assume alloc_info and free_info are always valid.
>>
>>> +                     alloc_info->state = KASAN_STATE_INIT;
>>>       }
>>> -#endif
>>>  }
>>>
>>> -#ifdef CONFIG_SLAB
>>>  static inline int in_irqentry_text(unsigned long ptr)
>>>  {
>>>       return (ptr >= (unsigned long)&__irqentry_text_start &&
>>> @@ -492,6 +513,8 @@ struct kasan_alloc_meta *get_alloc_info(struct kmem_cache *cache,
>>>                                       const void *object)
>>>  {
>>>       BUILD_BUG_ON(sizeof(struct kasan_alloc_meta) > 32);
>>> +     if (cache->kasan_info.alloc_meta_offset == -1)
>>> +             return NULL;
>>
>> What's the point of this? This should always be false.
> Agreed, will remove this (and other similar cases).
>>>       return (void *)object + cache->kasan_info.alloc_meta_offset;
>>>  }
>>>
>>> @@ -499,9 +522,10 @@ struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
>>>                                     const void *object)
>>>  {
>>>       BUILD_BUG_ON(sizeof(struct kasan_free_meta) > 32);
>>> +     if (cache->kasan_info.free_meta_offset == -1)
>>> +             return NULL;
>>>       return (void *)object + cache->kasan_info.free_meta_offset;
>>>  }
>>> -#endif
>>>
>>>  void kasan_slab_alloc(struct kmem_cache *cache, void *object, gfp_t flags)
>>>  {
>>> @@ -522,7 +546,6 @@ void kasan_poison_slab_free(struct kmem_cache *cache, void *object)
>>>
>>>  bool kasan_slab_free(struct kmem_cache *cache, void *object)
>>>  {
>>> -#ifdef CONFIG_SLAB
>>>       /* RCU slabs could be legally used after free within the RCU period */
>>>       if (unlikely(cache->flags & SLAB_DESTROY_BY_RCU))
>>>               return false;
>>> @@ -532,7 +555,10 @@ bool kasan_slab_free(struct kmem_cache *cache, void *object)
>>>                       get_alloc_info(cache, object);
>>>               struct kasan_free_meta *free_info =
>>>                       get_free_info(cache, object);
>>> -
>>> +             WARN_ON(!alloc_info);
>>> +             WARN_ON(!free_info);
>>> +             if (!alloc_info || !free_info)
>>> +                     return;
>>
>> Again, never possible.
>>
>>
>>>               switch (alloc_info->state) {
>>>               case KASAN_STATE_ALLOC:
>>>                       alloc_info->state = KASAN_STATE_QUARANTINE;
>>> @@ -550,10 +576,6 @@ bool kasan_slab_free(struct kmem_cache *cache, void *object)
>>>               }
>>>       }
>>>       return false;
>>> -#else
>>> -     kasan_poison_slab_free(cache, object);
>>> -     return false;
>>> -#endif
>>>  }
>>>
>>>  void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
>>> @@ -568,24 +590,29 @@ void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
>>>       if (unlikely(object == NULL))
>>>               return;
>>>
>>> +     if (!(cache->flags & SLAB_KASAN))
>>> +             return;
>>> +
>>>       redzone_start = round_up((unsigned long)(object + size),
>>>                               KASAN_SHADOW_SCALE_SIZE);
>>>       redzone_end = round_up((unsigned long)object + cache->object_size,
>>>                               KASAN_SHADOW_SCALE_SIZE);
>>>
>>>       kasan_unpoison_shadow(object, size);
>>> +     WARN_ON(redzone_start > redzone_end);
>>> +     if (redzone_start > redzone_end)
>>
>> How can that happen?
> This was possible because of an incorrect ksize() implementation; it
> should be fine now. Removed.
>>> +             return;
>>>       kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
>>>               KASAN_KMALLOC_REDZONE);
>>> -#ifdef CONFIG_SLAB
>>>       if (cache->flags & SLAB_KASAN) {
>>>               struct kasan_alloc_meta *alloc_info =
>>>                       get_alloc_info(cache, object);
>>> -
>>> -             alloc_info->state = KASAN_STATE_ALLOC;
>>> -             alloc_info->alloc_size = size;
>>> -             set_track(&alloc_info->track, flags);
>>> +             if (alloc_info) {
>>
>> And again...
>>
>>
>>> +                     alloc_info->state = KASAN_STATE_ALLOC;
>>> +                     alloc_info->alloc_size = size;
>>> +                     set_track(&alloc_info->track, flags);
>>> +             }
>>>       }
>>> -#endif
>>>  }
>>>  EXPORT_SYMBOL(kasan_kmalloc);
>>>
>>
>>
>> [..]
>>
>>> diff --git a/mm/slab.h b/mm/slab.h
>>> index dedb1a9..fde1fea 100644
>>> --- a/mm/slab.h
>>> +++ b/mm/slab.h
>>> @@ -366,6 +366,10 @@ static inline size_t slab_ksize(const struct kmem_cache *s)
>>>       if (s->flags & (SLAB_RED_ZONE | SLAB_POISON))
>>>               return s->object_size;
>>>  # endif
>>> +# ifdef CONFIG_KASAN
>>
>> Gosh, you love ifdefs, don't you? Hint: it's redundant here.
>>
>>> +     if (s->flags & SLAB_KASAN)
>>> +             return s->object_size;
>>> +# endif
>>>       /*
>>>        * If we have the need to store the freelist pointer
>> ...
>
>
>



-- 
Alexander Potapenko
Software Engineer

Google Germany GmbH
Erika-Mann-Straße, 33
80636 München

Managing Directors: Matthew Scott Sucherman, Paul Terence Manicle
Registration court and number: Hamburg, HRB 86891
Registered office: Hamburg

^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2016-06-15 15:30 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-06-08 18:40 [PATCH] mm, kasan: switch SLUB to stackdepot, enable memory quarantine for SLUB Alexander Potapenko
2016-06-08 18:08 ` kbuild test robot
2016-06-08 19:02 ` kbuild test robot
2016-06-08 19:23 ` kbuild test robot
2016-06-09 16:45 ` Andrey Ryabinin
2016-06-09 18:22   ` Alexander Potapenko
2016-06-15 13:37     ` Alexander Potapenko
2016-06-15 15:29     ` Alexander Potapenko

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).