linux-mm.kvack.org archive mirror
* [PATCH v7 0/2] mm, kasan: stackdepot and quarantine for SLUB
@ 2016-07-12 18:12 Alexander Potapenko
  2016-07-12 18:12 ` [PATCH v7 1/2] mm, kasan: account for object redzone in SLUB's nearest_obj() Alexander Potapenko
  2016-07-12 18:12 ` [PATCH v7 2/2] mm, kasan: switch SLUB to stackdepot, enable memory quarantine for SLUB Alexander Potapenko
  0 siblings, 2 replies; 5+ messages in thread
From: Alexander Potapenko @ 2016-07-12 18:12 UTC (permalink / raw)
  To: adech.fo, cl, dvyukov, akpm, rostedt, iamjoonsoo.kim, js1304,
	kcc, aryabinin, kuthonuzo.luruo
  Cc: kasan-dev, linux-mm, linux-kernel

This patch set enables stackdepot and quarantine for the SLUB allocator
and fixes the incorrect calculation of the nearest object's offset in the
presence of SLUB red zones.
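
For background, the quarantine defers the reuse of freed objects so that
use-after-free accesses still hit poisoned memory and can be reported.
Below is a minimal, illustrative sketch of that idea only; it is not the
mm/kasan/quarantine.c implementation, and the names and size budget are
made up:

/* Illustrative only: a FIFO that delays reuse of freed objects. */
struct quarantine_entry {
	struct quarantine_entry *next;
	struct kmem_cache *cache;
	void *object;
};

static struct quarantine_entry *q_head, *q_tail;
static size_t q_size, q_limit = 1 << 20;	/* made-up budget */

static void quarantine_put_sketch(struct kmem_cache *cache, void *object,
				  struct quarantine_entry *e)
{
	e->cache = cache;
	e->object = object;
	e->next = NULL;
	if (q_tail)
		q_tail->next = e;
	else
		q_head = e;
	q_tail = e;
	q_size += cache->object_size;
}

static void quarantine_reduce_sketch(void)
{
	/* Actually free the oldest objects once the budget is exceeded. */
	while (q_size > q_limit && q_head) {
		struct quarantine_entry *e = q_head;

		q_head = e->next;
		if (!q_head)
			q_tail = NULL;
		q_size -= e->cache->object_size;
		___cache_free(e->cache, e->object, _RET_IP_);
	}
}

In the real code the list node lives in the object's own kasan_free_meta,
the lists are per-CPU, and ___cache_free() (introduced for SLUB in patch 2)
is what finally returns the object to the allocator.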

Alexander Potapenko (2):
  mm, kasan: account for object redzone in SLUB's nearest_obj()
  mm, kasan: switch SLUB to stackdepot, enable memory quarantine for
    SLUB

 include/linux/kasan.h    |  2 ++
 include/linux/slab_def.h |  3 ++-
 include/linux/slub_def.h | 14 ++++++++---
 lib/Kconfig.kasan        |  4 +--
 mm/kasan/Makefile        |  3 +--
 mm/kasan/kasan.c         | 64 +++++++++++++++++++++++++++---------------------
 mm/kasan/kasan.h         |  3 +--
 mm/kasan/report.c        |  8 +++---
 mm/slab.h                |  2 ++
 mm/slub.c                | 59 +++++++++++++++++++++++++++++++++-----------
 10 files changed, 104 insertions(+), 58 deletions(-)

-- 
2.8.0.rc3.226.g39d4020


^ permalink raw reply	[flat|nested] 5+ messages in thread

* [PATCH v7 1/2] mm, kasan: account for object redzone in SLUB's nearest_obj()
  2016-07-12 18:12 [PATCH v7 0/2] mm, kasan: stackdepot and quarantine for SLUB Alexander Potapenko
@ 2016-07-12 18:12 ` Alexander Potapenko
  2016-07-12 20:02   ` Andrew Morton
  2016-07-12 18:12 ` [PATCH v7 2/2] mm, kasan: switch SLUB to stackdepot, enable memory quarantine for SLUB Alexander Potapenko
  1 sibling, 1 reply; 5+ messages in thread
From: Alexander Potapenko @ 2016-07-12 18:12 UTC (permalink / raw)
  To: adech.fo, cl, dvyukov, akpm, rostedt, iamjoonsoo.kim, js1304,
	kcc, aryabinin, kuthonuzo.luruo
  Cc: kasan-dev, linux-mm, linux-kernel

When looking up the nearest SLUB object for a given address, correctly
calculate its offset if SLAB_RED_ZONE is enabled for that cache.
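
To illustrate: with SLAB_RED_ZONE each slot starts with a left red zone of
s->red_left_pad bytes, so plain rounding to a multiple of cache->size lands
on the red zone rather than on the object itself. A simplified sketch of
the corrected arithmetic (it omits the kmem_cache_debug() check and the
last-object clamping that the real code performs):

/*
 * Slot layout with SLAB_RED_ZONE:
 *   [left red zone][object][right red zone / metadata]
 * cache->size covers the whole slot.
 */
static void *nearest_obj_sketch(struct kmem_cache *cache, struct page *page,
				void *addr)
{
	void *slot = addr - (addr - page_address(page)) % cache->size;

	/* Without this adjustment the caller is handed a pointer into
	 * the left red zone instead of the object. */
	if (cache->flags & SLAB_RED_ZONE)
		slot += cache->red_left_pad;
	return slot;
}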

Fixes: 7ed2f9e663854db ("mm, kasan: SLAB support")
Signed-off-by: Alexander Potapenko <glider@google.com>
---
 include/linux/slub_def.h | 10 ++++++----
 mm/slub.c                |  2 +-
 2 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index d1faa01..b71b258 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -114,15 +114,17 @@ static inline void sysfs_slab_remove(struct kmem_cache *s)
 void object_err(struct kmem_cache *s, struct page *page,
 		u8 *object, char *reason);
 
+void *fixup_red_left(struct kmem_cache *s, void *p);
+
 static inline void *nearest_obj(struct kmem_cache *cache, struct page *page,
 				void *x) {
 	void *object = x - (x - page_address(page)) % cache->size;
 	void *last_object = page_address(page) +
 		(page->objects - 1) * cache->size;
-	if (unlikely(object > last_object))
-		return last_object;
-	else
-		return object;
+	void *result = (unlikely(object > last_object)) ? last_object : object;
+
+	result = fixup_red_left(cache, result);
+	return result;
 }
 
 #endif /* _LINUX_SLUB_DEF_H */
diff --git a/mm/slub.c b/mm/slub.c
index 825ff45..27cbef9 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -124,7 +124,7 @@ static inline int kmem_cache_debug(struct kmem_cache *s)
 #endif
 }
 
-static inline void *fixup_red_left(struct kmem_cache *s, void *p)
+inline void *fixup_red_left(struct kmem_cache *s, void *p)
 {
 	if (kmem_cache_debug(s) && s->flags & SLAB_RED_ZONE)
 		p += s->red_left_pad;
-- 
2.8.0.rc3.226.g39d4020


^ permalink raw reply related	[flat|nested] 5+ messages in thread

* [PATCH v7 2/2] mm, kasan: switch SLUB to stackdepot, enable memory quarantine for SLUB
  2016-07-12 18:12 [PATCH v7 0/2] mm, kasan: stackdepot and quarantine for SLUB Alexander Potapenko
  2016-07-12 18:12 ` [PATCH v7 1/2] mm, kasan: account for object redzone in SLUB's nearest_obj() Alexander Potapenko
@ 2016-07-12 18:12 ` Alexander Potapenko
  1 sibling, 0 replies; 5+ messages in thread
From: Alexander Potapenko @ 2016-07-12 18:12 UTC (permalink / raw)
  To: adech.fo, cl, dvyukov, akpm, rostedt, iamjoonsoo.kim, js1304,
	kcc, aryabinin, kuthonuzo.luruo
  Cc: kasan-dev, linux-mm, linux-kernel

For KASAN builds:
 - switch the SLUB allocator to using stackdepot instead of storing the
   allocation/deallocation stacks in the objects (see the sketch below);
 - change the freelist hook so that parts of the freelist can be put
   into the quarantine.
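
A rough sketch of what the stackdepot switch means for the per-object
metadata: each track keeps a deduplicated depot handle instead of an array
of raw stack frames (illustrative only; the structure and helper names are
simplified versions of the ones in mm/kasan/kasan.{c,h}):

#include <linux/sched.h>
#include <linux/stackdepot.h>
#include <linux/stacktrace.h>

/* Illustrative: a track stores a handle, not the frames themselves. */
struct track_sketch {
	u32 pid;
	depot_stack_handle_t stack;	/* shared, deduplicated storage */
};

static void set_track_sketch(struct track_sketch *track, gfp_t flags)
{
	unsigned long entries[16];
	struct stack_trace trace = {
		.entries	= entries,
		.max_entries	= ARRAY_SIZE(entries),
		.skip		= 2,
	};

	save_stack_trace(&trace);
	track->pid = current->pid;
	/* Identical stacks map to the same handle, so recording the
	 * alloc/free stacks no longer costs space in every object. */
	track->stack = depot_save_stack(&trace, flags);
}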

Signed-off-by: Alexander Potapenko <glider@google.com>
---
v7: - addressed comments by Andrey Ryabinin:
      - split the nearest_obj() fix into a separate patch
      - introduce kasan_metadata_size()
      - move KASAN definitions back to mm/kasan/kasan.h
      - fix minor nits
    - addressed comments by Joonsoo Kim:
      - always return the unchanged freelist pointer from slab_free_hook()
      - account for KASAN metadata size in print_trailer()
      - fix minor nits
v6: - addressed comments by Andrey Ryabinin:
      - move nearest_obj() back to header files
      - fix check_pad_bytes() to address problems with poisoning
      - don't define __OBJECT_POISON to 0
      - simplify slab_free_freelist_hook() implementation
      - move KASAN definitions used by SLUB code to include/linux/kasan.h
      - fix minor nits
v5: - addressed comments by Andrey Ryabinin:
      - don't define SLAB_RED_ZONE, SLAB_POISON, SLAB_STORE_USER to 0
      - account for left redzone size when SLAB_RED_ZONE is used
    - incidentally moved the implementations of nearest_obj() to mm/sl[au]b.c
v4: - addressed comments by Andrey Ryabinin:
      - don't set slub_debug by default for everyone;
      - introduce the ___cache_free() helper function.
v3: - addressed comments by Andrey Ryabinin:
      - replaced KMALLOC_MAX_CACHE_SIZE with KMALLOC_MAX_SIZE in
        kasan_cache_create();
      - for caches with SLAB_KASAN flag set, their alloc_meta_offset and
        free_meta_offset are always valid.
v2: - incorporated kbuild fixes by Andrew Morton

---
 include/linux/kasan.h    |  2 ++
 include/linux/slab_def.h |  3 ++-
 include/linux/slub_def.h |  4 +++
 lib/Kconfig.kasan        |  4 +--
 mm/kasan/Makefile        |  3 +--
 mm/kasan/kasan.c         | 64 +++++++++++++++++++++++++++---------------------
 mm/kasan/kasan.h         |  3 +--
 mm/kasan/report.c        |  8 +++---
 mm/slab.h                |  2 ++
 mm/slub.c                | 57 ++++++++++++++++++++++++++++++++----------
 10 files changed, 97 insertions(+), 53 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index ac4b3c4..c9cf374 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -77,6 +77,7 @@ void kasan_free_shadow(const struct vm_struct *vm);
 
 size_t ksize(const void *);
 static inline void kasan_unpoison_slab(const void *ptr) { ksize(ptr); }
+size_t kasan_metadata_size(struct kmem_cache *cache);
 
 #else /* CONFIG_KASAN */
 
@@ -121,6 +122,7 @@ static inline int kasan_module_alloc(void *addr, size_t size) { return 0; }
 static inline void kasan_free_shadow(const struct vm_struct *vm) {}
 
 static inline void kasan_unpoison_slab(const void *ptr) { }
+static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }
 
 #endif /* CONFIG_KASAN */
 
diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
index 8694f7a..6f35df7 100644
--- a/include/linux/slab_def.h
+++ b/include/linux/slab_def.h
@@ -88,7 +88,8 @@ struct kmem_cache {
 };
 
 static inline void *nearest_obj(struct kmem_cache *cache, struct page *page,
-				void *x) {
+				void *x)
+{
 	void *object = x - (x - page->s_mem) % cache->size;
 	void *last_object = page->s_mem + (cache->num - 1) * cache->size;
 
diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index b71b258..3b80dec 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -99,6 +99,10 @@ struct kmem_cache {
 	 */
 	int remote_node_defrag_ratio;
 #endif
+#ifdef CONFIG_KASAN
+	struct kasan_cache kasan_info;
+#endif
+
 	struct kmem_cache_node *node[MAX_NUMNODES];
 };
 
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 67d8c68..bd38aab 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -5,9 +5,9 @@ if HAVE_ARCH_KASAN
 
 config KASAN
 	bool "KASan: runtime memory debugger"
-	depends on SLUB_DEBUG || (SLAB && !DEBUG_SLAB)
+	depends on SLUB || (SLAB && !DEBUG_SLAB)
 	select CONSTRUCTORS
-	select STACKDEPOT if SLAB
+	select STACKDEPOT
 	help
 	  Enables kernel address sanitizer - runtime memory debugger,
 	  designed to find out-of-bounds accesses and use-after-free bugs.
diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
index 1548749..2976a9e 100644
--- a/mm/kasan/Makefile
+++ b/mm/kasan/Makefile
@@ -7,5 +7,4 @@ CFLAGS_REMOVE_kasan.o = -pg
 # see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533
 CFLAGS_kasan.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector)
 
-obj-y := kasan.o report.o kasan_init.o
-obj-$(CONFIG_SLAB) += quarantine.o
+obj-y := kasan.o report.o kasan_init.o quarantine.o
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 6845f92..d92a7a2 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -351,7 +351,6 @@ void kasan_free_pages(struct page *page, unsigned int order)
 				KASAN_FREE_PAGE);
 }
 
-#ifdef CONFIG_SLAB
 /*
  * Adaptive redzone policy taken from the userspace AddressSanitizer runtime.
  * For larger allocations larger redzones are used.
@@ -373,16 +372,10 @@ void kasan_cache_create(struct kmem_cache *cache, size_t *size,
 			unsigned long *flags)
 {
 	int redzone_adjust;
-	/* Make sure the adjusted size is still less than
-	 * KMALLOC_MAX_CACHE_SIZE.
-	 * TODO: this check is only useful for SLAB, but not SLUB. We'll need
-	 * to skip it for SLUB when it starts using kasan_cache_create().
-	 */
-	if (*size > KMALLOC_MAX_CACHE_SIZE -
-	    sizeof(struct kasan_alloc_meta) -
-	    sizeof(struct kasan_free_meta))
-		return;
-	*flags |= SLAB_KASAN;
+#ifdef CONFIG_SLAB
+	int orig_size = *size;
+#endif
+
 	/* Add alloc meta. */
 	cache->kasan_info.alloc_meta_offset = *size;
 	*size += sizeof(struct kasan_alloc_meta);
@@ -395,14 +388,31 @@ void kasan_cache_create(struct kmem_cache *cache, size_t *size,
 	}
 	redzone_adjust = optimal_redzone(cache->object_size) -
 		(*size - cache->object_size);
+
 	if (redzone_adjust > 0)
 		*size += redzone_adjust;
-	*size = min(KMALLOC_MAX_CACHE_SIZE,
+
+#ifdef CONFIG_SLAB
+	*size = min(KMALLOC_MAX_SIZE,
 		    max(*size,
 			cache->object_size +
 			optimal_redzone(cache->object_size)));
-}
+	/*
+	 * If the metadata doesn't fit, don't enable KASAN at all.
+	 */
+	if (*size <= cache->kasan_info.alloc_meta_offset ||
+			*size <= cache->kasan_info.free_meta_offset) {
+		*size = orig_size;
+		return;
+	}
+#else
+	*size = max(*size,
+			cache->object_size +
+			optimal_redzone(cache->object_size));
+
 #endif
+	*flags |= SLAB_KASAN;
+}
 
 void kasan_cache_shrink(struct kmem_cache *cache)
 {
@@ -414,6 +424,14 @@ void kasan_cache_destroy(struct kmem_cache *cache)
 	quarantine_remove_cache(cache);
 }
 
+size_t kasan_metadata_size(struct kmem_cache *cache)
+{
+	return (cache->kasan_info.alloc_meta_offset ?
+		sizeof(struct kasan_alloc_meta) : 0) +
+		(cache->kasan_info.free_meta_offset ?
+		sizeof(struct kasan_free_meta) : 0);
+}
+
 void kasan_poison_slab(struct page *page)
 {
 	kasan_poison_shadow(page_address(page),
@@ -431,16 +449,13 @@ void kasan_poison_object_data(struct kmem_cache *cache, void *object)
 	kasan_poison_shadow(object,
 			round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE),
 			KASAN_KMALLOC_REDZONE);
-#ifdef CONFIG_SLAB
 	if (cache->flags & SLAB_KASAN) {
 		struct kasan_alloc_meta *alloc_info =
 			get_alloc_info(cache, object);
 		alloc_info->state = KASAN_STATE_INIT;
 	}
-#endif
 }
 
-#ifdef CONFIG_SLAB
 static inline int in_irqentry_text(unsigned long ptr)
 {
 	return (ptr >= (unsigned long)&__irqentry_text_start &&
@@ -501,7 +516,6 @@ struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
 	BUILD_BUG_ON(sizeof(struct kasan_free_meta) > 32);
 	return (void *)object + cache->kasan_info.free_meta_offset;
 }
-#endif
 
 void kasan_slab_alloc(struct kmem_cache *cache, void *object, gfp_t flags)
 {
@@ -522,16 +536,16 @@ static void kasan_poison_slab_free(struct kmem_cache *cache, void *object)
 
 bool kasan_slab_free(struct kmem_cache *cache, void *object)
 {
-#ifdef CONFIG_SLAB
 	/* RCU slabs could be legally used after free within the RCU period */
 	if (unlikely(cache->flags & SLAB_DESTROY_BY_RCU))
 		return false;
 
 	if (likely(cache->flags & SLAB_KASAN)) {
-		struct kasan_alloc_meta *alloc_info =
-			get_alloc_info(cache, object);
-		struct kasan_free_meta *free_info =
-			get_free_info(cache, object);
+		struct kasan_alloc_meta *alloc_info;
+		struct kasan_free_meta *free_info;
+
+		alloc_info = get_alloc_info(cache, object);
+		free_info = get_free_info(cache, object);
 
 		switch (alloc_info->state) {
 		case KASAN_STATE_ALLOC:
@@ -550,10 +564,6 @@ bool kasan_slab_free(struct kmem_cache *cache, void *object)
 		}
 	}
 	return false;
-#else
-	kasan_poison_slab_free(cache, object);
-	return false;
-#endif
 }
 
 void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
@@ -576,7 +586,6 @@ void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
 	kasan_unpoison_shadow(object, size);
 	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
 		KASAN_KMALLOC_REDZONE);
-#ifdef CONFIG_SLAB
 	if (cache->flags & SLAB_KASAN) {
 		struct kasan_alloc_meta *alloc_info =
 			get_alloc_info(cache, object);
@@ -585,7 +594,6 @@ void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
 		alloc_info->alloc_size = size;
 		set_track(&alloc_info->track, flags);
 	}
-#endif
 }
 EXPORT_SYMBOL(kasan_kmalloc);
 
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index fb87923..31972cd 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -95,7 +95,6 @@ struct kasan_alloc_meta *get_alloc_info(struct kmem_cache *cache,
 struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
 					const void *object);
 
-
 static inline const void *kasan_shadow_to_mem(const void *shadow_addr)
 {
 	return (void *)(((unsigned long)shadow_addr - KASAN_SHADOW_OFFSET)
@@ -110,7 +109,7 @@ static inline bool kasan_report_enabled(void)
 void kasan_report(unsigned long addr, size_t size,
 		bool is_write, unsigned long ip);
 
-#ifdef CONFIG_SLAB
+#if defined(CONFIG_SLAB) || defined(CONFIG_SLUB)
 void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache);
 void quarantine_reduce(void);
 void quarantine_remove_cache(struct kmem_cache *cache);
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index b3c122d..861b977 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -116,7 +116,6 @@ static inline bool init_task_stack_addr(const void *addr)
 			sizeof(init_thread_union.stack));
 }
 
-#ifdef CONFIG_SLAB
 static void print_track(struct kasan_track *track)
 {
 	pr_err("PID = %u\n", track->pid);
@@ -130,8 +129,8 @@ static void print_track(struct kasan_track *track)
 	}
 }
 
-static void object_err(struct kmem_cache *cache, struct page *page,
-			void *object, char *unused_reason)
+static void kasan_object_err(struct kmem_cache *cache, struct page *page,
+				void *object, char *unused_reason)
 {
 	struct kasan_alloc_meta *alloc_info = get_alloc_info(cache, object);
 	struct kasan_free_meta *free_info;
@@ -162,7 +161,6 @@ static void object_err(struct kmem_cache *cache, struct page *page,
 		break;
 	}
 }
-#endif
 
 static void print_address_description(struct kasan_access_info *info)
 {
@@ -177,7 +175,7 @@ static void print_address_description(struct kasan_access_info *info)
 			struct kmem_cache *cache = page->slab_cache;
 			object = nearest_obj(cache, page,
 						(void *)info->access_addr);
-			object_err(cache, page, object,
+			kasan_object_err(cache, page, object,
 					"kasan: bad access detected");
 			return;
 		}
diff --git a/mm/slab.h b/mm/slab.h
index dedb1a9..d1e6f14 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -366,6 +366,8 @@ static inline size_t slab_ksize(const struct kmem_cache *s)
 	if (s->flags & (SLAB_RED_ZONE | SLAB_POISON))
 		return s->object_size;
 # endif
+	if (s->flags & SLAB_KASAN)
+		return s->object_size;
 	/*
 	 * If we have the need to store the freelist pointer
 	 * back there or track user information then we can
diff --git a/mm/slub.c b/mm/slub.c
index 27cbef9..d9a8c98 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -454,8 +454,6 @@ static inline void *restore_red_left(struct kmem_cache *s, void *p)
  */
 #if defined(CONFIG_SLUB_DEBUG_ON)
 static int slub_debug = DEBUG_DEFAULT_FLAGS;
-#elif defined(CONFIG_KASAN)
-static int slub_debug = SLAB_STORE_USER;
 #else
 static int slub_debug;
 #endif
@@ -660,6 +658,8 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
 	if (s->flags & SLAB_STORE_USER)
 		off += 2 * sizeof(struct track);
 
+	off += kasan_metadata_size(s);
+
 	if (off != size_from_object(s))
 		/* Beginning of the filler is the free pointer */
 		print_section("Padding ", p + off, size_from_object(s) - off);
@@ -787,6 +787,8 @@ static int check_pad_bytes(struct kmem_cache *s, struct page *page, u8 *p)
 		/* We also have user information there */
 		off += 2 * sizeof(struct track);
 
+	off += kasan_metadata_size(s);
+
 	if (size_from_object(s) == off)
 		return 1;
 
@@ -1322,8 +1324,10 @@ static inline void kfree_hook(const void *x)
 	kasan_kfree_large(x);
 }
 
-static inline void slab_free_hook(struct kmem_cache *s, void *x)
+static inline void *slab_free_hook(struct kmem_cache *s, void *x)
 {
+	void *freeptr;
+
 	kmemleak_free_recursive(x, s->flags);
 
 	/*
@@ -1344,7 +1348,13 @@ static inline void slab_free_hook(struct kmem_cache *s, void *x)
 	if (!(s->flags & SLAB_DEBUG_OBJECTS))
 		debug_check_no_obj_freed(x, s->object_size);
 
+	freeptr = get_freepointer(s, x);
+	/*
+	 * kasan_slab_free() may put x into memory quarantine, delaying its
+	 * reuse. In this case the object's freelist pointer is changed.
+	 */
 	kasan_slab_free(s, x);
+	return freeptr;
 }
 
 static inline void slab_free_freelist_hook(struct kmem_cache *s,
@@ -1362,11 +1372,11 @@ static inline void slab_free_freelist_hook(struct kmem_cache *s,
 
 	void *object = head;
 	void *tail_obj = tail ? : head;
+	void *freeptr;
 
 	do {
-		slab_free_hook(s, object);
-	} while ((object != tail_obj) &&
-		 (object = get_freepointer(s, object)));
+		freeptr = slab_free_hook(s, object);
+	} while ((object != tail_obj) && (object = freeptr));
 #endif
 }
 
@@ -2768,16 +2778,13 @@ slab_empty:
  * same page) possible by specifying head and tail ptr, plus objects
  * count (cnt). Bulk free indicated by tail pointer being set.
  */
-static __always_inline void slab_free(struct kmem_cache *s, struct page *page,
-				      void *head, void *tail, int cnt,
-				      unsigned long addr)
+static __always_inline void do_slab_free(struct kmem_cache *s,
+				struct page *page, void *head, void *tail,
+				int cnt, unsigned long addr)
 {
 	void *tail_obj = tail ? : head;
 	struct kmem_cache_cpu *c;
 	unsigned long tid;
-
-	slab_free_freelist_hook(s, head, tail);
-
 redo:
 	/*
 	 * Determine the currently cpus per cpu slab.
@@ -2811,6 +2818,27 @@ redo:
 
 }
 
+static __always_inline void slab_free(struct kmem_cache *s, struct page *page,
+				      void *head, void *tail, int cnt,
+				      unsigned long addr)
+{
+	slab_free_freelist_hook(s, head, tail);
+	/*
+	 * slab_free_freelist_hook() could have put the items into quarantine.
+	 * If so, no need to free them.
+	 */
+	if (s->flags & SLAB_KASAN && !(s->flags & SLAB_DESTROY_BY_RCU))
+		return;
+	do_slab_free(s, page, head, tail, cnt, addr);
+}
+
+#ifdef CONFIG_KASAN
+void ___cache_free(struct kmem_cache *cache, void *x, unsigned long addr)
+{
+	do_slab_free(cache, virt_to_head_page(x), x, NULL, 1, addr);
+}
+#endif
+
 void kmem_cache_free(struct kmem_cache *s, void *x)
 {
 	s = cache_from_obj(s, x);
@@ -3252,7 +3280,7 @@ static void set_min_partial(struct kmem_cache *s, unsigned long min)
 static int calculate_sizes(struct kmem_cache *s, int forced_order)
 {
 	unsigned long flags = s->flags;
-	unsigned long size = s->object_size;
+	size_t size = s->object_size;
 	int order;
 
 	/*
@@ -3311,7 +3339,10 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 		 * the object.
 		 */
 		size += 2 * sizeof(struct track);
+#endif
 
+	kasan_cache_create(s, &size, &s->flags);
+#ifdef CONFIG_SLUB_DEBUG
 	if (flags & SLAB_RED_ZONE) {
 		/*
 		 * Add some empty padding so that we can catch
-- 
2.8.0.rc3.226.g39d4020


^ permalink raw reply related	[flat|nested] 5+ messages in thread

* Re: [PATCH v7 1/2] mm, kasan: account for object redzone in SLUB's nearest_obj()
  2016-07-12 18:12 ` [PATCH v7 1/2] mm, kasan: account for object redzone in SLUB's nearest_obj() Alexander Potapenko
@ 2016-07-12 20:02   ` Andrew Morton
  2016-07-13 12:14     ` Alexander Potapenko
  0 siblings, 1 reply; 5+ messages in thread
From: Andrew Morton @ 2016-07-12 20:02 UTC (permalink / raw)
  To: Alexander Potapenko
  Cc: adech.fo, cl, dvyukov, rostedt, iamjoonsoo.kim, js1304, kcc,
	aryabinin, kuthonuzo.luruo, kasan-dev, linux-mm, linux-kernel

On Tue, 12 Jul 2016 20:12:44 +0200 Alexander Potapenko <glider@google.com> wrote:

> When looking up the nearest SLUB object for a given address, correctly
> calculate its offset if SLAB_RED_ZONE is enabled for that cache.

What are the runtime effects of this fix?  Please always include this
info when fixing bugs so that others can decide which kernel(s) need
patching.


^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [PATCH v7 1/2] mm, kasan: account for object redzone in SLUB's nearest_obj()
  2016-07-12 20:02   ` Andrew Morton
@ 2016-07-13 12:14     ` Alexander Potapenko
  0 siblings, 0 replies; 5+ messages in thread
From: Alexander Potapenko @ 2016-07-13 12:14 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrey Konovalov, Christoph Lameter, Dmitriy Vyukov,
	Steven Rostedt, Joonsoo Kim, Joonsoo Kim, Kostya Serebryany,
	Andrey Ryabinin, Kuthonuzo Luruo, kasan-dev,
	Linux Memory Management List, LKML

Changed the description as follows:

=================================
    mm, kasan: account for object redzone in SLUB's nearest_obj()

    When looking up the nearest SLUB object for a given address, correctly
    calculate its offset if SLAB_RED_ZONE is enabled for that cache.

    Previously, when KASAN detected an error on an object from a cache
    with SLAB_RED_ZONE set, the object's actual start address was
    miscalculated, which led to random stacks being reported.

    Fixes: 7ed2f9e663854db ("mm, kasan: SLAB support")
    Signed-off-by: Alexander Potapenko <glider@google.com>
=================================

To avoid resending both patches and the cover letter, I'm going to wait
for further comments.

On Tue, Jul 12, 2016 at 10:02 PM, Andrew Morton
<akpm@linux-foundation.org> wrote:
> On Tue, 12 Jul 2016 20:12:44 +0200 Alexander Potapenko <glider@google.com> wrote:
>
>> When looking up the nearest SLUB object for a given address, correctly
>> calculate its offset if SLAB_RED_ZONE is enabled for that cache.
>
> What are the runtime effects of this fix?  Please always include this
> info when fixing bugs so that others can decide which kernel(s) need
> patching.
>



-- 
Alexander Potapenko
Software Engineer

Google Germany GmbH
Erika-Mann-Straße, 33
80636 München

Geschäftsführer: Matthew Scott Sucherman, Paul Terence Manicle
Registergericht und -nummer: Hamburg, HRB 86891
Sitz der Gesellschaft: Hamburg


^ permalink raw reply	[flat|nested] 5+ messages in thread

end of thread, other threads:[~2016-07-13 12:14 UTC | newest]

Thread overview: 5+ messages
-- links below jump to the message on this page --
2016-07-12 18:12 [PATCH v7 0/2] mm, kasan: stackdepot and quarantine for SLUB Alexander Potapenko
2016-07-12 18:12 ` [PATCH v7 1/2] mm, kasan: account for object redzone in SLUB's nearest_obj() Alexander Potapenko
2016-07-12 20:02   ` Andrew Morton
2016-07-13 12:14     ` Alexander Potapenko
2016-07-12 18:12 ` [PATCH v7 2/2] mm, kasan: switch SLUB to stackdepot, enable memory quarantine for SLUB Alexander Potapenko
