* [RFC PATCH 00/14] khwasan: kernel hardware assisted address sanitizer
@ 2018-03-02 19:44 Andrey Konovalov
  2018-03-02 19:44 ` [RFC PATCH 01/14] khwasan: change kasan hooks signatures Andrey Konovalov
                   ` (14 more replies)
  0 siblings, 15 replies; 65+ messages in thread
From: Andrey Konovalov @ 2018-03-02 19:44 UTC (permalink / raw)
  To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Mark Rutland, Ard Biesheuvel, Yury Norov,
	Nick Desaulniers, Marc Zyngier, Bob Picco, Suzuki K Poulose,
	Kristina Martsenko, Punit Agrawal, Dave Martin, James Morse,
	Julien Thierry, Michael Weiser, Steve Capper, Ingo Molnar,
	Thomas Gleixner, Sandipan Das, Paul Lawrence, David Woodhouse,
	Kees Cook, Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann,
	kasan-dev, linux-doc, linux-kernel, linux-arm-kernel, linux-ext4,
	linux-sparse, linux-mm, linux-kbuild, Kostya Serebryany,
	Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley,
	Ruben Ayrapetyan, Kees Cook, Jann Horn, Mark Brand
  Cc: Andrey Konovalov

This patchset adds a new mode to KASAN, which is called KHWASAN (Kernel
HardWare assisted Address SANitizer). There's still some work to do and
there are a few TODOs in the code, so I'm publishing this as an RFC to
collect some initial feedback.

The plan is to implement HWASan [1] for the kernel, with the expectation
that it will have comparable performance to KASAN while consuming much
less memory, trading that off for somewhat imprecise bug detection and
arm64-only support.

The overall idea of the approach used by KHWASAN is the following:

1. By using the Top Byte Ignore arm64 CPU feature, we can store pointer
   tags in the top byte of each kernel pointer.

2. Using shadow memory, we can store memory tags for each chunk of kernel
   memory.

3. On each memory allocation, we can generate a random tag, embed it into
   the returned pointer and set the memory tags that correspond to this
   chunk of memory to the same value.

4. By using compiler instrumentation, before each memory access we can add
   a check that the pointer tag matches the tag of the memory that is being
   accessed.

5. On a tag mismatch we report an error.
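
For illustration, here is a rough C sketch of the allocation-side tagging
described in steps 1-3 (this is not code from the patchset; the helper
names are hypothetical):

	/* Hypothetical sketch of tagging an allocation (not patchset code). */
	void *khwasan_tag_allocation(void *ptr, size_t size)
	{
		u8 tag = get_random_tag();	/* random per-allocation tag */

		/* Set the memory tags for the object's shadow (step 3). */
		set_memory_tags(ptr, size, tag);

		/* Embed the tag in the top byte, ignored via TBI (step 1). */
		return (void *)((unsigned long)ptr | ((unsigned long)tag << 56));
	}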

[1] http://clang.llvm.org/docs/HardwareAssistedAddressSanitizerDesign.html


====== Technical details

KHWASAN is implemented in a very similar way to KASAN. This patchset
essentially does the following:

1. TCR_TBI1 is set to enable Top Byte Ignore.

2. Shadow memory is used (with a different scale, 1:16, so each shadow
   byte corresponds to 16 bytes of kernel memory) to store memory tags.

3. All slab objects are aligned to the shadow scale, which is 16 bytes.

4. All pointers returned from the slab allocator are tagged with a random
   tag and the corresponding shadow memory is poisoned with the same value.

5. Compiler instrumentation is used to insert tag checks, either by
   calling callbacks or by inlining them (the CONFIG_KASAN_OUTLINE and
   CONFIG_KASAN_INLINE flags are reused).

6. When a tag mismatch is detected in callback instrumentation mode,
   KHWASAN simply prints a bug report. In the case of inline
   instrumentation, clang inserts a brk instruction, and KHWASAN has its
   own brk handler, which reports the bug.

7. The memory in between slab objects is marked with a random tag, and
   acts as a redzone.
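
Conceptually, the inserted tag check looks as follows (a simplified
sketch assuming the 1:16 shadow scale described above; shadow_offset and
report_tag_mismatch are hypothetical names, not the actual
implementation):

	/* Simplified sketch of a KHWASAN tag check (not patchset code). */
	void check_tag(unsigned long addr, size_t size, bool write)
	{
		u8 ptr_tag = addr >> 56;	/* tag from the pointer's top byte */
		unsigned long untagged = addr & ~(0xffUL << 56);
		u8 *shadow = (u8 *)((untagged >> 4) + shadow_offset); /* 1:16 */

		if (ptr_tag != *shadow)
			report_tag_mismatch(addr, size, write); /* or brk */
	}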

Bug detection is imprecise for two reasons:

1. We won't catch some small out-of-bounds accesses that fall into the
   same shadow cell.

2. We only have 1 byte to store tags, which means we have a 1/256
   probability of a tag match for an incorrect access.
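
For example (under the 1:16 scale), reading one byte past a 13-byte
object goes unnoticed, since the object is padded out to a 16-byte
granule that carries the object's tag. And since tags are 8 bits, a wild
access has a 1/256 chance of hitting memory whose tag happens to match,
i.e. roughly a 99.6% per-access detection probability.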


====== Benchmarks

So far I've only done a few simple tests of KHWASAN on arm64 in QEMU
emulation mode. I have yet to perform proper benchmarks on actual
hardware.

These are the numbers I got with the current prototype, and they are
likely to change.

Boot time:
* ~3.5 sec for clean kernel
* ~5.6 sec for KASAN
* ~8.9 sec for KHWASAN

The difference between KASAN and KHWASAN performance here can be
explained by the performance drop QEMU suffers when it has to emulate
Top Byte Ignore. I don't think there's any reason to believe that the
final implementation will cause a significant performance drop compared
to KASAN on actual hardware.

Slab memory usage after boot:
* ~15 kb for clean kernel
* ~60 kb for KASAN
* ~16 kb for KHWASAN

Note that KHWASAN (compared to KASAN) doesn't require a quarantine and
uses half as much shadow memory (1/16th vs 1/8th).
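
As a rough illustration of the shadow overhead difference: for 1 GB of
kernel memory, classic KASAN's 1/8th shadow costs 128 MB, while
KHWASAN's 1/16th shadow costs 64 MB, with no quarantine held back on top
of that.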


====== Some notes

1. The patchset can be found here:
   https://github.com/xairy/kasan-prototype/tree/khwasan

2. Building requires a recent LLVM version (r325711 or later).

3. Stack instrumentation is not supported yet (in progress).

4. There's at least one issue with using the top byte of kernel pointers,
   see the jbd2 commit for details.

5. There are still a few TODOs in the code that need to be addressed.


Andrey Konovalov (14):
  khwasan: change kasan hooks signatures
  khwasan: move common kasan and khwasan code to common.c
  khwasan: add CONFIG_KASAN_CLASSIC and CONFIG_KASAN_TAGS
  khwasan: adjust shadow size for CONFIG_KASAN_TAGS
  khwasan: initialize shadow to 0xff
  khwasan: enable top byte ignore for the kernel
  khwasan: add tag related helper functions
  khwasan: perform untagged pointers comparison in krealloc
  khwasan: add hooks implementation
  khwasan: add bug reporting routines
  khwasan: add brk handler for inline instrumentation
  khwasan, jbd2: add khwasan annotations
  khwasan: update kasan documentation
  khwasan: default the instrumentation mode to inline

 Documentation/dev-tools/kasan.rst      | 212 +++++++++-------
 arch/arm64/Kconfig                     |   1 +
 arch/arm64/Makefile                    |   2 +-
 arch/arm64/include/asm/brk-imm.h       |   2 +
 arch/arm64/include/asm/memory.h        |  13 +-
 arch/arm64/include/asm/pgtable-hwdef.h |   1 +
 arch/arm64/kernel/traps.c              |  40 +++
 arch/arm64/mm/kasan_init.c             |  13 +-
 arch/arm64/mm/proc.S                   |   8 +-
 fs/jbd2/journal.c                      |   6 +
 include/linux/compiler-clang.h         |   7 +-
 include/linux/compiler-gcc.h           |   4 +
 include/linux/compiler.h               |   3 +-
 include/linux/kasan.h                  |  84 ++++--
 lib/Kconfig.kasan                      |  70 +++--
 mm/kasan/Makefile                      |   9 +-
 mm/kasan/common.c                      | 325 ++++++++++++++++++++++++
 mm/kasan/kasan.c                       | 302 +---------------------
 mm/kasan/kasan.h                       |  29 +++
 mm/kasan/khwasan.c                     | 338 +++++++++++++++++++++++++
 mm/kasan/report.c                      |  88 ++++++-
 mm/slab.c                              |  12 +-
 mm/slab.h                              |   2 +-
 mm/slab_common.c                       |   6 +-
 mm/slub.c                              |  18 +-
 scripts/Makefile.kasan                 |  32 ++-
 26 files changed, 1177 insertions(+), 450 deletions(-)
 create mode 100644 mm/kasan/common.c
 create mode 100644 mm/kasan/khwasan.c

-- 
2.16.2.395.g2e18187dfd-goog


* [RFC PATCH 01/14] khwasan: change kasan hooks signatures
  2018-03-02 19:44 [RFC PATCH 00/14] khwasan: kernel hardware assisted address sanitizer Andrey Konovalov
@ 2018-03-02 19:44 ` Andrey Konovalov
  2018-03-02 19:44 ` [RFC PATCH 02/14] khwasan: move common kasan and khwasan code to common.c Andrey Konovalov
                   ` (13 subsequent siblings)
  14 siblings, 0 replies; 65+ messages in thread
From: Andrey Konovalov @ 2018-03-02 19:44 UTC (permalink / raw)
  To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Mark Rutland, Ard Biesheuvel, Yury Norov,
	Nick Desaulniers, Marc Zyngier, Bob Picco, Suzuki K Poulose,
	Kristina Martsenko, Punit Agrawal, Dave Martin, James Morse,
	Julien Thierry, Michael Weiser, Steve Capper, Ingo Molnar,
	Thomas Gleixner, Sandipan Das, Paul Lawrence, David Woodhouse,
	Kees Cook, Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann,
	kasan-dev, linux-doc, linux-kernel, linux-arm-kernel, linux-ext4,
	linux-sparse, linux-mm, linux-kbuild, Kostya Serebryany,
	Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley,
	Ruben Ayrapetyan, Kees Cook, Jann Horn, Mark Brand
  Cc: Andrey Konovalov

KHWASAN will change the value of the top byte of pointers returned from
the kernel allocation functions (such as kmalloc). This patch updates the
KASAN hook signatures and their usage in the SLAB and SLUB code to
reflect that.
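
The change to the callers follows a single pattern (an illustration based
on the diff below):

	/* Before: the hook returns void and only updates shadow memory. */
	kasan_slab_alloc(cachep, ret, flags);

	/* After: the hook may return a retagged pointer, which must be used. */
	ret = kasan_slab_alloc(cachep, ret, flags);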
---
 include/linux/kasan.h | 34 +++++++++++++++++++++++-----------
 mm/kasan/kasan.c      | 24 ++++++++++++++----------
 mm/slab.c             | 12 ++++++------
 mm/slab.h             |  2 +-
 mm/slab_common.c      |  4 ++--
 mm/slub.c             | 16 ++++++++--------
 6 files changed, 54 insertions(+), 38 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index adc13474a53b..3bfebcf7ad2b 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -53,14 +53,14 @@ void kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
 void kasan_poison_object_data(struct kmem_cache *cache, void *object);
 void kasan_init_slab_obj(struct kmem_cache *cache, const void *object);
 
-void kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags);
+void *kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags);
 void kasan_kfree_large(void *ptr, unsigned long ip);
 void kasan_poison_kfree(void *ptr, unsigned long ip);
-void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size,
+void *kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size,
 		  gfp_t flags);
-void kasan_krealloc(const void *object, size_t new_size, gfp_t flags);
+void *kasan_krealloc(const void *object, size_t new_size, gfp_t flags);
 
-void kasan_slab_alloc(struct kmem_cache *s, void *object, gfp_t flags);
+void *kasan_slab_alloc(struct kmem_cache *s, void *object, gfp_t flags);
 bool kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip);
 
 struct kasan_cache {
@@ -105,16 +105,28 @@ static inline void kasan_poison_object_data(struct kmem_cache *cache,
 static inline void kasan_init_slab_obj(struct kmem_cache *cache,
 				const void *object) {}
 
-static inline void kasan_kmalloc_large(void *ptr, size_t size, gfp_t flags) {}
+static inline void *kasan_kmalloc_large(void *ptr, size_t size, gfp_t flags)
+{
+	return ptr;
+}
 static inline void kasan_kfree_large(void *ptr, unsigned long ip) {}
 static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {}
-static inline void kasan_kmalloc(struct kmem_cache *s, const void *object,
-				size_t size, gfp_t flags) {}
-static inline void kasan_krealloc(const void *object, size_t new_size,
-				 gfp_t flags) {}
+static inline void *kasan_kmalloc(struct kmem_cache *s, const void *object,
+				size_t size, gfp_t flags)
+{
+	return (void *)object;
+}
+static inline void *kasan_krealloc(const void *object, size_t new_size,
+				 gfp_t flags)
+{
+	return (void *)object;
+}
 
-static inline void kasan_slab_alloc(struct kmem_cache *s, void *object,
-				   gfp_t flags) {}
+static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
+				   gfp_t flags)
+{
+	return object;
+}
 static inline bool kasan_slab_free(struct kmem_cache *s, void *object,
 				   unsigned long ip)
 {
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index e13d911251e7..d8cb63bd1ecc 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -484,9 +484,9 @@ void kasan_init_slab_obj(struct kmem_cache *cache, const void *object)
 	__memset(alloc_info, 0, sizeof(*alloc_info));
 }
 
-void kasan_slab_alloc(struct kmem_cache *cache, void *object, gfp_t flags)
+void *kasan_slab_alloc(struct kmem_cache *cache, void *object, gfp_t flags)
 {
-	kasan_kmalloc(cache, object, cache->object_size, flags);
+	return kasan_kmalloc(cache, object, cache->object_size, flags);
 }
 
 static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
@@ -527,7 +527,7 @@ bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
 	return __kasan_slab_free(cache, object, ip, true);
 }
 
-void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
+void *kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
 		   gfp_t flags)
 {
 	unsigned long redzone_start;
@@ -537,7 +537,7 @@ void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
 		quarantine_reduce();
 
 	if (unlikely(object == NULL))
-		return;
+		return NULL;
 
 	redzone_start = round_up((unsigned long)(object + size),
 				KASAN_SHADOW_SCALE_SIZE);
@@ -550,10 +550,12 @@ void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
 
 	if (cache->flags & SLAB_KASAN)
 		set_track(&get_alloc_info(cache, object)->alloc_track, flags);
+
+	return (void *)object;
 }
 EXPORT_SYMBOL(kasan_kmalloc);
 
-void kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
+void *kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
 {
 	struct page *page;
 	unsigned long redzone_start;
@@ -563,7 +565,7 @@ void kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
 		quarantine_reduce();
 
 	if (unlikely(ptr == NULL))
-		return;
+		return NULL;
 
 	page = virt_to_page(ptr);
 	redzone_start = round_up((unsigned long)(ptr + size),
@@ -573,21 +575,23 @@ void kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
 	kasan_unpoison_shadow(ptr, size);
 	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
 		KASAN_PAGE_REDZONE);
+
+	return (void *)ptr;
 }
 
-void kasan_krealloc(const void *object, size_t size, gfp_t flags)
+void *kasan_krealloc(const void *object, size_t size, gfp_t flags)
 {
 	struct page *page;
 
 	if (unlikely(object == ZERO_SIZE_PTR))
-		return;
+		return ZERO_SIZE_PTR;
 
 	page = virt_to_head_page(object);
 
 	if (unlikely(!PageSlab(page)))
-		kasan_kmalloc_large(object, size, flags);
+		return kasan_kmalloc_large(object, size, flags);
 	else
-		kasan_kmalloc(page->slab_cache, object, size, flags);
+		return kasan_kmalloc(page->slab_cache, object, size, flags);
 }
 
 void kasan_poison_kfree(void *ptr, unsigned long ip)
diff --git a/mm/slab.c b/mm/slab.c
index 324446621b3e..ec6a9e8696ab 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3538,7 +3538,7 @@ void *kmem_cache_alloc(struct kmem_cache *cachep, gfp_t flags)
 {
 	void *ret = slab_alloc(cachep, flags, _RET_IP_);
 
-	kasan_slab_alloc(cachep, ret, flags);
+	ret = kasan_slab_alloc(cachep, ret, flags);
 	trace_kmem_cache_alloc(_RET_IP_, ret,
 			       cachep->object_size, cachep->size, flags);
 
@@ -3604,7 +3604,7 @@ kmem_cache_alloc_trace(struct kmem_cache *cachep, gfp_t flags, size_t size)
 
 	ret = slab_alloc(cachep, flags, _RET_IP_);
 
-	kasan_kmalloc(cachep, ret, size, flags);
+	ret = kasan_kmalloc(cachep, ret, size, flags);
 	trace_kmalloc(_RET_IP_, ret,
 		      size, cachep->size, flags);
 	return ret;
@@ -3628,7 +3628,7 @@ void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid)
 {
 	void *ret = slab_alloc_node(cachep, flags, nodeid, _RET_IP_);
 
-	kasan_slab_alloc(cachep, ret, flags);
+	ret = kasan_slab_alloc(cachep, ret, flags);
 	trace_kmem_cache_alloc_node(_RET_IP_, ret,
 				    cachep->object_size, cachep->size,
 				    flags, nodeid);
@@ -3647,7 +3647,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep,
 
 	ret = slab_alloc_node(cachep, flags, nodeid, _RET_IP_);
 
-	kasan_kmalloc(cachep, ret, size, flags);
+	ret = kasan_kmalloc(cachep, ret, size, flags);
 	trace_kmalloc_node(_RET_IP_, ret,
 			   size, cachep->size,
 			   flags, nodeid);
@@ -3666,7 +3666,7 @@ __do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller)
 	if (unlikely(ZERO_OR_NULL_PTR(cachep)))
 		return cachep;
 	ret = kmem_cache_alloc_node_trace(cachep, flags, node, size);
-	kasan_kmalloc(cachep, ret, size, flags);
+	ret = kasan_kmalloc(cachep, ret, size, flags);
 
 	return ret;
 }
@@ -3702,7 +3702,7 @@ static __always_inline void *__do_kmalloc(size_t size, gfp_t flags,
 		return cachep;
 	ret = slab_alloc(cachep, flags, caller);
 
-	kasan_kmalloc(cachep, ret, size, flags);
+	ret = kasan_kmalloc(cachep, ret, size, flags);
 	trace_kmalloc(caller, ret,
 		      size, cachep->size, flags);
 
diff --git a/mm/slab.h b/mm/slab.h
index 51813236e773..8a588d9d89a0 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -440,7 +440,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags,
 
 		kmemleak_alloc_recursive(object, s->object_size, 1,
 					 s->flags, flags);
-		kasan_slab_alloc(s, object, flags);
+		p[i] = kasan_slab_alloc(s, object, flags);
 	}
 
 	if (memcg_kmem_enabled())
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 10f127b2de7c..a33e61315ca6 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1164,7 +1164,7 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
 	page = alloc_pages(flags, order);
 	ret = page ? page_address(page) : NULL;
 	kmemleak_alloc(ret, size, 1, flags);
-	kasan_kmalloc_large(ret, size, flags);
+	ret = kasan_kmalloc_large(ret, size, flags);
 	return ret;
 }
 EXPORT_SYMBOL(kmalloc_order);
@@ -1442,7 +1442,7 @@ static __always_inline void *__do_krealloc(const void *p, size_t new_size,
 		ks = ksize(p);
 
 	if (ks >= new_size) {
-		kasan_krealloc((void *)p, new_size, flags);
+		p = kasan_krealloc((void *)p, new_size, flags);
 		return (void *)p;
 	}
 
diff --git a/mm/slub.c b/mm/slub.c
index f111c2a908b9..4a856512f225 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1350,10 +1350,10 @@ static inline void dec_slabs_node(struct kmem_cache *s, int node,
  * Hooks for other subsystems that check memory allocations. In a typical
  * production configuration these hooks all should produce no code at all.
  */
-static inline void kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
+static inline void kmalloc_large_node_hook(void **ptr, size_t size, gfp_t flags)
 {
-	kmemleak_alloc(ptr, size, 1, flags);
-	kasan_kmalloc_large(ptr, size, flags);
+	kmemleak_alloc(*ptr, size, 1, flags);
+	*ptr = kasan_kmalloc_large(*ptr, size, flags);
 }
 
 static __always_inline void kfree_hook(void *x)
@@ -2758,7 +2758,7 @@ void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 {
 	void *ret = slab_alloc(s, gfpflags, _RET_IP_);
 	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags);
-	kasan_kmalloc(s, ret, size, gfpflags);
+	ret = kasan_kmalloc(s, ret, size, gfpflags);
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_trace);
@@ -2786,7 +2786,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 	trace_kmalloc_node(_RET_IP_, ret,
 			   size, s->size, gfpflags, node);
 
-	kasan_kmalloc(s, ret, size, gfpflags);
+	ret = kasan_kmalloc(s, ret, size, gfpflags);
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
@@ -3767,7 +3767,7 @@ void *__kmalloc(size_t size, gfp_t flags)
 
 	trace_kmalloc(_RET_IP_, ret, size, s->size, flags);
 
-	kasan_kmalloc(s, ret, size, flags);
+	ret = kasan_kmalloc(s, ret, size, flags);
 
 	return ret;
 }
@@ -3784,7 +3784,7 @@ static void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 	if (page)
 		ptr = page_address(page);
 
-	kmalloc_large_node_hook(ptr, size, flags);
+	kmalloc_large_node_hook(&ptr, size, flags);
 	return ptr;
 }
 
@@ -3812,7 +3812,7 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 
 	trace_kmalloc_node(_RET_IP_, ret, size, s->size, flags, node);
 
-	kasan_kmalloc(s, ret, size, flags);
+	ret = kasan_kmalloc(s, ret, size, flags);
 
 	return ret;
 }
-- 
2.16.2.395.g2e18187dfd-goog


* [RFC PATCH 02/14] khwasan: move common kasan and khwasan code to common.c
  2018-03-02 19:44 [RFC PATCH 00/14] khwasan: kernel hardware assisted address sanitizer Andrey Konovalov
  2018-03-02 19:44 ` [RFC PATCH 01/14] khwasan: change kasan hooks signatures Andrey Konovalov
@ 2018-03-02 19:44 ` Andrey Konovalov
  2018-03-02 19:44 ` [RFC PATCH 03/14] khwasan: add CONFIG_KASAN_CLASSIC and CONFIG_KASAN_TAGS Andrey Konovalov
                   ` (12 subsequent siblings)
  14 siblings, 0 replies; 65+ messages in thread
From: Andrey Konovalov @ 2018-03-02 19:44 UTC (permalink / raw)
  To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Mark Rutland, Ard Biesheuvel, Yury Norov,
	Nick Desaulniers, Marc Zyngier, Bob Picco, Suzuki K Poulose,
	Kristina Martsenko, Punit Agrawal, Dave Martin, James Morse,
	Julien Thierry, Michael Weiser, Steve Capper, Ingo Molnar,
	Thomas Gleixner, Sandipan Das, Paul Lawrence, David Woodhouse,
	Kees Cook, Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann,
	kasan-dev, linux-doc, linux-kernel, linux-arm-kernel, linux-ext4,
	linux-sparse, linux-mm, linux-kbuild, Kostya Serebryany,
	Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley,
	Ruben Ayrapetyan, Kees Cook, Jann Horn, Mark Brand
  Cc: Andrey Konovalov

KHWASAN will reuse a significant part of KASAN code, so move the common
parts to common.c without any functional changes.
---
 mm/kasan/Makefile |   5 +-
 mm/kasan/common.c | 318 ++++++++++++++++++++++++++++++++++++++++++++++
 mm/kasan/kasan.c  | 288 +----------------------------------------
 mm/kasan/kasan.h  |   4 +
 4 files changed, 330 insertions(+), 285 deletions(-)
 create mode 100644 mm/kasan/common.c

diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
index 3289db38bc87..a6df14bffb6b 100644
--- a/mm/kasan/Makefile
+++ b/mm/kasan/Makefile
@@ -1,11 +1,14 @@
 # SPDX-License-Identifier: GPL-2.0
 KASAN_SANITIZE := n
+UBSAN_SANITIZE_common.o := n
 UBSAN_SANITIZE_kasan.o := n
 KCOV_INSTRUMENT := n
 
 CFLAGS_REMOVE_kasan.o = -pg
 # Function splitter causes unnecessary splits in __asan_load1/__asan_store1
 # see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533
+
+CFLAGS_common.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector)
 CFLAGS_kasan.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector)
 
-obj-y := kasan.o report.o kasan_init.o quarantine.o
+obj-y := common.o kasan.o report.o kasan_init.o quarantine.o
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
new file mode 100644
index 000000000000..08f6c8cb9f84
--- /dev/null
+++ b/mm/kasan/common.c
@@ -0,0 +1,318 @@
+/*
+ * This file contains common KASAN and KHWASAN code.
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <ryabinin.a.a@gmail.com>
+ *
+ * Some code borrowed from https://github.com/xairy/kasan-prototype by
+ *        Andrey Konovalov <andreyknvl@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/export.h>
+#include <linux/interrupt.h>
+#include <linux/init.h>
+#include <linux/kasan.h>
+#include <linux/kernel.h>
+#include <linux/kmemleak.h>
+#include <linux/linkage.h>
+#include <linux/memblock.h>
+#include <linux/memory.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/sched/task_stack.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/vmalloc.h>
+#include <linux/bug.h>
+
+#include "kasan.h"
+#include "../slab.h"
+
+void kasan_enable_current(void)
+{
+	current->kasan_depth++;
+}
+
+void kasan_disable_current(void)
+{
+	current->kasan_depth--;
+}
+
+static void __kasan_unpoison_stack(struct task_struct *task, const void *sp)
+{
+	void *base = task_stack_page(task);
+	size_t size = sp - base;
+
+	kasan_unpoison_shadow(base, size);
+}
+
+/* Unpoison the entire stack for a task. */
+void kasan_unpoison_task_stack(struct task_struct *task)
+{
+	__kasan_unpoison_stack(task, task_stack_page(task) + THREAD_SIZE);
+}
+
+/* Unpoison the stack for the current task beyond a watermark sp value. */
+asmlinkage void kasan_unpoison_task_stack_below(const void *watermark)
+{
+	/*
+	 * Calculate the task stack base address.  Avoid using 'current'
+	 * because this function is called by early resume code which hasn't
+	 * yet set up the percpu register (%gs).
+	 */
+	void *base = (void *)((unsigned long)watermark & ~(THREAD_SIZE - 1));
+
+	kasan_unpoison_shadow(base, watermark - base);
+}
+
+/*
+ * Clear all poison for the region between the current SP and a provided
+ * watermark value, as is sometimes required prior to hand-crafted asm function
+ * returns in the middle of functions.
+ */
+void kasan_unpoison_stack_above_sp_to(const void *watermark)
+{
+	const void *sp = __builtin_frame_address(0);
+	size_t size = watermark - sp;
+
+	if (WARN_ON(sp > watermark))
+		return;
+	kasan_unpoison_shadow(sp, size);
+}
+
+void kasan_check_read(const volatile void *p, unsigned int size)
+{
+	check_memory_region((unsigned long)p, size, false, _RET_IP_);
+}
+EXPORT_SYMBOL(kasan_check_read);
+
+void kasan_check_write(const volatile void *p, unsigned int size)
+{
+	check_memory_region((unsigned long)p, size, true, _RET_IP_);
+}
+EXPORT_SYMBOL(kasan_check_write);
+
+#undef memset
+void *memset(void *addr, int c, size_t len)
+{
+	check_memory_region((unsigned long)addr, len, true, _RET_IP_);
+
+	return __memset(addr, c, len);
+}
+
+#undef memmove
+void *memmove(void *dest, const void *src, size_t len)
+{
+	check_memory_region((unsigned long)src, len, false, _RET_IP_);
+	check_memory_region((unsigned long)dest, len, true, _RET_IP_);
+
+	return __memmove(dest, src, len);
+}
+
+#undef memcpy
+void *memcpy(void *dest, const void *src, size_t len)
+{
+	check_memory_region((unsigned long)src, len, false, _RET_IP_);
+	check_memory_region((unsigned long)dest, len, true, _RET_IP_);
+
+	return __memcpy(dest, src, len);
+}
+
+void kasan_alloc_pages(struct page *page, unsigned int order)
+{
+	if (likely(!PageHighMem(page)))
+		kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order);
+}
+
+size_t kasan_metadata_size(struct kmem_cache *cache)
+{
+	return (cache->kasan_info.alloc_meta_offset ?
+		sizeof(struct kasan_alloc_meta) : 0) +
+		(cache->kasan_info.free_meta_offset ?
+		sizeof(struct kasan_free_meta) : 0);
+}
+
+void kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
+{
+	kasan_unpoison_shadow(object, cache->object_size);
+}
+
+static inline int in_irqentry_text(unsigned long ptr)
+{
+	return (ptr >= (unsigned long)&__irqentry_text_start &&
+		ptr < (unsigned long)&__irqentry_text_end) ||
+		(ptr >= (unsigned long)&__softirqentry_text_start &&
+		 ptr < (unsigned long)&__softirqentry_text_end);
+}
+
+static inline void filter_irq_stacks(struct stack_trace *trace)
+{
+	int i;
+
+	if (!trace->nr_entries)
+		return;
+	for (i = 0; i < trace->nr_entries; i++)
+		if (in_irqentry_text(trace->entries[i])) {
+			/* Include the irqentry function into the stack. */
+			trace->nr_entries = i + 1;
+			break;
+		}
+}
+
+static inline depot_stack_handle_t save_stack(gfp_t flags)
+{
+	unsigned long entries[KASAN_STACK_DEPTH];
+	struct stack_trace trace = {
+		.nr_entries = 0,
+		.entries = entries,
+		.max_entries = KASAN_STACK_DEPTH,
+		.skip = 0
+	};
+
+	save_stack_trace(&trace);
+	filter_irq_stacks(&trace);
+	if (trace.nr_entries != 0 &&
+	    trace.entries[trace.nr_entries-1] == ULONG_MAX)
+		trace.nr_entries--;
+
+	return depot_save_stack(&trace, flags);
+}
+
+void set_track(struct kasan_track *track, gfp_t flags)
+{
+	track->pid = current->pid;
+	track->stack = save_stack(flags);
+}
+
+struct kasan_alloc_meta *get_alloc_info(struct kmem_cache *cache,
+					const void *object)
+{
+	BUILD_BUG_ON(sizeof(struct kasan_alloc_meta) > 32);
+	return (void *)object + cache->kasan_info.alloc_meta_offset;
+}
+
+struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
+				      const void *object)
+{
+	BUILD_BUG_ON(sizeof(struct kasan_free_meta) > 32);
+	return (void *)object + cache->kasan_info.free_meta_offset;
+}
+
+void kasan_init_slab_obj(struct kmem_cache *cache, const void *object)
+{
+	struct kasan_alloc_meta *alloc_info;
+
+	if (!(cache->flags & SLAB_KASAN))
+		return;
+
+	alloc_info = get_alloc_info(cache, object);
+	__memset(alloc_info, 0, sizeof(*alloc_info));
+}
+
+void *kasan_krealloc(const void *object, size_t size, gfp_t flags)
+{
+	struct page *page;
+
+	if (unlikely(object == ZERO_SIZE_PTR))
+		return (void *)object;
+
+	page = virt_to_head_page(object);
+
+	if (unlikely(!PageSlab(page)))
+		return kasan_kmalloc_large(object, size, flags);
+	else
+		return kasan_kmalloc(page->slab_cache, object, size, flags);
+}
+
+int kasan_module_alloc(void *addr, size_t size)
+{
+	void *ret;
+	size_t shadow_size;
+	unsigned long shadow_start;
+
+	shadow_start = (unsigned long)kasan_mem_to_shadow(addr);
+	shadow_size = round_up(size >> KASAN_SHADOW_SCALE_SHIFT,
+			PAGE_SIZE);
+
+	if (WARN_ON(!PAGE_ALIGNED(shadow_start)))
+		return -EINVAL;
+
+	ret = __vmalloc_node_range(shadow_size, 1, shadow_start,
+			shadow_start + shadow_size,
+			GFP_KERNEL | __GFP_ZERO,
+			PAGE_KERNEL, VM_NO_GUARD, NUMA_NO_NODE,
+			__builtin_return_address(0));
+
+	if (ret) {
+		find_vm_area(addr)->flags |= VM_KASAN;
+		kmemleak_ignore(ret);
+		return 0;
+	}
+
+	return -ENOMEM;
+}
+
+void kasan_free_shadow(const struct vm_struct *vm)
+{
+	if (vm->flags & VM_KASAN)
+		vfree(kasan_mem_to_shadow(vm->addr));
+}
+
+#ifdef CONFIG_MEMORY_HOTPLUG
+static int __meminit kasan_mem_notifier(struct notifier_block *nb,
+			unsigned long action, void *data)
+{
+	struct memory_notify *mem_data = data;
+	unsigned long nr_shadow_pages, start_kaddr, shadow_start;
+	unsigned long shadow_end, shadow_size;
+
+	nr_shadow_pages = mem_data->nr_pages >> KASAN_SHADOW_SCALE_SHIFT;
+	start_kaddr = (unsigned long)pfn_to_kaddr(mem_data->start_pfn);
+	shadow_start = (unsigned long)kasan_mem_to_shadow((void *)start_kaddr);
+	shadow_size = nr_shadow_pages << PAGE_SHIFT;
+	shadow_end = shadow_start + shadow_size;
+
+	if (WARN_ON(mem_data->nr_pages % KASAN_SHADOW_SCALE_SIZE) ||
+		WARN_ON(start_kaddr % (KASAN_SHADOW_SCALE_SIZE << PAGE_SHIFT)))
+		return NOTIFY_BAD;
+
+	switch (action) {
+	case MEM_GOING_ONLINE: {
+		void *ret;
+
+		ret = __vmalloc_node_range(shadow_size, PAGE_SIZE, shadow_start,
+					shadow_end, GFP_KERNEL,
+					PAGE_KERNEL, VM_NO_GUARD,
+					pfn_to_nid(mem_data->start_pfn),
+					__builtin_return_address(0));
+		if (!ret)
+			return NOTIFY_BAD;
+
+		kmemleak_ignore(ret);
+		return NOTIFY_OK;
+	}
+	case MEM_OFFLINE:
+		vfree((void *)shadow_start);
+	}
+
+	return NOTIFY_OK;
+}
+
+static int __init kasan_memhotplug_init(void)
+{
+	hotplug_memory_notifier(kasan_mem_notifier, 0);
+
+	return 0;
+}
+
+module_init(kasan_memhotplug_init);
+#endif
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index d8cb63bd1ecc..d026286de750 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -1,5 +1,6 @@
 /*
- * This file contains shadow memory manipulation code.
+ * This file contains core KASAN code, including shadow memory manipulation
+ * code, implementation of KASAN hooks and compiler inserted callbacks, etc.
  *
  * Copyright (c) 2014 Samsung Electronics Co., Ltd.
  * Author: Andrey Ryabinin <ryabinin.a.a@gmail.com>
@@ -40,21 +41,11 @@
 #include "kasan.h"
 #include "../slab.h"
 
-void kasan_enable_current(void)
-{
-	current->kasan_depth++;
-}
-
-void kasan_disable_current(void)
-{
-	current->kasan_depth--;
-}
-
 /*
  * Poisons the shadow memory for 'size' bytes starting from 'addr'.
  * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE.
  */
-static void kasan_poison_shadow(const void *address, size_t size, u8 value)
+void kasan_poison_shadow(const void *address, size_t size, u8 value)
 {
 	void *shadow_start, *shadow_end;
 
@@ -74,48 +65,6 @@ void kasan_unpoison_shadow(const void *address, size_t size)
 	}
 }
 
-static void __kasan_unpoison_stack(struct task_struct *task, const void *sp)
-{
-	void *base = task_stack_page(task);
-	size_t size = sp - base;
-
-	kasan_unpoison_shadow(base, size);
-}
-
-/* Unpoison the entire stack for a task. */
-void kasan_unpoison_task_stack(struct task_struct *task)
-{
-	__kasan_unpoison_stack(task, task_stack_page(task) + THREAD_SIZE);
-}
-
-/* Unpoison the stack for the current task beyond a watermark sp value. */
-asmlinkage void kasan_unpoison_task_stack_below(const void *watermark)
-{
-	/*
-	 * Calculate the task stack base address.  Avoid using 'current'
-	 * because this function is called by early resume code which hasn't
-	 * yet set up the percpu register (%gs).
-	 */
-	void *base = (void *)((unsigned long)watermark & ~(THREAD_SIZE - 1));
-
-	kasan_unpoison_shadow(base, watermark - base);
-}
-
-/*
- * Clear all poison for the region between the current SP and a provided
- * watermark value, as is sometimes required prior to hand-crafted asm function
- * returns in the middle of functions.
- */
-void kasan_unpoison_stack_above_sp_to(const void *watermark)
-{
-	const void *sp = __builtin_frame_address(0);
-	size_t size = watermark - sp;
-
-	if (WARN_ON(sp > watermark))
-		return;
-	kasan_unpoison_shadow(sp, size);
-}
-
 /*
  * All functions below always inlined so compiler could
  * perform better optimizations in each of __asan_loadX/__assn_storeX
@@ -260,57 +209,12 @@ static __always_inline void check_memory_region_inline(unsigned long addr,
 	kasan_report(addr, size, write, ret_ip);
 }
 
-static void check_memory_region(unsigned long addr,
-				size_t size, bool write,
+void check_memory_region(unsigned long addr, size_t size, bool write,
 				unsigned long ret_ip)
 {
 	check_memory_region_inline(addr, size, write, ret_ip);
 }
 
-void kasan_check_read(const volatile void *p, unsigned int size)
-{
-	check_memory_region((unsigned long)p, size, false, _RET_IP_);
-}
-EXPORT_SYMBOL(kasan_check_read);
-
-void kasan_check_write(const volatile void *p, unsigned int size)
-{
-	check_memory_region((unsigned long)p, size, true, _RET_IP_);
-}
-EXPORT_SYMBOL(kasan_check_write);
-
-#undef memset
-void *memset(void *addr, int c, size_t len)
-{
-	check_memory_region((unsigned long)addr, len, true, _RET_IP_);
-
-	return __memset(addr, c, len);
-}
-
-#undef memmove
-void *memmove(void *dest, const void *src, size_t len)
-{
-	check_memory_region((unsigned long)src, len, false, _RET_IP_);
-	check_memory_region((unsigned long)dest, len, true, _RET_IP_);
-
-	return __memmove(dest, src, len);
-}
-
-#undef memcpy
-void *memcpy(void *dest, const void *src, size_t len)
-{
-	check_memory_region((unsigned long)src, len, false, _RET_IP_);
-	check_memory_region((unsigned long)dest, len, true, _RET_IP_);
-
-	return __memcpy(dest, src, len);
-}
-
-void kasan_alloc_pages(struct page *page, unsigned int order)
-{
-	if (likely(!PageHighMem(page)))
-		kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order);
-}
-
 void kasan_free_pages(struct page *page, unsigned int order)
 {
 	if (likely(!PageHighMem(page)))
@@ -385,14 +289,6 @@ void kasan_cache_shutdown(struct kmem_cache *cache)
 	quarantine_remove_cache(cache);
 }
 
-size_t kasan_metadata_size(struct kmem_cache *cache)
-{
-	return (cache->kasan_info.alloc_meta_offset ?
-		sizeof(struct kasan_alloc_meta) : 0) +
-		(cache->kasan_info.free_meta_offset ?
-		sizeof(struct kasan_free_meta) : 0);
-}
-
 void kasan_poison_slab(struct page *page)
 {
 	kasan_poison_shadow(page_address(page),
@@ -400,11 +296,6 @@ void kasan_poison_slab(struct page *page)
 			KASAN_KMALLOC_REDZONE);
 }
 
-void kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
-{
-	kasan_unpoison_shadow(object, cache->object_size);
-}
-
 void kasan_poison_object_data(struct kmem_cache *cache, void *object)
 {
 	kasan_poison_shadow(object,
@@ -412,78 +303,6 @@ void kasan_poison_object_data(struct kmem_cache *cache, void *object)
 			KASAN_KMALLOC_REDZONE);
 }
 
-static inline int in_irqentry_text(unsigned long ptr)
-{
-	return (ptr >= (unsigned long)&__irqentry_text_start &&
-		ptr < (unsigned long)&__irqentry_text_end) ||
-		(ptr >= (unsigned long)&__softirqentry_text_start &&
-		 ptr < (unsigned long)&__softirqentry_text_end);
-}
-
-static inline void filter_irq_stacks(struct stack_trace *trace)
-{
-	int i;
-
-	if (!trace->nr_entries)
-		return;
-	for (i = 0; i < trace->nr_entries; i++)
-		if (in_irqentry_text(trace->entries[i])) {
-			/* Include the irqentry function into the stack. */
-			trace->nr_entries = i + 1;
-			break;
-		}
-}
-
-static inline depot_stack_handle_t save_stack(gfp_t flags)
-{
-	unsigned long entries[KASAN_STACK_DEPTH];
-	struct stack_trace trace = {
-		.nr_entries = 0,
-		.entries = entries,
-		.max_entries = KASAN_STACK_DEPTH,
-		.skip = 0
-	};
-
-	save_stack_trace(&trace);
-	filter_irq_stacks(&trace);
-	if (trace.nr_entries != 0 &&
-	    trace.entries[trace.nr_entries-1] == ULONG_MAX)
-		trace.nr_entries--;
-
-	return depot_save_stack(&trace, flags);
-}
-
-static inline void set_track(struct kasan_track *track, gfp_t flags)
-{
-	track->pid = current->pid;
-	track->stack = save_stack(flags);
-}
-
-struct kasan_alloc_meta *get_alloc_info(struct kmem_cache *cache,
-					const void *object)
-{
-	BUILD_BUG_ON(sizeof(struct kasan_alloc_meta) > 32);
-	return (void *)object + cache->kasan_info.alloc_meta_offset;
-}
-
-struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
-				      const void *object)
-{
-	BUILD_BUG_ON(sizeof(struct kasan_free_meta) > 32);
-	return (void *)object + cache->kasan_info.free_meta_offset;
-}
-
-void kasan_init_slab_obj(struct kmem_cache *cache, const void *object)
-{
-	struct kasan_alloc_meta *alloc_info;
-
-	if (!(cache->flags & SLAB_KASAN))
-		return;
-
-	alloc_info = get_alloc_info(cache, object);
-	__memset(alloc_info, 0, sizeof(*alloc_info));
-}
-
 void *kasan_slab_alloc(struct kmem_cache *cache, void *object, gfp_t flags)
 {
 	return kasan_kmalloc(cache, object, cache->object_size, flags);
@@ -579,21 +398,6 @@ void *kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
 	return (void *)ptr;
 }
 
-void *kasan_krealloc(const void *object, size_t size, gfp_t flags)
-{
-	struct page *page;
-
-	if (unlikely(object == ZERO_SIZE_PTR))
-		return ZERO_SIZE_PTR;
-
-	page = virt_to_head_page(object);
-
-	if (unlikely(!PageSlab(page)))
-		return kasan_kmalloc_large(object, size, flags);
-	else
-		return kasan_kmalloc(page->slab_cache, object, size, flags);
-}
-
 void kasan_poison_kfree(void *ptr, unsigned long ip)
 {
 	struct page *page;
@@ -619,40 +423,6 @@ void kasan_kfree_large(void *ptr, unsigned long ip)
 	/* The object will be poisoned by page_alloc. */
 }
 
-int kasan_module_alloc(void *addr, size_t size)
-{
-	void *ret;
-	size_t shadow_size;
-	unsigned long shadow_start;
-
-	shadow_start = (unsigned long)kasan_mem_to_shadow(addr);
-	shadow_size = round_up(size >> KASAN_SHADOW_SCALE_SHIFT,
-			PAGE_SIZE);
-
-	if (WARN_ON(!PAGE_ALIGNED(shadow_start)))
-		return -EINVAL;
-
-	ret = __vmalloc_node_range(shadow_size, 1, shadow_start,
-			shadow_start + shadow_size,
-			GFP_KERNEL | __GFP_ZERO,
-			PAGE_KERNEL, VM_NO_GUARD, NUMA_NO_NODE,
-			__builtin_return_address(0));
-
-	if (ret) {
-		find_vm_area(addr)->flags |= VM_KASAN;
-		kmemleak_ignore(ret);
-		return 0;
-	}
-
-	return -ENOMEM;
-}
-
-void kasan_free_shadow(const struct vm_struct *vm)
-{
-	if (vm->flags & VM_KASAN)
-		vfree(kasan_mem_to_shadow(vm->addr));
-}
-
 static void register_global(struct kasan_global *global)
 {
 	size_t aligned_size = round_up(global->size, KASAN_SHADOW_SCALE_SIZE);
@@ -793,53 +563,3 @@ DEFINE_ASAN_SET_SHADOW(f2);
 DEFINE_ASAN_SET_SHADOW(f3);
 DEFINE_ASAN_SET_SHADOW(f5);
 DEFINE_ASAN_SET_SHADOW(f8);
-
-#ifdef CONFIG_MEMORY_HOTPLUG
-static int __meminit kasan_mem_notifier(struct notifier_block *nb,
-			unsigned long action, void *data)
-{
-	struct memory_notify *mem_data = data;
-	unsigned long nr_shadow_pages, start_kaddr, shadow_start;
-	unsigned long shadow_end, shadow_size;
-
-	nr_shadow_pages = mem_data->nr_pages >> KASAN_SHADOW_SCALE_SHIFT;
-	start_kaddr = (unsigned long)pfn_to_kaddr(mem_data->start_pfn);
-	shadow_start = (unsigned long)kasan_mem_to_shadow((void *)start_kaddr);
-	shadow_size = nr_shadow_pages << PAGE_SHIFT;
-	shadow_end = shadow_start + shadow_size;
-
-	if (WARN_ON(mem_data->nr_pages % KASAN_SHADOW_SCALE_SIZE) ||
-		WARN_ON(start_kaddr % (KASAN_SHADOW_SCALE_SIZE << PAGE_SHIFT)))
-		return NOTIFY_BAD;
-
-	switch (action) {
-	case MEM_GOING_ONLINE: {
-		void *ret;
-
-		ret = __vmalloc_node_range(shadow_size, PAGE_SIZE, shadow_start,
-					shadow_end, GFP_KERNEL,
-					PAGE_KERNEL, VM_NO_GUARD,
-					pfn_to_nid(mem_data->start_pfn),
-					__builtin_return_address(0));
-		if (!ret)
-			return NOTIFY_BAD;
-
-		kmemleak_ignore(ret);
-		return NOTIFY_OK;
-	}
-	case MEM_OFFLINE:
-		vfree((void *)shadow_start);
-	}
-
-	return NOTIFY_OK;
-}
-
-static int __init kasan_memhotplug_init(void)
-{
-	hotplug_memory_notifier(kasan_mem_notifier, 0);
-
-	return 0;
-}
-
-module_init(kasan_memhotplug_init);
-#endif
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index c12dcfde2ebd..2be31754278e 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -94,6 +94,7 @@ struct kasan_free_meta {
 	struct qlist_node quarantine_link;
 };
 
+void set_track(struct kasan_track *track, gfp_t flags);
 struct kasan_alloc_meta *get_alloc_info(struct kmem_cache *cache,
 					const void *object);
 struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
@@ -105,6 +106,9 @@ static inline const void *kasan_shadow_to_mem(const void *shadow_addr)
 		<< KASAN_SHADOW_SCALE_SHIFT);
 }
 
+void check_memory_region(unsigned long addr, size_t size, bool write,
+				unsigned long ret_ip);
+
 void kasan_report(unsigned long addr, size_t size,
 		bool is_write, unsigned long ip);
 void kasan_report_invalid_free(void *object, unsigned long ip);
-- 
2.16.2.395.g2e18187dfd-goog


* [RFC PATCH 03/14] khwasan: add CONFIG_KASAN_CLASSIC and CONFIG_KASAN_TAGS
  2018-03-02 19:44 [RFC PATCH 00/14] khwasan: kernel hardware assisted address sanitizer Andrey Konovalov
  2018-03-02 19:44 ` [RFC PATCH 01/14] khwasan: change kasan hooks signatures Andrey Konovalov
  2018-03-02 19:44 ` [RFC PATCH 02/14] khwasan: move common kasan and khwasan code to common.c Andrey Konovalov
@ 2018-03-02 19:44 ` Andrey Konovalov
  2018-03-02 19:44 ` [RFC PATCH 04/14] khwasan: adjust shadow size for CONFIG_KASAN_TAGS Andrey Konovalov
                   ` (11 subsequent siblings)
  14 siblings, 0 replies; 65+ messages in thread
From: Andrey Konovalov @ 2018-03-02 19:44 UTC (permalink / raw)
  To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Mark Rutland, Ard Biesheuvel, Yury Norov,
	Nick Desaulniers, Marc Zyngier, Bob Picco, Suzuki K Poulose,
	Kristina Martsenko, Punit Agrawal, Dave Martin, James Morse,
	Julien Thierry, Michael Weiser, Steve Capper, Ingo Molnar,
	Thomas Gleixner, Sandipan Das, Paul Lawrence, David Woodhouse,
	Kees Cook, Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann,
	kasan-dev, linux-doc, linux-kernel, linux-arm-kernel, linux-ext4,
	linux-sparse, linux-mm, linux-kbuild, Kostya Serebryany,
	Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley,
	Ruben Ayrapetyan, Kees Cook, Jann Horn, Mark Brand
  Cc: Andrey Konovalov

This commit splits the current CONFIG_KASAN config option into two:
1. CONFIG_KASAN_CLASSIC, which enables the classic KASAN version (the one
   that exists now);
2. CONFIG_KASAN_TAGS, which enables KHWASAN.

With CONFIG_KASAN_TAGS enabled, compiler options are changed to instrument
kernel files with -fsanitize=hwaddress (except the ones for which
KASAN_SANITIZE := n is set).

Both CONFIG_KASAN_CLASSIC and CONFIG_KASAN_TAGS support both
CONFIG_KASAN_INLINE and CONFIG_KASAN_OUTLINE instrumentation modes.

This commit also adds an empty (for now) placeholder KHWASAN
implementation of the KASAN hooks (which KHWASAN reuses) and placeholder
implementations of the KHWASAN-specific hooks inserted by the compiler.
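
For illustration, a KHWASAN build would be configured along these lines
(option names as introduced by this patch; a sketch, not a tested
config):

	CONFIG_KASAN=y
	CONFIG_KASAN_TAGS=y        # instead of CONFIG_KASAN_CLASSIC
	CONFIG_KASAN_OUTLINE=y     # or CONFIG_KASAN_INLINE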
---
 arch/arm64/Kconfig             |   1 +
 include/linux/compiler-clang.h |   7 +-
 include/linux/compiler-gcc.h   |   4 ++
 include/linux/compiler.h       |   3 +-
 include/linux/kasan.h          |  16 +++--
 lib/Kconfig.kasan              |  68 +++++++++++++-----
 mm/kasan/Makefile              |   6 +-
 mm/kasan/khwasan.c             | 127 +++++++++++++++++++++++++++++++++
 mm/slub.c                      |   2 +-
 scripts/Makefile.kasan         |  32 ++++++++-
 10 files changed, 241 insertions(+), 25 deletions(-)
 create mode 100644 mm/kasan/khwasan.c

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 7381eeb7ef8e..759871510f87 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -88,6 +88,7 @@ config ARM64
 	select HAVE_ARCH_HUGE_VMAP
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_KASAN if !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
+	select HAVE_ARCH_KASAN_TAGS if !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_MMAP_RND_BITS
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT
diff --git a/include/linux/compiler-clang.h b/include/linux/compiler-clang.h
index d3f264a5b04d..16e49f6b6645 100644
--- a/include/linux/compiler-clang.h
+++ b/include/linux/compiler-clang.h
@@ -24,10 +24,15 @@
 #define KASAN_ABI_VERSION 5
 
 /* emulate gcc's __SANITIZE_ADDRESS__ flag */
-#if __has_feature(address_sanitizer)
+#if __has_feature(address_sanitizer) || __has_feature(hwaddress_sanitizer)
 #define __SANITIZE_ADDRESS__
 #endif
 
+#ifdef CONFIG_KASAN_TAGS
+#undef __no_sanitize_hwaddress
+#define __no_sanitize_hwaddress __attribute__((no_sanitize("hwaddress")))
+#endif
+
 /* Clang doesn't have a way to turn it off per-function, yet. */
 #ifdef __noretpoline
 #undef __noretpoline
diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h
index e2c7f4369eff..e9bc985c1227 100644
--- a/include/linux/compiler-gcc.h
+++ b/include/linux/compiler-gcc.h
@@ -344,6 +344,10 @@
 #define __no_sanitize_address
 #endif
 
+#if !defined(__no_sanitize_hwaddress)
+#define __no_sanitize_hwaddress	/* gcc doesn't support KHWASAN */
+#endif
+
 /*
  * A trick to suppress uninitialized variable warning without generating any
  * code
diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index ab4711c63601..6142bae513e8 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -195,7 +195,8 @@ void __read_once_size(const volatile void *p, void *res, int size)
  * 	https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67368
  * '__maybe_unused' allows us to avoid defined-but-not-used warnings.
  */
-# define __no_kasan_or_inline __no_sanitize_address __maybe_unused
+# define __no_kasan_or_inline __no_sanitize_address __no_sanitize_hwaddress \
+			      __maybe_unused
 #else
 # define __no_kasan_or_inline __always_inline
 #endif
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 3bfebcf7ad2b..3c45e273a936 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -45,8 +45,6 @@ void kasan_free_pages(struct page *page, unsigned int order);
 
 void kasan_cache_create(struct kmem_cache *cache, size_t *size,
 			slab_flags_t *flags);
-void kasan_cache_shrink(struct kmem_cache *cache);
-void kasan_cache_shutdown(struct kmem_cache *cache);
 
 void kasan_poison_slab(struct page *page);
 void kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
@@ -94,8 +92,6 @@ static inline void kasan_free_pages(struct page *page, unsigned int order) {}
 static inline void kasan_cache_create(struct kmem_cache *cache,
 				      size_t *size,
 				      slab_flags_t *flags) {}
-static inline void kasan_cache_shrink(struct kmem_cache *cache) {}
-static inline void kasan_cache_shutdown(struct kmem_cache *cache) {}
 
 static inline void kasan_poison_slab(struct page *page) {}
 static inline void kasan_unpoison_object_data(struct kmem_cache *cache,
@@ -141,4 +137,16 @@ static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }
 
 #endif /* CONFIG_KASAN */
 
+#ifdef CONFIG_KASAN_CLASSIC
+
+void kasan_cache_shrink(struct kmem_cache *cache);
+void kasan_cache_shutdown(struct kmem_cache *cache);
+
+#else /* CONFIG_KASAN_CLASSIC */
+
+static inline void kasan_cache_shrink(struct kmem_cache *cache) {}
+static inline void kasan_cache_shutdown(struct kmem_cache *cache) {}
+
+#endif /* CONFIG_KASAN_CLASSIC */
+
 #endif /* LINUX_KASAN_H */
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 3d35d062970d..ab34e7d7d3a7 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -1,33 +1,69 @@
 config HAVE_ARCH_KASAN
 	bool
 
+config HAVE_ARCH_KASAN_TAGS
+	bool
+
 if HAVE_ARCH_KASAN
 
 config KASAN
-	bool "KASan: runtime memory debugger"
+	bool "KASAN: runtime memory debugger"
+	help
+	  Enables KASAN (KernelAddressSANitizer) - runtime memory debugger,
+	  designed to find out-of-bounds accesses and use-after-free bugs.
+	  KASAN has two modes: KASAN (a classic version, similar to userspace
+	  ASan, enabled with CONFIG_KASAN_CLASSIC) and KHWASAN (a version
+	  based on pointer tagging, only for arm64, similar to userspace
+	  HWASan, enabled with CONFIG_KASAN_TAGS).
+
+choice
+	prompt "KASAN mode"
+	depends on KASAN
+	default KASAN_CLASSIC
+
+config KASAN_CLASSIC
+	bool "KASAN: the classic mode"
 	depends on SLUB || (SLAB && !DEBUG_SLAB)
 	select CONSTRUCTORS
 	select STACKDEPOT
 	help
-	  Enables kernel address sanitizer - runtime memory debugger,
-	  designed to find out-of-bounds accesses and use-after-free bugs.
-	  This is strictly a debugging feature and it requires a gcc version
-	  of 4.9.2 or later. Detection of out of bounds accesses to stack or
-	  global variables requires gcc 5.0 or later.
+	  Enables the classic mode of KASAN.
+	  This is strictly a debugging feature and it requires a GCC version
+	  of 4.9.2 or later. Detection of out-of-bounds accesses to stack or
+	  global variables requires GCC 5.0 or later.
 	  This feature consumes about 1/8 of available memory and brings about
 	  ~x3 performance slowdown.
 	  For better error detection enable CONFIG_STACKTRACE.
-	  Currently CONFIG_KASAN doesn't work with CONFIG_DEBUG_SLAB
+	  Currently CONFIG_KASAN_CLASSIC doesn't work with CONFIG_DEBUG_SLAB
 	  (the resulting kernel does not boot).
 
+if HAVE_ARCH_KASAN_TAGS
+
+config KASAN_TAGS
+	bool "KHWASAN: the tagged pointers mode"
+	depends on SLUB || (SLAB && !DEBUG_SLAB)
+	select CONSTRUCTORS
+	select STACKDEPOT
+	help
+	  Enables KHWASAN (a KASAN mode based on pointer tagging).
+	  This mode requires Top Byte Ignore support by the CPU and is
+	  therefore only supported on arm64.
+	  TODO: clang version, slowdown, memory usage
+	  For better error detection enable CONFIG_STACKTRACE.
+	  Currently CONFIG_KASAN_TAGS doesn't work with CONFIG_DEBUG_SLAB
+	  (the resulting kernel does not boot).
+
+endif
+
+endchoice
+
 config KASAN_EXTRA
-	bool "KAsan: extra checks"
-	depends on KASAN && DEBUG_KERNEL && !COMPILE_TEST
+	bool "KASAN: extra checks"
+	depends on KASAN_CLASSIC && DEBUG_KERNEL && !COMPILE_TEST
 	help
-	  This enables further checks in the kernel address sanitizer, for now
-	  it only includes the address-use-after-scope check that can lead
-	  to excessive kernel stack usage, frame size warnings and longer
-	  compile time.
+	  This enables further checks in KASAN, for now it only includes the
+	  address-use-after-scope check that can lead to excessive kernel
+	  stack usage, frame size warnings and longer compile time.
 	  https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81715 has more
 
 
@@ -52,16 +88,16 @@ config KASAN_INLINE
 	  memory accesses. This is faster than outline (in some workloads
 	  it gives about x2 boost over outline instrumentation), but
 	  make kernel's .text size much bigger.
-	  This requires a gcc version of 5.0 or later.
+	  For CONFIG_KASAN_CLASSIC this requires GCC 5.0 or later.
 
 endchoice
 
 config TEST_KASAN
-	tristate "Module for testing kasan for bug detection"
+	tristate "Module for testing KASAN for bug detection"
 	depends on m && KASAN
 	help
 	  This is a test module doing various nasty things like
 	  out of bounds accesses, use after free. It is useful for testing
-	  kernel debugging features like kernel address sanitizer.
+	  kernel debugging features like KASAN.
 
 endif
diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
index a6df14bffb6b..d930575e6d55 100644
--- a/mm/kasan/Makefile
+++ b/mm/kasan/Makefile
@@ -2,6 +2,7 @@
 KASAN_SANITIZE := n
 UBSAN_SANITIZE_common.o := n
 UBSAN_SANITIZE_kasan.o := n
+UBSAN_SANITIZE_khwasan.o := n
 KCOV_INSTRUMENT := n
 
 CFLAGS_REMOVE_kasan.o = -pg
@@ -10,5 +11,8 @@ CFLAGS_REMOVE_kasan.o = -pg
 
 CFLAGS_common.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector)
 CFLAGS_kasan.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector)
+CFLAGS_khwasan.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector)
 
-obj-y := common.o kasan.o report.o kasan_init.o quarantine.o
+obj-$(CONFIG_KASAN) := common.o kasan_init.o report.o
+obj-$(CONFIG_KASAN_CLASSIC) += kasan.o quarantine.o
+obj-$(CONFIG_KASAN_TAGS) += khwasan.o
diff --git a/mm/kasan/khwasan.c b/mm/kasan/khwasan.c
new file mode 100644
index 000000000000..24d75245e9d0
--- /dev/null
+++ b/mm/kasan/khwasan.c
@@ -0,0 +1,127 @@
+/*
+ * This file contains core KHWASAN code, including shadow memory manipulation
+ * code, implementation of KHWASAN hooks and compiler inserted callbacks, etc.
+ *
+ * Copyright (c) 2018 Google, Inc.
+ * Author: Andrey Konovalov <andreyknvl@google.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#define DISABLE_BRANCH_PROFILING
+
+#include <linux/export.h>
+#include <linux/interrupt.h>
+#include <linux/init.h>
+#include <linux/kasan.h>
+#include <linux/kernel.h>
+#include <linux/kmemleak.h>
+#include <linux/linkage.h>
+#include <linux/memblock.h>
+#include <linux/memory.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/printk.h>
+#include <linux/random.h>
+#include <linux/sched.h>
+#include <linux/sched/task_stack.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/vmalloc.h>
+#include <linux/bug.h>
+
+#include "kasan.h"
+#include "../slab.h"
+
+void kasan_unpoison_shadow(const void *address, size_t size)
+{
+}
+
+void check_memory_region(unsigned long addr, size_t size, bool write,
+				unsigned long ret_ip)
+{
+}
+
+void kasan_free_pages(struct page *page, unsigned int order)
+{
+}
+
+void kasan_cache_create(struct kmem_cache *cache, size_t *size,
+		slab_flags_t *flags)
+{
+}
+
+void kasan_poison_slab(struct page *page)
+{
+}
+
+void kasan_poison_object_data(struct kmem_cache *cache, void *object)
+{
+}
+
+void *kasan_slab_alloc(struct kmem_cache *cache, void *object, gfp_t flags)
+{
+	return object;
+}
+
+bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
+{
+	return false;
+}
+
+void *kasan_kmalloc(struct kmem_cache *cache, const void *object,
+			size_t size, gfp_t flags)
+{
+	return (void *)object;
+}
+EXPORT_SYMBOL(kasan_kmalloc);
+
+void *kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
+{
+	return (void *)ptr;
+}
+
+void kasan_poison_kfree(void *ptr, unsigned long ip)
+{
+}
+
+void kasan_kfree_large(void *ptr, unsigned long ip)
+{
+}
+
+#define DEFINE_HWASAN_LOAD_STORE(size)					\
+	void __hwasan_load##size##_noabort(unsigned long addr)		\
+	{								\
+	}								\
+	EXPORT_SYMBOL(__hwasan_load##size##_noabort);			\
+	void __hwasan_store##size##_noabort(unsigned long addr)		\
+	{								\
+	}								\
+	EXPORT_SYMBOL(__hwasan_store##size##_noabort)
+
+DEFINE_HWASAN_LOAD_STORE(1);
+DEFINE_HWASAN_LOAD_STORE(2);
+DEFINE_HWASAN_LOAD_STORE(4);
+DEFINE_HWASAN_LOAD_STORE(8);
+DEFINE_HWASAN_LOAD_STORE(16);
+
+void __hwasan_loadN_noabort(unsigned long addr, unsigned long size)
+{
+}
+EXPORT_SYMBOL(__hwasan_loadN_noabort);
+
+void __hwasan_storeN_noabort(unsigned long addr, unsigned long size)
+{
+}
+EXPORT_SYMBOL(__hwasan_storeN_noabort);
+
+void __hwasan_tag_memory(unsigned long addr, u8 tag, unsigned long size)
+{
+}
+EXPORT_SYMBOL(__hwasan_tag_memory);
diff --git a/mm/slub.c b/mm/slub.c
index 4a856512f225..a00bf24d668e 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2983,7 +2983,7 @@ static __always_inline void slab_free(struct kmem_cache *s, struct page *page,
 		do_slab_free(s, page, head, tail, cnt, addr);
 }
 
-#ifdef CONFIG_KASAN
+#ifdef CONFIG_KASAN_CLASSIC
 void ___cache_free(struct kmem_cache *cache, void *x, unsigned long addr)
 {
 	do_slab_free(cache, virt_to_head_page(x), x, NULL, 1, addr);
diff --git a/scripts/Makefile.kasan b/scripts/Makefile.kasan
index 69552a39951d..7661ee46ee15 100644
--- a/scripts/Makefile.kasan
+++ b/scripts/Makefile.kasan
@@ -1,5 +1,5 @@
 # SPDX-License-Identifier: GPL-2.0
-ifdef CONFIG_KASAN
+ifdef CONFIG_KASAN_CLASSIC
 ifdef CONFIG_KASAN_INLINE
 	call_threshold := 10000
 else
@@ -45,3 +45,33 @@ endif
 CFLAGS_KASAN_NOSANITIZE := -fno-builtin
 
 endif
+
+ifdef CONFIG_KASAN_TAGS
+
+ifdef CONFIG_KASAN_INLINE
+    instrumentation_flags := -mllvm -hwasan-mapping-offset=$(KASAN_SHADOW_OFFSET)
+else
+    instrumentation_flags := -mllvm -hwasan-instrument-with-calls=1
+endif
+
+CFLAGS_KASAN_MINIMAL := -fsanitize=hwaddress
+
+# TODO: implement in clang and use -fsanitize=kernel-hwaddress
+# TODO: fix stack instrumentation and remove -hwasan-instrument-stack=0
+
+ifeq ($(call cc-option, $(CFLAGS_KASAN_MINIMAL) -Werror),)
+    ifneq ($(CONFIG_COMPILE_TEST),y)
+        $(warning Cannot use CONFIG_KASAN_TAGS: \
+            -fsanitize=hwaddress is not supported by compiler)
+    endif
+else
+    CFLAGS_KASAN := $(call cc-option, -fsanitize=hwaddress \
+        -mllvm -hwasan-kernel=1 \
+        -mllvm -hwasan-instrument-stack=0 \
+        -mllvm -hwasan-recover=1 \
+        $(instrumentation_flags))
+endif
+
+CFLAGS_KASAN_NOSANITIZE := -fno-builtin
+
+endif
-- 
2.16.2.395.g2e18187dfd-goog

* [RFC PATCH 04/14] khwasan: adjust shadow size for CONFIG_KASAN_TAGS
  2018-03-02 19:44 [RFC PATCH 00/14] khwasan: kernel hardware assisted address sanitizer Andrey Konovalov
                   ` (2 preceding siblings ...)
  2018-03-02 19:44 ` [RFC PATCH 03/14] khwasan: add CONFIG_KASAN_CLASSIC and CONFIG_KASAN_TAGS Andrey Konovalov
@ 2018-03-02 19:44 ` Andrey Konovalov
  2018-03-02 19:44 ` [RFC PATCH 05/14] khwasan: initialize shadow to 0xff Andrey Konovalov
                   ` (10 subsequent siblings)
  14 siblings, 0 replies; 65+ messages in thread
From: Andrey Konovalov @ 2018-03-02 19:44 UTC (permalink / raw)
  To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Mark Rutland, Ard Biesheuvel, Yury Norov,
	Nick Desaulniers, Marc Zyngier, Bob Picco, Suzuki K Poulose,
	Kristina Martsenko, Punit Agrawal, Dave Martin, James Morse,
	Julien Thierry, Michael Weiser, Steve Capper, Ingo Molnar,
	Thomas Gleixner, Sandipan Das, Paul Lawrence, David Woodhouse,
	Kees Cook, Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann,
	kasan-dev, linux-doc, linux-kernel, linux-arm-kernel, linux-ext4,
	linux-sparse, linux-mm, linux-kbuild, Kostya Serebryany,
	Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley,
	Ruben Ayrapetyan, Kees Cook, Jann Horn, Mark Brand
  Cc: Andrey Konovalov

KHWASAN uses 1 shadow byte for 16 bytes of kernel memory, so it requires
1/16th of the kernel virtual address space for the shadow memory.

This commit sets KASAN_SHADOW_SCALE_SHIFT to 4 when KHWASAN is enabled.
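
For reference, with this change the generic memory-to-shadow mapping
becomes the following (a sketch of the existing kasan_mem_to_shadow()
helper, shown here only for illustration; it is not code added by this
patch):

static inline void *kasan_mem_to_shadow(const void *addr)
{
	/* With KASAN_SHADOW_SCALE_SHIFT == 4, each shadow byte covers
	 * 16 bytes of kernel memory. */
	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
		+ KASAN_SHADOW_OFFSET;
}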
---
 arch/arm64/Makefile             |  2 +-
 arch/arm64/include/asm/memory.h | 13 +++++++++----
 2 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index 4bb18aee4846..23e9fe816cb4 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -100,7 +100,7 @@ endif
 # KASAN_SHADOW_OFFSET = VA_START + (1 << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT))
 #				 - (1 << (64 - KASAN_SHADOW_SCALE_SHIFT))
 # in 32-bit arithmetic
-KASAN_SHADOW_SCALE_SHIFT := 3
+KASAN_SHADOW_SCALE_SHIFT := $(if $(CONFIG_KASAN_TAGS), 4, 3)
 KASAN_SHADOW_OFFSET := $(shell printf "0x%08x00000000\n" $$(( \
 	(0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 32))) \
 	+ (1 << ($(CONFIG_ARM64_VA_BITS) - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) \
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 50fa96a49792..febd54ff3354 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -80,12 +80,17 @@
 #define KERNEL_END        _end
 
 /*
- * KASAN requires 1/8th of the kernel virtual address space for the shadow
- * region. KASAN can bloat the stack significantly, so double the (minimum)
- * stack size when KASAN is in use.
+ * KASAN and KHWASAN require 1/8th and 1/16th of the kernel virtual address
+ * space for the shadow region respectively. They can bloat the stack
+ * significantly, so double the (minimum) stack size when they are in use.
  */
-#ifdef CONFIG_KASAN
+#ifdef CONFIG_KASAN_CLASSIC
 #define KASAN_SHADOW_SCALE_SHIFT 3
+#endif
+#ifdef CONFIG_KASAN_TAGS
+#define KASAN_SHADOW_SCALE_SHIFT 4
+#endif
+#ifdef CONFIG_KASAN
 #define KASAN_SHADOW_SIZE	(UL(1) << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT))
 #define KASAN_THREAD_SHIFT	1
 #else
-- 
2.16.2.395.g2e18187dfd-goog

* [RFC PATCH 05/14] khwasan: initialize shadow to 0xff
  2018-03-02 19:44 [RFC PATCH 00/14] khwasan: kernel hardware assisted address sanitizer Andrey Konovalov
                   ` (3 preceding siblings ...)
  2018-03-02 19:44 ` [RFC PATCH 04/14] khwasan: adjust shadow size for CONFIG_KASAN_TAGS Andrey Konovalov
@ 2018-03-02 19:44 ` Andrey Konovalov
  2018-03-02 21:55   ` Evgenii Stepanov
  2018-03-02 19:44 ` [RFC PATCH 06/14] khwasan: enable top byte ignore for the kernel Andrey Konovalov
                   ` (9 subsequent siblings)
  14 siblings, 1 reply; 65+ messages in thread
From: Andrey Konovalov @ 2018-03-02 19:44 UTC (permalink / raw)
  To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Mark Rutland, Ard Biesheuvel, Yury Norov,
	Nick Desaulniers, Marc Zyngier, Bob Picco, Suzuki K Poulose,
	Kristina Martsenko, Punit Agrawal, Dave Martin, James Morse,
	Julien Thierry, Michael Weiser, Steve Capper, Ingo Molnar,
	Thomas Gleixner, Sandipan Das, Paul Lawrence, David Woodhouse,
	Kees Cook, Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann,
	kasan-dev, linux-doc, linux-kernel, linux-arm-kernel, linux-ext4,
	linux-sparse, linux-mm, linux-kbuild, Kostya Serebryany,
	Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley,
	Ruben Ayrapetyan, Kees Cook, Jann Horn, Mark Brand
  Cc: Andrey Konovalov

A KHWASAN shadow memory cell contains a memory tag that corresponds to
the tag in the top byte of the pointer that points to that memory. The
native top byte value of kernel pointers is 0xff, so with KHWASAN we
need to initialize the shadow memory to 0xff. This commit does that.
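
To see why 0xff is the right initial value, consider a sketch of the tag
check that later patches add (shadow_byte_for() and
report_invalid_access() are hypothetical names used only for this
illustration): a native, untagged kernel pointer carries 0xff in its top
byte, so the shadow must also hold 0xff for accesses through such
pointers to pass.

u8 ptr_tag = (u8)((u64)addr >> 56);	/* 0xff for untagged kernel pointers */
u8 mem_tag = *shadow_byte_for(addr);	/* hypothetical shadow lookup */

if (ptr_tag != mem_tag)
	report_invalid_access(addr);	/* must not fire on fresh shadow */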
---
 arch/arm64/mm/kasan_init.c | 11 ++++++++++-
 include/linux/kasan.h      |  8 ++++++++
 mm/kasan/common.c          |  7 +++++++
 3 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index dabfc1ecda3d..d4bceba60010 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -90,6 +90,10 @@ static void __init kasan_pte_populate(pmd_t *pmdp, unsigned long addr,
 	do {
 		phys_addr_t page_phys = early ? __pa_symbol(kasan_zero_page)
 					      : kasan_alloc_zeroed_page(node);
+#if KASAN_SHADOW_INIT != 0
+		if (!early)
+			memset(__va(page_phys), KASAN_SHADOW_INIT, PAGE_SIZE);
+#endif
 		next = addr + PAGE_SIZE;
 		set_pte(ptep, pfn_pte(__phys_to_pfn(page_phys), PAGE_KERNEL));
 	} while (ptep++, addr = next, addr != end && pte_none(READ_ONCE(*ptep)));
@@ -139,6 +143,11 @@ asmlinkage void __init kasan_early_init(void)
 		KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
 	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_START, PGDIR_SIZE));
 	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, PGDIR_SIZE));
+
+#if KASAN_SHADOW_INIT != 0
+	memset(kasan_zero_page, KASAN_SHADOW_INIT, PAGE_SIZE);
+#endif
+
 	kasan_pgd_populate(KASAN_SHADOW_START, KASAN_SHADOW_END, NUMA_NO_NODE,
 			   true);
 }
@@ -235,7 +244,7 @@ void __init kasan_init(void)
 		set_pte(&kasan_zero_pte[i],
 			pfn_pte(sym_to_pfn(kasan_zero_page), PAGE_KERNEL_RO));
 
-	memset(kasan_zero_page, 0, PAGE_SIZE);
+	memset(kasan_zero_page, KASAN_SHADOW_INIT, PAGE_SIZE);
 	cpu_replace_ttbr1(lm_alias(swapper_pg_dir));
 
 	/* At this point kasan is fully initialized. Enable error messages */
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 3c45e273a936..c34f413b0eac 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -139,6 +139,8 @@ static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }
 
 #ifdef CONFIG_KASAN_CLASSIC
 
+#define KASAN_SHADOW_INIT 0
+
 void kasan_cache_shrink(struct kmem_cache *cache);
 void kasan_cache_shutdown(struct kmem_cache *cache);
 
@@ -149,4 +151,10 @@ static inline void kasan_cache_shutdown(struct kmem_cache *cache) {}
 
 #endif /* CONFIG_KASAN_CLASSIC */
 
+#ifdef CONFIG_KASAN_TAGS
+
+#define KASAN_SHADOW_INIT 0xff
+
+#endif /* CONFIG_KASAN_TAGS */
+
 #endif /* LINUX_KASAN_H */
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 08f6c8cb9f84..f4ccb9425655 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -253,6 +253,9 @@ int kasan_module_alloc(void *addr, size_t size)
 			__builtin_return_address(0));
 
 	if (ret) {
+#if KASAN_SHADOW_INIT != 0
+		__memset(ret, KASAN_SHADOW_INIT, shadow_size);
+#endif
 		find_vm_area(addr)->flags |= VM_KASAN;
 		kmemleak_ignore(ret);
 		return 0;
@@ -297,6 +300,10 @@ static int __meminit kasan_mem_notifier(struct notifier_block *nb,
 		if (!ret)
 			return NOTIFY_BAD;
 
+#if KASAN_SHADOW_INIT != 0
+		__memset(ret, KASAN_SHADOW_INIT, shadow_end - shadow_start);
+#endif
+
 		kmemleak_ignore(ret);
 		return NOTIFY_OK;
 	}
-- 
2.16.2.395.g2e18187dfd-goog

* [RFC PATCH 06/14] khwasan: enable top byte ignore for the kernel
  2018-03-02 19:44 [RFC PATCH 00/14] khwasan: kernel hardware assisted address sanitizer Andrey Konovalov
                   ` (4 preceding siblings ...)
  2018-03-02 19:44 ` [RFC PATCH 05/14] khwasan: initialize shadow to 0xff Andrey Konovalov
@ 2018-03-02 19:44 ` Andrey Konovalov
  2018-03-05 14:29   ` Mark Rutland
  2018-03-05 14:36   ` Mark Rutland
  2018-03-02 19:44 ` [RFC PATCH 07/14] khwasan: add tag related helper functions Andrey Konovalov
                   ` (8 subsequent siblings)
  14 siblings, 2 replies; 65+ messages in thread
From: Andrey Konovalov @ 2018-03-02 19:44 UTC (permalink / raw)
  To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Mark Rutland, Ard Biesheuvel, Yury Norov,
	Nick Desaulniers, Marc Zyngier, Bob Picco, Suzuki K Poulose,
	Kristina Martsenko, Punit Agrawal, Dave Martin, James Morse,
	Julien Thierry, Michael Weiser, Steve Capper, Ingo Molnar,
	Thomas Gleixner, Sandipan Das, Paul Lawrence, David Woodhouse,
	Kees Cook, Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann,
	kasan-dev, linux-doc, linux-kernel, linux-arm-kernel, linux-ext4,
	linux-sparse, linux-mm, linux-kbuild, Kostya Serebryany,
	Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley,
	Ruben Ayrapetyan, Kees Cook, Jann Horn, Mark Brand
  Cc: Andrey Konovalov

KHWASAN uses the Top Byte Ignore feature of arm64 CPUs to store a pointer
tag in the top byte of each pointer. This commit sets the TCR_TBI1 bit,
which enables Top Byte Ignore for kernel addresses, when KHWASAN is used.
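
To illustrate the effect (the address and tag below are made up): with
TCR_TBI1 set, the MMU ignores bits 63:56 of a kernel virtual address
during translation, so a tagged pointer can be dereferenced directly,
without masking the tag out first.

int *p = kmalloc(sizeof(*p), GFP_KERNEL);	/* e.g. 0xb4ff000012345678, tag 0xb4 */
*p = 42;	/* translated as if the top byte were 0xff; no fixup needed */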
---
 arch/arm64/include/asm/pgtable-hwdef.h | 1 +
 arch/arm64/mm/proc.S                   | 8 +++++++-
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index cdfe3e657a9e..ae6b6405eacc 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -289,6 +289,7 @@
 #define TCR_A1			(UL(1) << 22)
 #define TCR_ASID16		(UL(1) << 36)
 #define TCR_TBI0		(UL(1) << 37)
+#define TCR_TBI1		(UL(1) << 38)
 #define TCR_HA			(UL(1) << 39)
 #define TCR_HD			(UL(1) << 40)
 
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index c0af47617299..b2035cfe7a3a 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -41,6 +41,12 @@
 /* PTWs cacheable, inner/outer WBWA */
 #define TCR_CACHE_FLAGS	TCR_IRGN_WBWA | TCR_ORGN_WBWA
 
+#ifdef CONFIG_KASAN_TAGS
+#define TCR_TBI_FLAGS (TCR_TBI0 | TCR_TBI1)
+#else
+#define TCR_TBI_FLAGS TCR_TBI0
+#endif
+
 #define MAIR(attr, mt)	((attr) << ((mt) * 8))
 
 /*
@@ -432,7 +438,7 @@ ENTRY(__cpu_setup)
 	 * both user and kernel.
 	 */
 	ldr	x10, =TCR_TxSZ(VA_BITS) | TCR_CACHE_FLAGS | TCR_SMP_FLAGS | \
-			TCR_TG_FLAGS | TCR_ASID16 | TCR_TBI0 | TCR_A1
+			TCR_TG_FLAGS | TCR_ASID16 | TCR_TBI_FLAGS | TCR_A1
 	tcr_set_idmap_t0sz	x10, x9
 
 	/*
-- 
2.16.2.395.g2e18187dfd-goog

* [RFC PATCH 07/14] khwasan: add tag related helper functions
  2018-03-02 19:44 [RFC PATCH 00/14] khwasan: kernel hardware assisted address sanitizer Andrey Konovalov
                   ` (5 preceding siblings ...)
  2018-03-02 19:44 ` [RFC PATCH 06/14] khwasan: enable top byte ignore for the kernel Andrey Konovalov
@ 2018-03-02 19:44 ` Andrey Konovalov
  2018-03-05 14:32   ` Mark Rutland
  2018-03-02 19:44 ` [RFC PATCH 08/14] khwasan: perform untagged pointers comparison in krealloc Andrey Konovalov
                   ` (7 subsequent siblings)
  14 siblings, 1 reply; 65+ messages in thread
From: Andrey Konovalov @ 2018-03-02 19:44 UTC (permalink / raw)
  To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Mark Rutland, Ard Biesheuvel, Yury Norov,
	Nick Desaulniers, Marc Zyngier, Bob Picco, Suzuki K Poulose,
	Kristina Martsenko, Punit Agrawal, Dave Martin, James Morse,
	Julien Thierry, Michael Weiser, Steve Capper, Ingo Molnar,
	Thomas Gleixner, Sandipan Das, Paul Lawrence, David Woodhouse,
	Kees Cook, Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann,
	kasan-dev, linux-doc, linux-kernel, linux-arm-kernel, linux-ext4,
	linux-sparse, linux-mm, linux-kbuild, Kostya Serebryany,
	Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley,
	Ruben Ayrapetyan, Kees Cook, Jann Horn, Mark Brand
  Cc: Andrey Konovalov

This commit adds a few helper functions that are meant to be used to
work with tags embedded in the top byte of kernel pointers: to set, to
get, or to reset (set to 0xff) the top byte.
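
A minimal usage sketch of these helpers (the tag value is made up):

void *p = kmalloc(32, GFP_KERNEL);	/* tagged pointer under KHWASAN */
u8 tag = khwasan_get_tag(p);		/* e.g. 0xb4; 0xff when KHWASAN is off */
void *untagged = khwasan_reset_tag(p);	/* same address, top byte set to 0xff */
void *retagged = khwasan_set_tag(untagged, tag);	/* equals p again */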
---
 arch/arm64/mm/kasan_init.c |  2 ++
 include/linux/kasan.h      | 23 ++++++++++++++++++++++
 mm/kasan/kasan.h           | 23 ++++++++++++++++++++++
 mm/kasan/khwasan.c         | 39 ++++++++++++++++++++++++++++++++++++++
 4 files changed, 87 insertions(+)

diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index d4bceba60010..7fd9aee88069 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -247,6 +247,8 @@ void __init kasan_init(void)
 	memset(kasan_zero_page, KASAN_SHADOW_INIT, PAGE_SIZE);
 	cpu_replace_ttbr1(lm_alias(swapper_pg_dir));
 
+	khwasan_init();
+
 	/* At this point kasan is fully initialized. Enable error messages */
 	init_task.kasan_depth = 0;
 	pr_info("KernelAddressSanitizer initialized\n");
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index c34f413b0eac..4c656ad5762a 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -155,6 +155,29 @@ static inline void kasan_cache_shutdown(struct kmem_cache *cache) {}
 
 #define KASAN_SHADOW_INIT 0xff
 
+void khwasan_init(void);
+
+void *khwasan_set_tag(const void *addr, u8 tag);
+u8 khwasan_get_tag(void *addr);
+void *khwasan_reset_tag(void *ptr);
+
+#else /* CONFIG_KASAN_TAGS */
+
+static inline void khwasan_init(void) { }
+
+static inline void *khwasan_set_tag(const void *addr, u8 tag)
+{
+	return (void *)addr;
+}
+static inline u8 khwasan_get_tag(void *addr)
+{
+	return 0xff;
+}
+static inline void *khwasan_reset_tag(void *ptr)
+{
+	return ptr;
+}
+
 #endif /* CONFIG_KASAN_TAGS */
 
 #endif /* LINUX_KASAN_H */
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 2be31754278e..64459efbd44d 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -113,6 +113,29 @@ void kasan_report(unsigned long addr, size_t size,
 		bool is_write, unsigned long ip);
 void kasan_report_invalid_free(void *object, unsigned long ip);
 
+#define KHWASAN_TAG_SHIFT 56
+#define KHWASAN_TAG_MASK ((u64)0xFF << KHWASAN_TAG_SHIFT)
+
+static inline void *set_tag(const void *addr, u8 tag)
+{
+	u64 a = (u64)addr;
+
+	a &= ~KHWASAN_TAG_MASK;
+	a |= ((u64)tag << KHWASAN_TAG_SHIFT);
+
+	return (void *)a;
+}
+
+static inline u8 get_tag(const void *addr)
+{
+	return (u8)((u64)addr >> KHWASAN_TAG_SHIFT);
+}
+
+static inline void *reset_tag(const void *addr)
+{
+	return set_tag(addr, 0xFF);
+}
+
 #if defined(CONFIG_SLAB) || defined(CONFIG_SLUB)
 void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache);
 void quarantine_reduce(void);
diff --git a/mm/kasan/khwasan.c b/mm/kasan/khwasan.c
index 24d75245e9d0..21a2221e3368 100644
--- a/mm/kasan/khwasan.c
+++ b/mm/kasan/khwasan.c
@@ -39,6 +39,45 @@
 #include "kasan.h"
 #include "../slab.h"
 
+int khwasan_enabled;
+
+static DEFINE_PER_CPU(u32, prng_state);
+
+void khwasan_init(void)
+{
+	int cpu;
+
+	for_each_possible_cpu(cpu) {
+		per_cpu(prng_state, cpu) = get_random_u32();
+	}
+	WRITE_ONCE(khwasan_enabled, 1);
+}
+
+static inline u8 khwasan_random_tag(void)
+{
+	u32 state = this_cpu_read(prng_state);
+
+	state = 1664525 * state + 1013904223;
+	this_cpu_write(prng_state, state);
+
+	return (u8)state;
+}
+
+void *khwasan_set_tag(const void *addr, u8 tag)
+{
+	return set_tag(addr, tag);
+}
+
+u8 khwasan_get_tag(void *addr)
+{
+	return get_tag(addr);
+}
+
+void *khwasan_reset_tag(void *addr)
+{
+	return reset_tag(addr);
+}
+
 void kasan_unpoison_shadow(const void *address, size_t size)
 {
 }
-- 
2.16.2.395.g2e18187dfd-goog

* [RFC PATCH 08/14] khwasan: perform untagged pointers comparison in krealloc
  2018-03-02 19:44 [RFC PATCH 00/14] khwasan: kernel hardware assisted address sanitizer Andrey Konovalov
                   ` (6 preceding siblings ...)
  2018-03-02 19:44 ` [RFC PATCH 07/14] khwasan: add tag related helper functions Andrey Konovalov
@ 2018-03-02 19:44 ` Andrey Konovalov
  2018-03-05 14:39   ` Mark Rutland
  2018-03-02 19:44 ` [RFC PATCH 09/14] khwasan: add hooks implementation Andrey Konovalov
                   ` (6 subsequent siblings)
  14 siblings, 1 reply; 65+ messages in thread
From: Andrey Konovalov @ 2018-03-02 19:44 UTC (permalink / raw)
  To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Mark Rutland, Ard Biesheuvel, Yury Norov,
	Nick Desaulniers, Marc Zyngier, Bob Picco, Suzuki K Poulose,
	Kristina Martsenko, Punit Agrawal, Dave Martin, James Morse,
	Julien Thierry, Michael Weiser, Steve Capper, Ingo Molnar,
	Thomas Gleixner, Sandipan Das, Paul Lawrence, David Woodhouse,
	Kees Cook, Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann,
	kasan-dev, linux-doc, linux-kernel, linux-arm-kernel, linux-ext4,
	linux-sparse, linux-mm, linux-kbuild, Kostya Serebryany,
	Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley,
	Ruben Ayrapetyan, Kees Cook, Jann Horn, Mark Brand
  Cc: Andrey Konovalov

The krealloc function checks whether the same buffer was reused or a new
one was allocated by comparing kernel pointers. KHWASAN changes the memory
tag on the krealloc'ed chunk of memory and therefore also changes the
pointer tag of the returned pointer. We thus need to perform the comparison
on untagged (with tags reset) pointers to check whether it's the same
memory region or not.
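
A sketch of the problem being fixed (the tag values are made up):

void *p = kmalloc(64, GFP_KERNEL);	/* returned with tag 0xb4 */
void *q = krealloc(p, 32, GFP_KERNEL);	/* same buffer, retagged to 0x7a */

/* Without this fix the p != q check in krealloc() succeeds and kfree(p)
 * frees the very object that was just returned. Comparing
 * khwasan_reset_tag(p) with khwasan_reset_tag(q) correctly identifies
 * the pointers as referring to the same memory region. */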
---
 mm/slab_common.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index a33e61315ca6..7c829cbda1a5 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1494,7 +1494,7 @@ void *krealloc(const void *p, size_t new_size, gfp_t flags)
 	}
 
 	ret = __do_krealloc(p, new_size, flags);
-	if (ret && p != ret)
+	if (ret && khwasan_reset_tag((void *)p) != khwasan_reset_tag(ret))
 		kfree(p);
 
 	return ret;
-- 
2.16.2.395.g2e18187dfd-goog

* [RFC PATCH 09/14] khwasan: add hooks implementation
  2018-03-02 19:44 [RFC PATCH 00/14] khwasan: kernel hardware assisted address sanitizer Andrey Konovalov
                   ` (7 preceding siblings ...)
  2018-03-02 19:44 ` [RFC PATCH 08/14] khwasan: perform untagged pointers comparison in krealloc Andrey Konovalov
@ 2018-03-02 19:44 ` Andrey Konovalov
  2018-03-05 14:44   ` Mark Rutland
                     ` (2 more replies)
  2018-03-02 19:44 ` [RFC PATCH 10/14] khwasan: add bug reporting routines Andrey Konovalov
                   ` (5 subsequent siblings)
  14 siblings, 3 replies; 65+ messages in thread
From: Andrey Konovalov @ 2018-03-02 19:44 UTC (permalink / raw)
  To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Mark Rutland, Ard Biesheuvel, Yury Norov,
	Nick Desaulniers, Marc Zyngier, Bob Picco, Suzuki K Poulose,
	Kristina Martsenko, Punit Agrawal, Dave Martin, James Morse,
	Julien Thierry, Michael Weiser, Steve Capper, Ingo Molnar,
	Thomas Gleixner, Sandipan Das, Paul Lawrence, David Woodhouse,
	Kees Cook, Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann,
	kasan-dev, linux-doc, linux-kernel, linux-arm-kernel, linux-ext4,
	linux-sparse, linux-mm, linux-kbuild, Kostya Serebryany,
	Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley,
	Ruben Ayrapetyan, Kees Cook, Jann Horn, Mark Brand
  Cc: Andrey Konovalov

This commit adds KHWASAN hooks implementation.

1. When a new slab cache is created, KHWASAN rounds up the size of the
   objects in this cache to KASAN_SHADOW_SCALE_SIZE (== 16).

2. On each kmalloc KHWASAN generates a random tag, sets the shadow memory
   that corresponds to this object to this tag, and embeds this tag value
   into the top byte of the returned pointer.

3. On each kfree KHWASAN poisons the shadow memory with a random tag to
   allow detection of use-after-free bugs.

The rest of the hook implementation logic is very similar to the one
provided by KASAN. KHWASAN saves allocation and free stack metadata to the
slab object the same way KASAN does.
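
Taken together, the kmalloc path conceptually does the following (a
simplified sketch of the kasan_kmalloc() hook added below):

u8 tag = khwasan_random_tag();		/* fresh random tag per allocation */

/* Object shadow gets the tag; the redzone an independent random tag: */
kasan_poison_shadow(object, redzone_start - (unsigned long)object, tag);
kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
		khwasan_random_tag());

return set_tag(object, tag);		/* embed the tag into the pointer */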
---
 mm/kasan/khwasan.c | 178 ++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 175 insertions(+), 3 deletions(-)

diff --git a/mm/kasan/khwasan.c b/mm/kasan/khwasan.c
index 21a2221e3368..09d6f0a72266 100644
--- a/mm/kasan/khwasan.c
+++ b/mm/kasan/khwasan.c
@@ -78,69 +78,238 @@ void *khwasan_reset_tag(void *addr)
 	return reset_tag(addr);
 }
 
+void kasan_poison_shadow(const void *address, size_t size, u8 value)
+{
+	void *shadow_start, *shadow_end;
+
+	/* Perform shadow offset calculation based on untagged address */
+	address = reset_tag((void *)address);
+
+	shadow_start = kasan_mem_to_shadow(address);
+	shadow_end = kasan_mem_to_shadow(address + size);
+
+	memset(shadow_start, value, shadow_end - shadow_start);
+}
+
 void kasan_unpoison_shadow(const void *address, size_t size)
 {
+	/* KHWASAN only allows 16-byte granularity */
+	size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
+	kasan_poison_shadow(address, size, get_tag(address));
 }
 
 void check_memory_region(unsigned long addr, size_t size, bool write,
 				unsigned long ret_ip)
 {
+	u8 tag;
+	u8 *shadow_first, *shadow_last, *shadow;
+	void *untagged_addr;
+
+	tag = get_tag((void *)addr);
+	untagged_addr = reset_tag((void *)addr);
+	shadow_first = (u8 *)kasan_mem_to_shadow(untagged_addr);
+	shadow_last = (u8 *)kasan_mem_to_shadow(untagged_addr + size - 1);
+
+	for (shadow = shadow_first; shadow <= shadow_last; shadow++) {
+		if (*shadow != tag) {
+			/* Report invalid-access bug here */
+			return;
+		}
+	}
 }
 
 void kasan_free_pages(struct page *page, unsigned int order)
 {
+	if (likely(!PageHighMem(page)))
+		kasan_poison_shadow(page_address(page),
+				PAGE_SIZE << order,
+				khwasan_random_tag());
 }
 
 void kasan_cache_create(struct kmem_cache *cache, size_t *size,
 		slab_flags_t *flags)
 {
+	int orig_size = *size;
+
+	cache->kasan_info.alloc_meta_offset = *size;
+	*size += sizeof(struct kasan_alloc_meta);
+
+	if (*size % KASAN_SHADOW_SCALE_SIZE != 0)
+		*size = round_up(*size, KASAN_SHADOW_SCALE_SIZE);
+
+
+	if (*size > KMALLOC_MAX_SIZE) {
+		*size = orig_size;
+		return;
+	}
+
+	cache->align = round_up(cache->align, KASAN_SHADOW_SCALE_SIZE);
+
+	*flags |= SLAB_KASAN;
 }
 
 void kasan_poison_slab(struct page *page)
 {
+	kasan_poison_shadow(page_address(page),
+			PAGE_SIZE << compound_order(page),
+			khwasan_random_tag());
 }
 
 void kasan_poison_object_data(struct kmem_cache *cache, void *object)
 {
+	kasan_poison_shadow(object,
+			round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE),
+			khwasan_random_tag());
 }
 
 void *kasan_slab_alloc(struct kmem_cache *cache, void *object, gfp_t flags)
 {
+	if (!READ_ONCE(khwasan_enabled))
+		return object;
+	object = kasan_kmalloc(cache, object, cache->object_size, flags);
+	if (unlikely(cache->ctor)) {
+		/* Cache constructor might use the object's pointer value
+		 * to initialize some of its fields. */
+		cache->ctor(object);
+	}
 	return object;
 }
 
-bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
+static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
+				unsigned long ip)
 {
+	u8 shadow_byte;
+	u8 tag;
+	unsigned long rounded_up_size;
+	void *untagged_addr = reset_tag(object);
+
+	if (unlikely(nearest_obj(cache, virt_to_head_page(untagged_addr),
+			untagged_addr) != untagged_addr)) {
+		/* Report invalid-free here */
+		return true;
+	}
+
+	/* RCU slabs could be legally used after free within the RCU period */
+	if (unlikely(cache->flags & SLAB_TYPESAFE_BY_RCU))
+		return false;
+
+	shadow_byte = READ_ONCE(*(u8 *)kasan_mem_to_shadow(untagged_addr));
+	tag = get_tag(object);
+	if (tag != shadow_byte) {
+		/* Report invalid-free here */
+		return true;
+	}
+
+	rounded_up_size = round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE);
+	kasan_poison_shadow(object, rounded_up_size, khwasan_random_tag());
+
+	if (unlikely(!(cache->flags & SLAB_KASAN)))
+		return false;
+
+	set_track(&get_alloc_info(cache, object)->free_track, GFP_NOWAIT);
 	return false;
 }
 
+bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
+{
+	return __kasan_slab_free(cache, object, ip);
+}
+
 void *kasan_kmalloc(struct kmem_cache *cache, const void *object,
 			size_t size, gfp_t flags)
 {
-	return (void *)object;
+	unsigned long redzone_start, redzone_end;
+	u8 tag;
+
+	if (!READ_ONCE(khwasan_enabled))
+		return (void *)object;
+
+	if (unlikely(object == NULL))
+		return NULL;
+
+	redzone_start = round_up((unsigned long)(object + size),
+				KASAN_SHADOW_SCALE_SIZE);
+	redzone_end = round_up((unsigned long)(object + cache->object_size),
+				KASAN_SHADOW_SCALE_SIZE);
+
+	tag = khwasan_random_tag();
+	kasan_poison_shadow(object, redzone_start - (unsigned long)object, tag);
+	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+		khwasan_random_tag());
+
+	if (cache->flags & SLAB_KASAN)
+		set_track(&get_alloc_info(cache, object)->alloc_track, flags);
+
+	return set_tag((void *)object, tag);
 }
 EXPORT_SYMBOL(kasan_kmalloc);
 
 void *kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
 {
-	return (void *)ptr;
+	unsigned long redzone_start, redzone_end;
+	u8 tag;
+	struct page *page;
+
+	if (!READ_ONCE(khwasan_enabled))
+		return (void *)ptr;
+
+	if (unlikely(ptr == NULL))
+		return NULL;
+
+	page = virt_to_page(ptr);
+	redzone_start = round_up((unsigned long)(ptr + size),
+				KASAN_SHADOW_SCALE_SIZE);
+	redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page));
+
+	tag = khwasan_random_tag();
+	kasan_poison_shadow(ptr, redzone_start - (unsigned long)ptr, tag);
+	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+		khwasan_random_tag());
+
+	return set_tag((void *)ptr, tag);
 }
 
 void kasan_poison_kfree(void *ptr, unsigned long ip)
 {
+	struct page *page;
+
+	page = virt_to_head_page(ptr);
+
+	if (unlikely(!PageSlab(page))) {
+		if (reset_tag(ptr) != page_address(page)) {
+			/* Report invalid-free here */
+			return;
+		}
+		kasan_poison_shadow(ptr, PAGE_SIZE << compound_order(page),
+					khwasan_random_tag());
+	} else {
+		__kasan_slab_free(page->slab_cache, ptr, ip);
+	}
 }
 
 void kasan_kfree_large(void *ptr, unsigned long ip)
 {
+	struct page *page = virt_to_page(ptr);
+	struct page *head_page = virt_to_head_page(ptr);
+
+	if (reset_tag(ptr) != page_address(head_page)) {
+		/* Report invalid-free here */
+		return;
+	}
+
+	kasan_poison_shadow(ptr, PAGE_SIZE << compound_order(page),
+			khwasan_random_tag());
 }
 
 #define DEFINE_HWASAN_LOAD_STORE(size)					\
 	void __hwasan_load##size##_noabort(unsigned long addr)		\
 	{								\
+		check_memory_region(addr, size, false, _RET_IP_);	\
 	}								\
 	EXPORT_SYMBOL(__hwasan_load##size##_noabort);			\
 	void __hwasan_store##size##_noabort(unsigned long addr)		\
 	{								\
+		check_memory_region(addr, size, true, _RET_IP_);	\
 	}								\
 	EXPORT_SYMBOL(__hwasan_store##size##_noabort)
 
@@ -152,15 +321,18 @@ DEFINE_HWASAN_LOAD_STORE(16);
 
 void __hwasan_loadN_noabort(unsigned long addr, unsigned long size)
 {
+	check_memory_region(addr, size, false, _RET_IP_);
 }
 EXPORT_SYMBOL(__hwasan_loadN_noabort);
 
 void __hwasan_storeN_noabort(unsigned long addr, unsigned long size)
 {
+	check_memory_region(addr, size, true, _RET_IP_);
 }
 EXPORT_SYMBOL(__hwasan_storeN_noabort);
 
 void __hwasan_tag_memory(unsigned long addr, u8 tag, unsigned long size)
 {
+	kasan_poison_shadow((void *)addr, size, tag);
 }
 EXPORT_SYMBOL(__hwasan_tag_memory);
-- 
2.16.2.395.g2e18187dfd-goog

* [RFC PATCH 10/14] khwasan: add bug reporting routines
  2018-03-02 19:44 [RFC PATCH 00/14] khwasan: kernel hardware assisted address sanitizer Andrey Konovalov
                   ` (8 preceding siblings ...)
  2018-03-02 19:44 ` [RFC PATCH 09/14] khwasan: add hooks implementation Andrey Konovalov
@ 2018-03-02 19:44 ` Andrey Konovalov
  2018-03-02 19:44 ` [RFC PATCH 11/14] khwasan: add brk handler for inline instrumentation Andrey Konovalov
                   ` (4 subsequent siblings)
  14 siblings, 0 replies; 65+ messages in thread
From: Andrey Konovalov @ 2018-03-02 19:44 UTC (permalink / raw)
  To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Mark Rutland, Ard Biesheuvel, Yury Norov,
	Nick Desaulniers, Marc Zyngier, Bob Picco, Suzuki K Poulose,
	Kristina Martsenko, Punit Agrawal, Dave Martin, James Morse,
	Julien Thierry, Michael Weiser, Steve Capper, Ingo Molnar,
	Thomas Gleixner, Sandipan Das, Paul Lawrence, David Woodhouse,
	Kees Cook, Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann,
	kasan-dev, linux-doc, linux-kernel, linux-arm-kernel, linux-ext4,
	linux-sparse, linux-mm, linux-kbuild, Kostya Serebryany,
	Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley,
	Ruben Ayrapetyan, Kees Cook, Jann Horn, Mark Brand
  Cc: Andrey Konovalov

This commit adds routines that print KHWASAN error reports. Those are
quite similar to the KASAN ones; the differences are:

1. The way KHWASAN finds the first bad shadow cell (the one with a
   mismatching tag). KHWASAN compares memory tags from the shadow memory
   to the pointer tag.

2. KHWASAN reports all bugs with the "KASAN: invalid-access" header. This
   is done so that various external tools that already parse kernel logs
   looking for KASAN reports don't need to be changed.
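
Given the format strings added below, a KHWASAN report header would look
roughly like this (the function, address, and tag values are made up):

    BUG: KASAN: invalid-access in kmalloc_oob_right+0xa8/0xbc [test_kasan]
    Write of size 1 at addr ffff8800696f3d3b by task insmod/2734
    Pointer tag: [b4], memory tag: [7a]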
---
 include/linux/kasan.h |  3 ++
 mm/kasan/kasan.h      |  2 +
 mm/kasan/khwasan.c    | 10 ++---
 mm/kasan/report.c     | 88 ++++++++++++++++++++++++++++++++++++++-----
 4 files changed, 89 insertions(+), 14 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 4c656ad5762a..310a092d0a57 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -161,6 +161,9 @@ void *khwasan_set_tag(const void *addr, u8 tag);
 u8 khwasan_get_tag(void *addr);
 void *khwasan_reset_tag(void *ptr);
 
+void khwasan_report(unsigned long addr, size_t size, bool write,
+			unsigned long ip);
+
 #else /* CONFIG_KASAN_TAGS */
 
 static inline void khwasan_init(void) { }
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 64459efbd44d..23da304ea94c 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -136,6 +136,8 @@ static inline void *reset_tag(const void *addr)
 	return set_tag(addr, 0xFF);
 }
 
+void khwasan_report_invalid_free(void *object, unsigned long ip);
+
 #if defined(CONFIG_SLAB) || defined(CONFIG_SLUB)
 void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache);
 void quarantine_reduce(void);
diff --git a/mm/kasan/khwasan.c b/mm/kasan/khwasan.c
index 09d6f0a72266..7a95d1cc4243 100644
--- a/mm/kasan/khwasan.c
+++ b/mm/kasan/khwasan.c
@@ -112,7 +112,7 @@ void check_memory_region(unsigned long addr, size_t size, bool write,
 
 	for (shadow = shadow_first; shadow <= shadow_last; shadow++) {
 		if (*shadow != tag) {
-			/* Report invalid-access bug here */
+			khwasan_report(addr, size, write, ret_ip);
 			return;
 		}
 	}
@@ -185,7 +185,7 @@ static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
 
 	if (unlikely(nearest_obj(cache, virt_to_head_page(untagged_addr),
 			untagged_addr) != untagged_addr)) {
-		/* Report invalid-free here */
+		khwasan_report_invalid_free(object, ip);
 		return true;
 	}
 
@@ -196,7 +196,7 @@ static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
 	shadow_byte = READ_ONCE(*(u8 *)kasan_mem_to_shadow(untagged_addr));
 	tag = get_tag(object);
 	if (tag != shadow_byte) {
-		/* Report invalid-free here */
+		khwasan_report_invalid_free(object, ip);
 		return true;
 	}
 
@@ -277,7 +277,7 @@ void kasan_poison_kfree(void *ptr, unsigned long ip)
 
 	if (unlikely(!PageSlab(page))) {
 		if (reset_tag(ptr) != page_address(page)) {
-			/* Report invalid-free here */
+			khwasan_report_invalid_free(ptr, ip);
 			return;
 		}
 		kasan_poison_shadow(ptr, PAGE_SIZE << compound_order(page),
@@ -293,7 +293,7 @@ void kasan_kfree_large(void *ptr, unsigned long ip)
 	struct page *head_page = virt_to_head_page(ptr);
 
 	if (reset_tag(ptr) != page_address(head_page)) {
-		/* Report invalid-free here */
+		khwasan_report_invalid_free(ptr, ip);
 		return;
 	}
 
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 5c169aa688fd..ed17168a083e 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -51,10 +51,9 @@ static const void *find_first_bad_addr(const void *addr, size_t size)
 	return first_bad_addr;
 }
 
-static bool addr_has_shadow(struct kasan_access_info *info)
+static bool addr_has_shadow(const void *addr)
 {
-	return (info->access_addr >=
-		kasan_shadow_to_mem((void *)KASAN_SHADOW_START));
+	return (addr >= kasan_shadow_to_mem((void *)KASAN_SHADOW_START));
 }
 
 static const char *get_shadow_bug_type(struct kasan_access_info *info)
@@ -127,15 +126,14 @@ static const char *get_wild_bug_type(struct kasan_access_info *info)
 
 static const char *get_bug_type(struct kasan_access_info *info)
 {
-	if (addr_has_shadow(info))
+	if (addr_has_shadow(info->access_addr))
 		return get_shadow_bug_type(info);
 	return get_wild_bug_type(info);
 }
 
-static void print_error_description(struct kasan_access_info *info)
+static void print_error_description(struct kasan_access_info *info,
+					const char *bug_type)
 {
-	const char *bug_type = get_bug_type(info);
-
 	pr_err("BUG: KASAN: %s in %pS\n",
 		bug_type, (void *)info->ip);
 	pr_err("%s of size %zu at addr %px by task %s/%d\n",
@@ -345,10 +343,10 @@ static void kasan_report_error(struct kasan_access_info *info)
 
 	kasan_start_report(&flags);
 
-	print_error_description(info);
+	print_error_description(info, get_bug_type(info));
 	pr_err("\n");
 
-	if (!addr_has_shadow(info)) {
+	if (!addr_has_shadow(info->access_addr)) {
 		dump_stack();
 	} else {
 		print_address_description((void *)info->access_addr);
@@ -412,6 +410,78 @@ void kasan_report(unsigned long addr, size_t size,
 	kasan_report_error(&info);
 }
 
+static inline void khwasan_print_tags(const void *addr)
+{
+	u8 addr_tag = get_tag(addr);
+	void *untagged_addr = reset_tag(addr);
+	u8 *shadow = (u8 *)kasan_mem_to_shadow(untagged_addr);
+
+	pr_err("Pointer tag: [%02x], memory tag: [%02x]\n", addr_tag, *shadow);
+}
+
+static const void *khwasan_find_first_bad_addr(const void *addr, size_t size)
+{
+	u8 tag = get_tag((void *)addr);
+	void *untagged_addr = reset_tag((void *)addr);
+	u8 *shadow = (u8 *)kasan_mem_to_shadow(untagged_addr);
+	const void *first_bad_addr = untagged_addr;
+
+	while (*shadow == tag && first_bad_addr < untagged_addr + size) {
+		first_bad_addr += KASAN_SHADOW_SCALE_SIZE;
+		shadow = (u8 *)kasan_mem_to_shadow(first_bad_addr);
+	}
+	return first_bad_addr;
+}
+
+void khwasan_report(unsigned long addr, size_t size, bool write,
+			unsigned long ip)
+{
+	struct kasan_access_info info;
+	unsigned long flags;
+	void *untagged_addr = reset_tag((void *)addr);
+
+	if (likely(!kasan_report_enabled()))
+		return;
+
+	disable_trace_on_warning();
+
+	info.access_addr = (void *)addr;
+	info.first_bad_addr = khwasan_find_first_bad_addr((void *)addr, size);
+	info.access_size = size;
+	info.is_write = write;
+	info.ip = ip;
+
+	kasan_start_report(&flags);
+
+	print_error_description(&info, "invalid-access");
+	khwasan_print_tags((void *)addr);
+	pr_err("\n");
+
+	if (!addr_has_shadow(untagged_addr)) {
+		dump_stack();
+	} else {
+		print_address_description(untagged_addr);
+		pr_err("\n");
+		print_shadow_for_address(info.first_bad_addr);
+	}
+
+	kasan_end_report(&flags);
+}
+
+void khwasan_report_invalid_free(void *object, unsigned long ip)
+{
+	unsigned long flags;
+	void *untagged_addr = reset_tag((void *)object);
+
+	kasan_start_report(&flags);
+	pr_err("BUG: KASAN: double-free or invalid-free in %pS\n", (void *)ip);
+	khwasan_print_tags(object);
+	pr_err("\n");
+	print_address_description(untagged_addr);
+	pr_err("\n");
+	print_shadow_for_address(untagged_addr);
+	kasan_end_report(&flags);
+}
 
 #define DEFINE_ASAN_REPORT_LOAD(size)                     \
 void __asan_report_load##size##_noabort(unsigned long addr) \
-- 
2.16.2.395.g2e18187dfd-goog

* [RFC PATCH 11/14] khwasan: add brk handler for inline instrumentation
  2018-03-02 19:44 [RFC PATCH 00/14] khwasan: kernel hardware assisted address sanitizer Andrey Konovalov
                   ` (9 preceding siblings ...)
  2018-03-02 19:44 ` [RFC PATCH 10/14] khwasan: add bug reporting routines Andrey Konovalov
@ 2018-03-02 19:44 ` Andrey Konovalov
  2018-03-05 14:51   ` Mark Rutland
  2018-03-02 19:44 ` [RFC PATCH 12/14] khwasan, jbd2: add khwasan annotations Andrey Konovalov
                   ` (3 subsequent siblings)
  14 siblings, 1 reply; 65+ messages in thread
From: Andrey Konovalov @ 2018-03-02 19:44 UTC (permalink / raw)
  To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Mark Rutland, Ard Biesheuvel, Yury Norov,
	Nick Desaulniers, Marc Zyngier, Bob Picco, Suzuki K Poulose,
	Kristina Martsenko, Punit Agrawal, Dave Martin, James Morse,
	Julien Thierry, Michael Weiser, Steve Capper, Ingo Molnar,
	Thomas Gleixner, Sandipan Das, Paul Lawrence, David Woodhouse,
	Kees Cook, Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann,
	kasan-dev, linux-doc, linux-kernel, linux-arm-kernel, linux-ext4,
	linux-sparse, linux-mm, linux-kbuild, Kostya Serebryany,
	Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley,
	Ruben Ayrapetyan, Kees Cook, Jann Horn, Mark Brand
  Cc: Andrey Konovalov

KHWASAN inline instrumentation mode (which embeds checks of shadow memory
into the generated code, instead of inserting a callback) generates a brk
instruction when a tag mismatch is detected.

This commit adds a KHWASAN brk handler that decodes the immediate value
passed to the brk instruction (to extract information about the memory
access that triggered the mismatch), reads the register values (x0 contains
the faulting address) and reports the bug.
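
For example, assuming the compiler encodes the access information in the
low bits of the brk immediate the way this handler expects, a recoverable
8-byte write check would be compiled to "brk #0x933", and the handler
would decode the resulting ESR value as follows:

unsigned int esr = 0xf2000933;	/* ESR_EL1 value for "brk #0x933" */
bool recover = esr & 0x20;	/* true: report and continue execution */
bool write = esr & 0x10;	/* true: the access was a write */
size_t size = 1 << (esr & 0xf);	/* 1 << 3 == 8 bytes */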
---
 arch/arm64/include/asm/brk-imm.h |  2 ++
 arch/arm64/kernel/traps.c        | 40 ++++++++++++++++++++++++++++++++
 2 files changed, 42 insertions(+)

diff --git a/arch/arm64/include/asm/brk-imm.h b/arch/arm64/include/asm/brk-imm.h
index ed693c5bcec0..e4a7013321dc 100644
--- a/arch/arm64/include/asm/brk-imm.h
+++ b/arch/arm64/include/asm/brk-imm.h
@@ -16,10 +16,12 @@
  * 0x400: for dynamic BRK instruction
  * 0x401: for compile time BRK instruction
  * 0x800: kernel-mode BUG() and WARN() traps
+ * 0x9xx: KHWASAN trap (allowed values 0x900 - 0x9ff)
  */
 #define FAULT_BRK_IMM			0x100
 #define KGDB_DYN_DBG_BRK_IMM		0x400
 #define KGDB_COMPILED_DBG_BRK_IMM	0x401
 #define BUG_BRK_IMM			0x800
+#define KHWASAN_BRK_IMM			0x900
 
 #endif
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index eb2d15147e8d..5df8cdf5af13 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -35,6 +35,7 @@
 #include <linux/sizes.h>
 #include <linux/syscalls.h>
 #include <linux/mm_types.h>
+#include <linux/kasan.h>
 
 #include <asm/atomic.h>
 #include <asm/bug.h>
@@ -771,6 +772,38 @@ static struct break_hook bug_break_hook = {
 	.fn = bug_handler,
 };
 
+#ifdef CONFIG_KASAN_TAGS
+static int khwasan_handler(struct pt_regs *regs, unsigned int esr)
+{
+	bool recover = esr & 0x20;
+	bool write = esr & 0x10;
+	size_t size = 1 << (esr & 0xf);
+	u64 addr = regs->regs[0];
+	u64 pc = regs->pc;
+
+	if (user_mode(regs))
+		return DBG_HOOK_ERROR;
+
+	khwasan_report(addr, size, write, pc);
+
+	if (!recover)
+		die("Oops - KHWASAN", regs, 0);
+
+	/* If thread survives, skip over the BRK instruction and continue: */
+	arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE);
+	return DBG_HOOK_HANDLED;
+}
+
+#define KHWASAN_ESR_VAL (0xf2000000 | KHWASAN_BRK_IMM)
+#define KHWASAN_ESR_MASK 0xffffff00
+
+static struct break_hook khwasan_break_hook = {
+	.esr_val = KHWASAN_ESR_VAL,
+	.esr_mask = KHWASAN_ESR_MASK,
+	.fn = khwasan_handler,
+};
+#endif
+
 /*
  * Initial handler for AArch64 BRK exceptions
  * This handler only used until debug_traps_init().
@@ -778,6 +811,10 @@ static struct break_hook bug_break_hook = {
 int __init early_brk64(unsigned long addr, unsigned int esr,
 		struct pt_regs *regs)
 {
+#ifdef CONFIG_KASAN_TAGS
+	if ((esr & KHWASAN_ESR_MASK) == KHWASAN_ESR_VAL)
+		return khwasan_handler(regs, esr) != DBG_HOOK_HANDLED;
+#endif
 	return bug_handler(regs, esr) != DBG_HOOK_HANDLED;
 }
 
@@ -785,4 +822,7 @@ int __init early_brk64(unsigned long addr, unsigned int esr,
 void __init trap_init(void)
 {
 	register_break_hook(&bug_break_hook);
+#ifdef CONFIG_KASAN_TAGS
+	register_break_hook(&khwasan_break_hook);
+#endif
 }
-- 
2.16.2.395.g2e18187dfd-goog

* [RFC PATCH 12/14] khwasan, jbd2: add khwasan annotations
  2018-03-02 19:44 [RFC PATCH 00/14] khwasan: kernel hardware assisted address sanitizer Andrey Konovalov
                   ` (10 preceding siblings ...)
  2018-03-02 19:44 ` [RFC PATCH 11/14] khwasan: add brk handler for inline instrumentation Andrey Konovalov
@ 2018-03-02 19:44 ` Andrey Konovalov
  2018-03-02 19:44 ` [RFC PATCH 13/14] khwasan: update kasan documentation Andrey Konovalov
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 65+ messages in thread
From: Andrey Konovalov @ 2018-03-02 19:44 UTC (permalink / raw)
  To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Mark Rutland, Ard Biesheuvel, Yury Norov,
	Nick Desaulniers, Marc Zyngier, Bob Picco, Suzuki K Poulose,
	Kristina Martsenko, Punit Agrawal, Dave Martin, James Morse,
	Julien Thierry, Michael Weiser, Steve Capper, Ingo Molnar,
	Thomas Gleixner, Sandipan Das, Paul Lawrence, David Woodhouse,
	Kees Cook, Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann,
	kasan-dev, linux-doc, linux-kernel, linux-arm-kernel, linux-ext4,
	linux-sparse, linux-mm, linux-kbuild, Kostya Serebryany,
	Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley,
	Ruben Ayrapetyan, Kees Cook, Jann Horn, Mark Brand
  Cc: Andrey Konovalov

This patch is not meant to be accepted as is, but I'm including it to
illustrate the case where using the top byte of kernel pointers causes
issues with the current code.

What happens here is that the jbd2/journal.c code was written to account
for archs that don't keep high memory mapped all the time, but rather map
and unmap particular pages when needed. Instead of storing a pointer to the
kernel memory, the journal code saves the address of the page structure and
the offset within that page for later use. Those pages are then mapped and
unmapped with kmap/kunmap when necessary, and virt_to_page is used to get
the page structure for a given virtual address. For arm64 (which keeps high
memory mapped all the time), kmap is turned into a page_address call.

The issue is that with the virt_to_page + page_address sequence the top
byte value of the original pointer gets lost. Right now this is fixed by
simply adding annotations to the code that fix up the top byte values, but
a more generic solution will probably be needed.
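
A minimal sketch of how the tag gets lost (assuming a tagged pointer
returned by the slab allocator):

void *p = kmalloc(64, GFP_KERNEL);	/* tagged, e.g. top byte 0xb4 */
struct page *page = virt_to_page(p);	/* struct page has no room for the tag */
unsigned int offset = offset_in_page(p);

/* ... later ... */
void *q = page_address(page) + offset;	/* top byte is back to native 0xff */

/* Accesses through q are now checked against shadow memory that still
 * holds the tag 0xb4, so they would be reported as invalid. */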
---
 fs/jbd2/journal.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
index 3fbf48ec2188..8b65d2c49b61 100644
--- a/fs/jbd2/journal.c
+++ b/fs/jbd2/journal.c
@@ -365,6 +365,7 @@ int jbd2_journal_write_metadata_buffer(transaction_t *transaction,
 	unsigned int new_offset;
 	struct buffer_head *bh_in = jh2bh(jh_in);
 	journal_t *journal = transaction->t_journal;
+	u8 new_page_tag = 0xff;
 
 	/*
 	 * The buffer really shouldn't be locked: only the current committing
@@ -392,12 +393,14 @@ int jbd2_journal_write_metadata_buffer(transaction_t *transaction,
 		done_copy_out = 1;
 		new_page = virt_to_page(jh_in->b_frozen_data);
 		new_offset = offset_in_page(jh_in->b_frozen_data);
+		new_page_tag = khwasan_get_tag(jh_in->b_frozen_data);
 	} else {
 		new_page = jh2bh(jh_in)->b_page;
 		new_offset = offset_in_page(jh2bh(jh_in)->b_data);
 	}
 
 	mapped_data = kmap_atomic(new_page);
+	mapped_data = khwasan_set_tag(mapped_data, new_page_tag);
 	/*
 	 * Fire data frozen trigger if data already wasn't frozen.  Do this
 	 * before checking for escaping, as the trigger may modify the magic
@@ -438,10 +441,12 @@ int jbd2_journal_write_metadata_buffer(transaction_t *transaction,
 
 		jh_in->b_frozen_data = tmp;
 		mapped_data = kmap_atomic(new_page);
+		mapped_data = khwasan_set_tag(mapped_data, new_page_tag);
 		memcpy(tmp, mapped_data + new_offset, bh_in->b_size);
 		kunmap_atomic(mapped_data);
 
 		new_page = virt_to_page(tmp);
+		new_page_tag = khwasan_get_tag(tmp);
 		new_offset = offset_in_page(tmp);
 		done_copy_out = 1;
 
@@ -459,6 +464,7 @@ int jbd2_journal_write_metadata_buffer(transaction_t *transaction,
 	 */
 	if (do_escape) {
 		mapped_data = kmap_atomic(new_page);
+		mapped_data = khwasan_set_tag(mapped_data, new_page_tag);
 		*((unsigned int *)(mapped_data + new_offset)) = 0;
 		kunmap_atomic(mapped_data);
 	}
-- 
2.16.2.395.g2e18187dfd-goog

* [RFC PATCH 13/14] khwasan: update kasan documentation
  2018-03-02 19:44 [RFC PATCH 00/14] khwasan: kernel hardware assisted address sanitizer Andrey Konovalov
                   ` (11 preceding siblings ...)
  2018-03-02 19:44 ` [RFC PATCH 12/14] khwasan, jbd2: add khwasan annotations Andrey Konovalov
@ 2018-03-02 19:44 ` Andrey Konovalov
  2018-03-02 19:44 ` [RFC PATCH 14/14] khwasan: default the instrumentation mode to inline Andrey Konovalov
  2018-03-04  9:16 ` [RFC PATCH 00/14] khwasan: kernel hardware assisted address sanitizer Geert Uytterhoeven
  14 siblings, 0 replies; 65+ messages in thread
From: Andrey Konovalov @ 2018-03-02 19:44 UTC (permalink / raw)
  To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Mark Rutland, Ard Biesheuvel, Yury Norov,
	Nick Desaulniers, Marc Zyngier, Bob Picco, Suzuki K Poulose,
	Kristina Martsenko, Punit Agrawal, Dave Martin, James Morse,
	Julien Thierry, Michael Weiser, Steve Capper, Ingo Molnar,
	Thomas Gleixner, Sandipan Das, Paul Lawrence, David Woodhouse,
	Kees Cook, Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann,
	kasan-dev, linux-doc, linux-kernel, linux-arm-kernel, linux-ext4,
	linux-sparse, linux-mm, linux-kbuild, Kostya Serebryany,
	Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley,
	Ruben Ayrapetyan, Kees Cook, Jann Horn, Mark Brand
  Cc: Andrey Konovalov

This patch updates KASAN documentation to reflect the addition of KHWASAN.
---
 Documentation/dev-tools/kasan.rst | 212 +++++++++++++++++-------------
 1 file changed, 122 insertions(+), 90 deletions(-)

diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst
index f7a18f274357..a817f4c4285c 100644
--- a/Documentation/dev-tools/kasan.rst
+++ b/Documentation/dev-tools/kasan.rst
@@ -8,11 +8,18 @@ KernelAddressSANitizer (KASAN) is a dynamic memory error detector. It provides
 a fast and comprehensive solution for finding use-after-free and out-of-bounds
 bugs.
 
-KASAN uses compile-time instrumentation for checking every memory access,
-therefore you will need a GCC version 4.9.2 or later. GCC 5.0 or later is
-required for detection of out-of-bounds accesses to stack or global variables.
+KASAN has two modes: classic KASAN (a classic version, similar to user space
+ASan) and KHWASAN (a version based on memory tagging, similar to user space
+HWASan).
 
-Currently KASAN is supported only for the x86_64 and arm64 architectures.
+KASAN uses compile-time instrumentation to insert validity checks before every
+memory access, and therefore requires a compiler version that supports that.
+For classic KASAN you need GCC version 4.9.2 or later. GCC 5.0 or later is
+required for detection of out-of-bounds accesses on stack and global variables.
+TODO: compiler requirements for KHWASAN
+
+Currently classic KASAN is supported for the x86_64, arm64 and xtensa
+architectures, and KHWASAN is supported only for arm64.
 
 Usage
 -----
@@ -21,12 +28,14 @@ To enable KASAN configure kernel with::
 
 	  CONFIG_KASAN = y
 
-and choose between CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE. Outline and
-inline are compiler instrumentation types. The former produces smaller binary
-the latter is 1.1 - 2 times faster. Inline instrumentation requires a GCC
+and choose between CONFIG_KASAN_CLASSIC (to enable classic KASAN) and
+CONFIG_KASAN_TAGS (to enable KHWASAN). You also need to choose between
+CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE. Outline and inline are compiler
+instrumentation types. The former produces a smaller binary, the latter is
+1.1 - 2 times faster. For classic KASAN, inline instrumentation requires GCC
 version 5.0 or later.
 
-KASAN works with both SLUB and SLAB memory allocators.
+Both KASAN modes work with both SLUB and SLAB memory allocators.
 For better bug detection and nicer reporting, enable CONFIG_STACKTRACE.
 
 To disable instrumentation for specific files or directories, add a line
@@ -43,85 +52,80 @@ similar to the following to the respective kernel Makefile:
 Error reports
 ~~~~~~~~~~~~~
 
-A typical out of bounds access report looks like this::
+A typical out-of-bounds access classic KASAN report looks like this::
 
     ==================================================================
-    BUG: AddressSanitizer: out of bounds access in kmalloc_oob_right+0x65/0x75 [test_kasan] at addr ffff8800693bc5d3
-    Write of size 1 by task modprobe/1689
-    =============================================================================
-    BUG kmalloc-128 (Not tainted): kasan error
-    -----------------------------------------------------------------------------
-
-    Disabling lock debugging due to kernel taint
-    INFO: Allocated in kmalloc_oob_right+0x3d/0x75 [test_kasan] age=0 cpu=0 pid=1689
-     __slab_alloc+0x4b4/0x4f0
-     kmem_cache_alloc_trace+0x10b/0x190
-     kmalloc_oob_right+0x3d/0x75 [test_kasan]
-     init_module+0x9/0x47 [test_kasan]
-     do_one_initcall+0x99/0x200
-     load_module+0x2cb3/0x3b20
-     SyS_finit_module+0x76/0x80
-     system_call_fastpath+0x12/0x17
-    INFO: Slab 0xffffea0001a4ef00 objects=17 used=7 fp=0xffff8800693bd728 flags=0x100000000004080
-    INFO: Object 0xffff8800693bc558 @offset=1368 fp=0xffff8800693bc720
-
-    Bytes b4 ffff8800693bc548: 00 00 00 00 00 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a  ........ZZZZZZZZ
-    Object ffff8800693bc558: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
-    Object ffff8800693bc568: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
-    Object ffff8800693bc578: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
-    Object ffff8800693bc588: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
-    Object ffff8800693bc598: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
-    Object ffff8800693bc5a8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
-    Object ffff8800693bc5b8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
-    Object ffff8800693bc5c8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b a5  kkkkkkkkkkkkkkk.
-    Redzone ffff8800693bc5d8: cc cc cc cc cc cc cc cc                          ........
-    Padding ffff8800693bc718: 5a 5a 5a 5a 5a 5a 5a 5a                          ZZZZZZZZ
-    CPU: 0 PID: 1689 Comm: modprobe Tainted: G    B          3.18.0-rc1-mm1+ #98
-    Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.7.5-0-ge51488c-20140602_164612-nilsson.home.kraxel.org 04/01/2014
-     ffff8800693bc000 0000000000000000 ffff8800693bc558 ffff88006923bb78
-     ffffffff81cc68ae 00000000000000f3 ffff88006d407600 ffff88006923bba8
-     ffffffff811fd848 ffff88006d407600 ffffea0001a4ef00 ffff8800693bc558
+    BUG: KASAN: slab-out-of-bounds in kmalloc_oob_right+0xa8/0xbc [test_kasan]
+    Write of size 1 at addr ffff8800696f3d3b by task insmod/2734
+    
+    CPU: 0 PID: 2734 Comm: insmod Not tainted 4.15.0+ #98
+    Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
     Call Trace:
-     [<ffffffff81cc68ae>] dump_stack+0x46/0x58
-     [<ffffffff811fd848>] print_trailer+0xf8/0x160
-     [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
-     [<ffffffff811ff0f5>] object_err+0x35/0x40
-     [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
-     [<ffffffff8120b9fa>] kasan_report_error+0x38a/0x3f0
-     [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
-     [<ffffffff8120b344>] ? kasan_unpoison_shadow+0x14/0x40
-     [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
-     [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
-     [<ffffffff8120a995>] __asan_store1+0x75/0xb0
-     [<ffffffffa0002601>] ? kmem_cache_oob+0x1d/0xc3 [test_kasan]
-     [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
-     [<ffffffffa0002065>] kmalloc_oob_right+0x65/0x75 [test_kasan]
-     [<ffffffffa00026b0>] init_module+0x9/0x47 [test_kasan]
-     [<ffffffff810002d9>] do_one_initcall+0x99/0x200
-     [<ffffffff811e4e5c>] ? __vunmap+0xec/0x160
-     [<ffffffff81114f63>] load_module+0x2cb3/0x3b20
-     [<ffffffff8110fd70>] ? m_show+0x240/0x240
-     [<ffffffff81115f06>] SyS_finit_module+0x76/0x80
-     [<ffffffff81cd3129>] system_call_fastpath+0x12/0x17
+     __dump_stack lib/dump_stack.c:17
+     dump_stack+0x83/0xbc lib/dump_stack.c:53
+     print_address_description+0x73/0x280 mm/kasan/report.c:254
+     kasan_report_error mm/kasan/report.c:352
+     kasan_report+0x10e/0x220 mm/kasan/report.c:410
+     __asan_report_store1_noabort+0x17/0x20 mm/kasan/report.c:505
+     kmalloc_oob_right+0xa8/0xbc [test_kasan] lib/test_kasan.c:42
+     kmalloc_tests_init+0x16/0x769 [test_kasan]
+     do_one_initcall+0x9e/0x240 init/main.c:832
+     do_init_module+0x1b6/0x542 kernel/module.c:3462
+     load_module+0x6042/0x9030 kernel/module.c:3786
+     SYSC_init_module+0x18f/0x1c0 kernel/module.c:3858
+     SyS_init_module+0x9/0x10 kernel/module.c:3841
+     do_syscall_64+0x198/0x480 arch/x86/entry/common.c:287
+     entry_SYSCALL_64_after_hwframe+0x21/0x86 arch/x86/entry/entry_64.S:251
+    RIP: 0033:0x7fdd79df99da
+    RSP: 002b:00007fff2229bdf8 EFLAGS: 00000202 ORIG_RAX: 00000000000000af
+    RAX: ffffffffffffffda RBX: 000055c408121190 RCX: 00007fdd79df99da
+    RDX: 00007fdd7a0b8f88 RSI: 0000000000055670 RDI: 00007fdd7a47e000
+    RBP: 000055c4081200b0 R08: 0000000000000003 R09: 0000000000000000
+    R10: 00007fdd79df5d0a R11: 0000000000000202 R12: 00007fdd7a0b8f88
+    R13: 000055c408120090 R14: 0000000000000000 R15: 0000000000000000
+    
+    Allocated by task 2734:
+     save_stack+0x43/0xd0 mm/kasan/common.c:176
+     set_track+0x20/0x30 mm/kasan/common.c:188
+     kasan_kmalloc+0x9a/0xc0 mm/kasan/kasan.c:372
+     kmem_cache_alloc_trace+0xcd/0x1a0 mm/slub.c:2761
+     kmalloc ./include/linux/slab.h:512
+     kmalloc_oob_right+0x56/0xbc [test_kasan] lib/test_kasan.c:36
+     kmalloc_tests_init+0x16/0x769 [test_kasan]
+     do_one_initcall+0x9e/0x240 init/main.c:832
+     do_init_module+0x1b6/0x542 kernel/module.c:3462
+     load_module+0x6042/0x9030 kernel/module.c:3786
+     SYSC_init_module+0x18f/0x1c0 kernel/module.c:3858
+     SyS_init_module+0x9/0x10 kernel/module.c:3841
+     do_syscall_64+0x198/0x480 arch/x86/entry/common.c:287
+     entry_SYSCALL_64_after_hwframe+0x21/0x86 arch/x86/entry/entry_64.S:251
+    
+    The buggy address belongs to the object at ffff8800696f3cc0
+     which belongs to the cache kmalloc-128 of size 128
+    The buggy address is located 123 bytes inside of
+     128-byte region [ffff8800696f3cc0, ffff8800696f3d40)
+    The buggy address belongs to the page:
+    page:ffffea0001a5bcc0 count:1 mapcount:0 mapping:          (null) index:0x0
+    flags: 0x100000000000100(slab)
+    raw: 0100000000000100 0000000000000000 0000000000000000 0000000180150015
+    raw: ffffea0001a8ce40 0000000300000003 ffff88006d001640 0000000000000000
+    page dumped because: kasan: bad access detected
+    
     Memory state around the buggy address:
-     ffff8800693bc300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
-     ffff8800693bc380: fc fc 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
-     ffff8800693bc400: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
-     ffff8800693bc480: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
-     ffff8800693bc500: fc fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00
-    >ffff8800693bc580: 00 00 00 00 00 00 00 00 00 00 03 fc fc fc fc fc
-                                                 ^
-     ffff8800693bc600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
-     ffff8800693bc680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
-     ffff8800693bc700: fc fc fc fc fb fb fb fb fb fb fb fb fb fb fb fb
-     ffff8800693bc780: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
-     ffff8800693bc800: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+     ffff8800696f3c00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
+     ffff8800696f3c80: fc fc fc fc fc fc fc fc 00 00 00 00 00 00 00 00
+    >ffff8800696f3d00: 00 00 00 00 00 00 00 03 fc fc fc fc fc fc fc fc
+                                            ^
+     ffff8800696f3d80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 fc fc
+     ffff8800696f3e00: fc fc fc fc fc fc fc fc fb fb fb fb fb fb fb fb
     ==================================================================
 
-The header of the report discribe what kind of bug happened and what kind of
-access caused it. It's followed by the description of the accessed slub object
-(see 'SLUB Debug output' section in Documentation/vm/slub.txt for details) and
-the description of the accessed memory page.
+The header of the report provides a short summary of what kind of bug happened
+and what kind of access caused it. It's followed by a stack trace of the bad
+access, a stack trace of where the accessed memory was allocated (in case the
+bad access happened on a slab object), and a stack trace of where the object
+was freed (in case of a use-after-free bug report). Next comes a description
+of the accessed slab object and information about the accessed memory page.
 
 In the last section the report shows memory state around the accessed address.
 Reading this part requires some understanding of how KASAN works.
@@ -138,18 +142,24 @@ inaccessible memory like redzones or freed memory (see mm/kasan/kasan.h).
 In the report above the arrows point to the shadow byte 03, which means that
 the accessed address is partially accessible.
 
+For KHWASAN this last report section shows the memory tags around the accessed
+address (see the Implementation details section).
+
 
 Implementation details
 ----------------------
 
+Classic KASAN
+~~~~~~~~~~~~~
+
 From a high level, our approach to memory error detection is similar to that
 of kmemcheck: use shadow memory to record whether each byte of memory is safe
-to access, and use compile-time instrumentation to check shadow memory on each
-memory access.
+to access, and use compile-time instrumentation to insert checks of shadow
+memory on each memory access.
 
-AddressSanitizer dedicates 1/8 of kernel memory to its shadow memory
-(e.g. 16TB to cover 128TB on x86_64) and uses direct mapping with a scale and
-offset to translate a memory address to its corresponding shadow address.
+Classic KASAN dedicates 1/8th of kernel memory to its shadow memory (e.g. 16TB
+to cover 128TB on x86_64) and uses direct mapping with a scale and offset to
+translate a memory address to its corresponding shadow address.
 
 Here is the function which translates an address to its corresponding shadow
 address::
@@ -162,12 +172,34 @@ address::
 
 where ``KASAN_SHADOW_SCALE_SHIFT = 3``.
 
-Compile-time instrumentation used for checking memory accesses. Compiler inserts
-function calls (__asan_load*(addr), __asan_store*(addr)) before each memory
-access of size 1, 2, 4, 8 or 16. These functions check whether memory access is
-valid or not by checking corresponding shadow memory.
+Compile-time instrumentation is used to insert memory access checks. The
+compiler inserts function calls (__asan_load*(addr), __asan_store*(addr))
+before each memory access of size 1, 2, 4, 8 or 16. These functions check
+whether the memory access is valid by checking the corresponding shadow memory.
 
 GCC 5.0 has possibility to perform inline instrumentation. Instead of making
 function calls GCC directly inserts the code to check the shadow memory.
 This option significantly enlarges kernel but it gives x1.1-x2 performance
 boost over outline instrumented kernel.
+
+KHWASAN
+~~~~~~~
+
+KHWASAN uses the Top Byte Ignore (TBI) feature of modern arm64 CPUs to store
+a pointer tag in the top byte of kernel pointers. KHWASAN also uses shadow
+memory to store memory tags associated with each 16-byte memory cell (therefore
+it dedicates 1/16th of the kernel memory for shadow memory).
+
+On each memory allocation KHWASAN generates a random tag, tags the allocated
+memory with this tag, and embeds this tag into the returned pointer. KHWASAN
+uses compile-time instrumentation to insert checks before each memory access.
+These checks make sure that the tag of the memory that is being accessed is
+equal to the tag of the pointer that is used to access this memory. In case
+of a tag mismatch KHWASAN prints a bug report.
+
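+Conceptually, the check inserted before each memory access is similar to the
+following simplified C sketch (the helper names here are illustrative, not
+the actual function names)::
+
+    u8 *shadow = kasan_mem_to_shadow(untag(ptr));
+
+    if (*shadow != get_tag(ptr))
+        report_tag_mismatch(ptr);
+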
+KHWASAN also has two instrumentation modes (outline, which emits callbacks to
+check memory accesses, and inline, which performs the shadow memory checks
+inline). In outline instrumentation mode, a bug report is simply printed
+from the function that performs the access check. With inline instrumentation
+a brk instruction is emitted by the compiler, and a dedicated brk handler is
+used to print KHWASAN reports.
-- 
2.16.2.395.g2e18187dfd-goog


^ permalink raw reply related	[flat|nested] 65+ messages in thread

* [RFC PATCH 14/14] khwasan: default the instrumentation mode to inline
  2018-03-02 19:44 [RFC PATCH 00/14] khwasan: kernel hardware assisted address sanitizer Andrey Konovalov
                   ` (12 preceding siblings ...)
  2018-03-02 19:44 ` [RFC PATCH 13/14] khwasan: update kasan documentation Andrey Konovalov
@ 2018-03-02 19:44 ` Andrey Konovalov
  2018-03-05 14:54   ` Mark Rutland
  2018-03-13 14:44   ` Alexander Potapenko
  2018-03-04  9:16 ` [RFC PATCH 00/14] khwasan: kernel hardware assisted address sanitizer Geert Uytterhoeven
  14 siblings, 2 replies; 65+ messages in thread
From: Andrey Konovalov @ 2018-03-02 19:44 UTC (permalink / raw)
  To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Mark Rutland, Ard Biesheuvel, Yury Norov,
	Nick Desaulniers, Marc Zyngier, Bob Picco, Suzuki K Poulose,
	Kristina Martsenko, Punit Agrawal, Dave Martin, James Morse,
	Julien Thierry, Michael Weiser, Steve Capper, Ingo Molnar,
	Thomas Gleixner, Sandipan Das, Paul Lawrence, David Woodhouse,
	Kees Cook, Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann,
	kasan-dev, linux-doc, linux-kernel, linux-arm-kernel, linux-ext4,
	linux-sparse, linux-mm, linux-kbuild, Kostya Serebryany,
	Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley,
	Ruben Ayrapetyan, Kees Cook, Jann Horn, Mark Brand
  Cc: Andrey Konovalov

There are two reasons to use outline instrumentation:
1. Outline instrumentation reduces the size of the kernel text, and should
   be used where this size matters.
2. Outline instrumentation is less invasive and can be used by KASAN
   developers for debugging, when it's not clear whether some issue is
   caused by KASAN or by something else.

In all other cases inline instrumentation is preferable, since it's
faster.

This patch changes the default instrumentation mode to inline.
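
For illustration, with classic KASAN an instrumented 8-byte load compiles in
outline mode to roughly the following (a simplified sketch, not the exact
code the compiler generates):

	__asan_load8(addr);
	val = *(u64 *)addr;

while in inline mode the compiler emits the shadow check directly, along the
lines of:

	u8 *shadow = (u8 *)((addr >> 3) + KASAN_SHADOW_OFFSET);
	if (*shadow)
		__asan_report_load8_noabort(addr);
	val = *(u64 *)addr;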
---
 lib/Kconfig.kasan | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index ab34e7d7d3a7..8ea6ae26b4a3 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -70,7 +70,7 @@ config KASAN_EXTRA
 choice
 	prompt "Instrumentation type"
 	depends on KASAN
-	default KASAN_OUTLINE
+	default KASAN_INLINE
 
 config KASAN_OUTLINE
 	bool "Outline instrumentation"
-- 
2.16.2.395.g2e18187dfd-goog


^ permalink raw reply related	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 05/14] khwasan: initialize shadow to 0xff
  2018-03-02 19:44 ` [RFC PATCH 05/14] khwasan: initialize shadow to 0xff Andrey Konovalov
@ 2018-03-02 21:55   ` Evgenii Stepanov
  0 siblings, 0 replies; 65+ messages in thread
From: Evgenii Stepanov @ 2018-03-02 21:55 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Mark Rutland, Ard Biesheuvel, Yury Norov,
	Nick Desaulniers, Marc Zyngier, Bob Picco, Suzuki K Poulose,
	Kristina Martsenko, Punit Agrawal, Dave Martin, James Morse,
	Julien Thierry, Michael Weiser, Steve Capper, Ingo Molnar,
	Thomas Gleixner, Sandipan Das, Paul Lawrence, David Woodhouse,
	Kees Cook, Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann,
	kasan-dev, linux-doc, linux-kernel, linux-arm-kernel, linux-ext4,
	linux-sparse, linux-mm, linux-kbuild, Kostya Serebryany,
	Lee Smith, Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan,
	Kees Cook, Jann Horn, Mark Brand

If this memset has noticeable performance/memory impact, we could
treat memory tags as bitwise negation of pointer tags, and then shadow
would be initialized to 0 instead of 0xff.
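
I.e., something like this illustrative sketch (the helper name is
hypothetical):

	/* Hypothetical scheme: make the memory tag the bitwise NOT of the
	 * pointer tag, so that zero-initialized shadow (memory tag 0x00)
	 * matches the native 0xff pointer tag without any extra memset. */
	static inline u8 khwasan_mem_tag(u8 ptr_tag)
	{
		return (u8)~ptr_tag;
	}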

On Fri, Mar 2, 2018 at 11:44 AM, Andrey Konovalov <andreyknvl@google.com> wrote:
> A KHWASAN shadow memory cell contains a memory tag that corresponds to
> the tag in the top byte of the pointer that points to that memory. The
> native top byte value of kernel pointers is 0xff, so with KHWASAN we
> need to initialize shadow memory to 0xff. This commit does that.
> ---
>  arch/arm64/mm/kasan_init.c | 11 ++++++++++-
>  include/linux/kasan.h      |  8 ++++++++
>  mm/kasan/common.c          |  7 +++++++
>  3 files changed, 25 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> index dabfc1ecda3d..d4bceba60010 100644
> --- a/arch/arm64/mm/kasan_init.c
> +++ b/arch/arm64/mm/kasan_init.c
> @@ -90,6 +90,10 @@ static void __init kasan_pte_populate(pmd_t *pmdp, unsigned long addr,
>         do {
>                 phys_addr_t page_phys = early ? __pa_symbol(kasan_zero_page)
>                                               : kasan_alloc_zeroed_page(node);
> +#if KASAN_SHADOW_INIT != 0
> +               if (!early)
> +                       memset(__va(page_phys), KASAN_SHADOW_INIT, PAGE_SIZE);
> +#endif
>                 next = addr + PAGE_SIZE;
>                 set_pte(ptep, pfn_pte(__phys_to_pfn(page_phys), PAGE_KERNEL));
>         } while (ptep++, addr = next, addr != end && pte_none(READ_ONCE(*ptep)));
> @@ -139,6 +143,11 @@ asmlinkage void __init kasan_early_init(void)
>                 KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
>         BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_START, PGDIR_SIZE));
>         BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, PGDIR_SIZE));
> +
> +#if KASAN_SHADOW_INIT != 0
> +       memset(kasan_zero_page, KASAN_SHADOW_INIT, PAGE_SIZE);
> +#endif
> +
>         kasan_pgd_populate(KASAN_SHADOW_START, KASAN_SHADOW_END, NUMA_NO_NODE,
>                            true);
>  }
> @@ -235,7 +244,7 @@ void __init kasan_init(void)
>                 set_pte(&kasan_zero_pte[i],
>                         pfn_pte(sym_to_pfn(kasan_zero_page), PAGE_KERNEL_RO));
>
> -       memset(kasan_zero_page, 0, PAGE_SIZE);
> +       memset(kasan_zero_page, KASAN_SHADOW_INIT, PAGE_SIZE);
>         cpu_replace_ttbr1(lm_alias(swapper_pg_dir));
>
>         /* At this point kasan is fully initialized. Enable error messages */
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 3c45e273a936..c34f413b0eac 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -139,6 +139,8 @@ static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }
>
>  #ifdef CONFIG_KASAN_CLASSIC
>
> +#define KASAN_SHADOW_INIT 0
> +
>  void kasan_cache_shrink(struct kmem_cache *cache);
>  void kasan_cache_shutdown(struct kmem_cache *cache);
>
> @@ -149,4 +151,10 @@ static inline void kasan_cache_shutdown(struct kmem_cache *cache) {}
>
>  #endif /* CONFIG_KASAN_CLASSIC */
>
> +#ifdef CONFIG_KASAN_TAGS
> +
> +#define KASAN_SHADOW_INIT 0xff
> +
> +#endif /* CONFIG_KASAN_TAGS */
> +
>  #endif /* LINUX_KASAN_H */
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 08f6c8cb9f84..f4ccb9425655 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -253,6 +253,9 @@ int kasan_module_alloc(void *addr, size_t size)
>                         __builtin_return_address(0));
>
>         if (ret) {
> +#if KASAN_SHADOW_INIT != 0
> +               __memset(ret, KASAN_SHADOW_INIT, shadow_size);
> +#endif
>                 find_vm_area(addr)->flags |= VM_KASAN;
>                 kmemleak_ignore(ret);
>                 return 0;
> @@ -297,6 +300,10 @@ static int __meminit kasan_mem_notifier(struct notifier_block *nb,
>                 if (!ret)
>                         return NOTIFY_BAD;
>
> +#if KASAN_SHADOW_INIT != 0
> +               __memset(ret, KASAN_SHADOW_INIT, shadow_end - shadow_start);
> +#endif
> +
>                 kmemleak_ignore(ret);
>                 return NOTIFY_OK;
>         }
> --
> 2.16.2.395.g2e18187dfd-goog
>


^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 00/14] khwasan: kernel hardware assisted address sanitizer
  2018-03-02 19:44 [RFC PATCH 00/14] khwasan: kernel hardware assisted address sanitizer Andrey Konovalov
                   ` (13 preceding siblings ...)
  2018-03-02 19:44 ` [RFC PATCH 14/14] khwasan: default the instrumentation mode to inline Andrey Konovalov
@ 2018-03-04  9:16 ` Geert Uytterhoeven
  2018-03-04 11:44   ` Ingo Molnar
  14 siblings, 1 reply; 65+ messages in thread
From: Geert Uytterhoeven @ 2018-03-04  9:16 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Mark Rutland, Ard Biesheuvel, Yury Norov,
	Nick Desaulniers, Marc Zyngier, Bob Picco, Suzuki K Poulose,
	Kristina Martsenko, Punit Agrawal, Dave Martin, James Morse,
	Julien Thierry, Michael Weiser, Steve Capper, Ingo Molnar,
	Thomas Gleixner, Sandipan Das, Paul Lawrence, David Woodhouse,
	Kees Cook, Josh Poimboeuf, Arnd Bergmann, kasan-dev, linux-doc,
	Linux Kernel Mailing List, Linux ARM, linux-ext4, linux-sparse,
	Linux MM, linux-kbuild, Kostya Serebryany, Evgeniy Stepanov,
	Lee Smith, Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan,
	Kees Cook, Jann Horn, Mark Brand

Hi Andrey,

On Fri, Mar 2, 2018 at 8:44 PM, Andrey Konovalov <andreyknvl@google.com> wrote:
> This patchset adds a new mode to KASAN, which is called KHWASAN (Kernel
> HardWare assisted Address SANitizer). There's still some work to do and
> there are a few TODOs in the code, so I'm publishing this as a RFC to
> collect some initial feedback.
>
> The plan is to implement HWASan [1] for the kernel with the incentive,
> that it's going to have comparable performance, but in the same time
> consume much less memory, trading that off for somewhat imprecise bug
> detection and being supported only for arm64.
>
> The overall idea of the approach used by KHWASAN is the following:
>
> 1. By using the Top Byte Ignore arm64 CPU feature, we can store pointer
>    tags in the top byte of each kernel pointer.

And for how long will this be OK?

Remembering:
  - AmigaBasic,
  - MacOS,
  - Emacs,
  - ...
They all tried to use the same trick, and did regret...
(AmigaBasic never survived this failure).

"Those who don't know history are doomed to repeat it."

Gr{oetje,eeting}s,

                        Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds


^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 00/14] khwasan: kernel hardware assisted address sanitizer
  2018-03-04  9:16 ` [RFC PATCH 00/14] khwasan: kernel hardware assisted address sanitizer Geert Uytterhoeven
@ 2018-03-04 11:44   ` Ingo Molnar
  2018-03-04 15:49     ` Geert Uytterhoeven
  0 siblings, 1 reply; 65+ messages in thread
From: Ingo Molnar @ 2018-03-04 11:44 UTC (permalink / raw)
  To: Geert Uytterhoeven
  Cc: Andrey Konovalov, Andrey Ryabinin, Alexander Potapenko,
	Dmitry Vyukov, Jonathan Corbet, Catalin Marinas, Will Deacon,
	Theodore Ts'o, Jan Kara, Christopher Li, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Masahiro Yamada, Michal Marek, Mark Rutland, Ard Biesheuvel,
	Yury Norov, Nick Desaulniers, Marc Zyngier, Bob Picco,
	Suzuki K Poulose, Kristina Martsenko, Punit Agrawal, Dave Martin,
	James Morse, Julien Thierry, Michael Weiser, Steve Capper,
	Thomas Gleixner, Sandipan Das, Paul Lawrence, David Woodhouse,
	Kees Cook, Josh Poimboeuf, Arnd Bergmann, kasan-dev, linux-doc,
	Linux Kernel Mailing List, Linux ARM, linux-ext4, linux-sparse,
	Linux MM, linux-kbuild, Kostya Serebryany, Evgeniy Stepanov,
	Lee Smith, Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan,
	Kees Cook, Jann Horn, Mark Brand


* Geert Uytterhoeven <geert@linux-m68k.org> wrote:

> Hi Andrey,
> 
> On Fri, Mar 2, 2018 at 8:44 PM, Andrey Konovalov <andreyknvl@google.com> wrote:
> > This patchset adds a new mode to KASAN, which is called KHWASAN (Kernel
> > HardWare assisted Address SANitizer). There's still some work to do and
> > there are a few TODOs in the code, so I'm publishing this as a RFC to
> > collect some initial feedback.
> >
> > The plan is to implement HWASan [1] for the kernel with the incentive,
> > that it's going to have comparable performance, but in the same time
> > consume much less memory, trading that off for somewhat imprecise bug
> > detection and being supported only for arm64.
> >
> > The overall idea of the approach used by KHWASAN is the following:
> >
> > 1. By using the Top Byte Ignore arm64 CPU feature, we can store pointer
> >    tags in the top byte of each kernel pointer.
> 
> And for how long will this be OK?

Firstly it's not for production kernels, it's a hardware accelerator for an 
intrusive debug feature, so it shouldn't really matter, right?

Secondly, if the top byte is lost and the other 56 bits can still be used that 
gives a virtual memory space of up to 65,536 TB, which should be enough for a few 
years in the arm64 space, right?

> Remembering:
>   - AmigaBasic,
>   - MacOS,
>   - Emacs,
>   - ...
> They all tried to use the same trick, and did regret...
> (AmigaBasic never survived this failure).

The 64-bit address space is really a lot larger, and it's a debug-info feature in 
any case.

Thanks,

	Ingo


^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 00/14] khwasan: kernel hardware assisted address sanitizer
  2018-03-04 11:44   ` Ingo Molnar
@ 2018-03-04 15:49     ` Geert Uytterhoeven
  2018-03-06 18:21       ` Andrey Konovalov
  0 siblings, 1 reply; 65+ messages in thread
From: Geert Uytterhoeven @ 2018-03-04 15:49 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Andrey Konovalov, Andrey Ryabinin, Alexander Potapenko,
	Dmitry Vyukov, Jonathan Corbet, Catalin Marinas, Will Deacon,
	Theodore Ts'o, Jan Kara, Christopher Li, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Masahiro Yamada, Michal Marek, Mark Rutland, Ard Biesheuvel,
	Yury Norov, Nick Desaulniers, Marc Zyngier, Bob Picco,
	Suzuki K Poulose, Kristina Martsenko, Punit Agrawal, Dave Martin,
	James Morse, Julien Thierry, Michael Weiser, Steve Capper,
	Thomas Gleixner, Sandipan Das, Paul Lawrence, David Woodhouse,
	Kees Cook, Josh Poimboeuf, Arnd Bergmann, kasan-dev, linux-doc,
	Linux Kernel Mailing List, Linux ARM, linux-ext4, linux-sparse,
	Linux MM, linux-kbuild, Kostya Serebryany, Evgeniy Stepanov,
	Lee Smith, Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan,
	Kees Cook, Jann Horn, Mark Brand

Hi Ingo,

On Sun, Mar 4, 2018 at 12:44 PM, Ingo Molnar <mingo@kernel.org> wrote:
> * Geert Uytterhoeven <geert@linux-m68k.org> wrote:
>> On Fri, Mar 2, 2018 at 8:44 PM, Andrey Konovalov <andreyknvl@google.com> wrote:
>> > This patchset adds a new mode to KASAN, which is called KHWASAN (Kernel
>> > HardWare assisted Address SANitizer). There's still some work to do and
>> > there are a few TODOs in the code, so I'm publishing this as a RFC to
>> > collect some initial feedback.
>> >
>> > The plan is to implement HWASan [1] for the kernel with the incentive,
>> > that it's going to have comparable performance, but in the same time
>> > consume much less memory, trading that off for somewhat imprecise bug
>> > detection and being supported only for arm64.
>> >
>> > The overall idea of the approach used by KHWASAN is the following:
>> >
>> > 1. By using the Top Byte Ignore arm64 CPU feature, we can store pointer
>> >    tags in the top byte of each kernel pointer.
>>
>> And for how long will this be OK?
>
> Firstly it's not for production kernels, it's a hardware accelerator for an
> intrusive debug feature, so it shouldn't really matter, right?

Sorry, I didn't know it was a debug feature.

> Secondly, if the top byte is lost and the other 56 bits can still be used that
> gives a virtual memory space of up to 65,536 TB, which should be enough for a few
> years in the arm64 space, right?
>
>> Remembering:
>>   - AmigaBasic,
>>   - MacOS,
>>   - Emacs,
>>   - ...
>> They all tried to use the same trick, and did regret...
>> (AmigaBasic never survived this failure).
>
> The 64-bit address space is really a lot larger, and it's a debug-info feature in
> any case.

So that gives us ca. 25 years, less when considering address randomization.
But as long as it stays a debug feature...

Gr{oetje,eeting}s,

                        Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds


^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 06/14] khwasan: enable top byte ignore for the kernel
  2018-03-02 19:44 ` [RFC PATCH 06/14] khwasan: enable top byte ignore for the kernel Andrey Konovalov
@ 2018-03-05 14:29   ` Mark Rutland
  2018-03-09 18:15     ` Andrey Konovalov
  2018-03-05 14:36   ` Mark Rutland
  1 sibling, 1 reply; 65+ messages in thread
From: Mark Rutland @ 2018-03-05 14:29 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Ard Biesheuvel, Yury Norov, Nick Desaulniers,
	Marc Zyngier, Bob Picco, Suzuki K Poulose, Kristina Martsenko,
	Punit Agrawal, Dave Martin, James Morse, Julien Thierry,
	Michael Weiser, Steve Capper, Ingo Molnar, Thomas Gleixner,
	Sandipan Das, Paul Lawrence, David Woodhouse, Kees Cook,
	Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann, kasan-dev,
	linux-doc, linux-kernel, linux-arm-kernel, linux-ext4,
	linux-sparse, linux-mm, linux-kbuild, Kostya Serebryany,
	Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley,
	Ruben Ayrapetyan, Kees Cook, Jann Horn, Mark Brand

On Fri, Mar 02, 2018 at 08:44:25PM +0100, Andrey Konovalov wrote:
> +#ifdef CONFIG_KASAN_TAGS
> +#define TCR_TBI_FLAGS (TCR_TBI0 | TCR_TBI1)
> +#else
> +#define TCR_TBI_FLAGS TCR_TBI0
> +#endif

Rather than pulling TBI0 into this, I think it'd make more sense to
have:

#ifdef CONFIG_KASAN_TAGS
#define KASAN_TCR_FLAGS	TCR_TBI1
#else
#define KASAN_TCR_FLAGS
#endif

> +
>  #define MAIR(attr, mt)	((attr) << ((mt) * 8))
>  
>  /*
> @@ -432,7 +438,7 @@ ENTRY(__cpu_setup)
>  	 * both user and kernel.
>  	 */
>  	ldr	x10, =TCR_TxSZ(VA_BITS) | TCR_CACHE_FLAGS | TCR_SMP_FLAGS | \
> -			TCR_TG_FLAGS | TCR_ASID16 | TCR_TBI0 | TCR_A1
> +			TCR_TG_FLAGS | TCR_ASID16 | TCR_TBI_FLAGS | TCR_A1

... and just append KASAN_TCR_FLAGS to the flags here.
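
i.e. something like (sketch):

	ldr	x10, =TCR_TxSZ(VA_BITS) | TCR_CACHE_FLAGS | TCR_SMP_FLAGS | \
			TCR_TG_FLAGS | TCR_ASID16 | TCR_TBI0 | TCR_A1 | \
			KASAN_TCR_FLAGS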

That's roughly what we do with ENDIAN_SET_EL1 for SCTLR_EL1.

Thanks,
Mark.


^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 07/14] khwasan: add tag related helper functions
  2018-03-02 19:44 ` [RFC PATCH 07/14] khwasan: add tag related helper functions Andrey Konovalov
@ 2018-03-05 14:32   ` Mark Rutland
  2018-03-06 18:31     ` Andrey Konovalov
  0 siblings, 1 reply; 65+ messages in thread
From: Mark Rutland @ 2018-03-05 14:32 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Ard Biesheuvel, Yury Norov, Nick Desaulniers,
	Marc Zyngier, Bob Picco, Suzuki K Poulose, Kristina Martsenko,
	Punit Agrawal, Dave Martin, James Morse, Julien Thierry,
	Michael Weiser, Steve Capper, Ingo Molnar, Thomas Gleixner,
	Sandipan Das, Paul Lawrence, David Woodhouse, Kees Cook,
	Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann, kasan-dev,
	linux-doc, linux-kernel, linux-arm-kernel, linux-ext4,
	linux-sparse, linux-mm, linux-kbuild, Kostya Serebryany,
	Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley,
	Ruben Ayrapetyan, Kees Cook, Jann Horn, Mark Brand

On Fri, Mar 02, 2018 at 08:44:26PM +0100, Andrey Konovalov wrote:
> +static DEFINE_PER_CPU(u32, prng_state);
> +
> +void khwasan_init(void)
> +{
> +	int cpu;
> +
> +	for_each_possible_cpu(cpu) {
> +		per_cpu(prng_state, cpu) = get_random_u32();
> +	}
> +	WRITE_ONCE(khwasan_enabled, 1);
> +}
> +
> +static inline u8 khwasan_random_tag(void)
> +{
> +	u32 state = this_cpu_read(prng_state);
> +
> +	state = 1664525 * state + 1013904223;
> +	this_cpu_write(prng_state, state);
> +
> +	return (u8)state;
> +}

Have you considered preemption here? Is the assumption that it happens
sufficiently rarely that cross-contaminating the prng state isn't a
problem?

Thanks,
Mark.


^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 06/14] khwasan: enable top byte ignore for the kernel
  2018-03-02 19:44 ` [RFC PATCH 06/14] khwasan: enable top byte ignore for the kernel Andrey Konovalov
  2018-03-05 14:29   ` Mark Rutland
@ 2018-03-05 14:36   ` Mark Rutland
  2018-03-06 14:24     ` Marc Zyngier
  2018-03-09 18:17     ` Andrey Konovalov
  1 sibling, 2 replies; 65+ messages in thread
From: Mark Rutland @ 2018-03-05 14:36 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Ard Biesheuvel, Yury Norov, Nick Desaulniers,
	Marc Zyngier, Bob Picco, Suzuki K Poulose, Kristina Martsenko,
	Punit Agrawal, Dave Martin, James Morse, Julien Thierry,
	Michael Weiser, Steve Capper, Ingo Molnar, Thomas Gleixner,
	Sandipan Das, Paul Lawrence, David Woodhouse, Kees Cook,
	Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann, kasan-dev,
	linux-doc, linux-kernel, linux-arm-kernel, linux-ext4,
	linux-sparse, linux-mm, linux-kbuild, Kostya Serebryany,
	Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley,
	Ruben Ayrapetyan, Kees Cook, Jann Horn, Mark Brand

On Fri, Mar 02, 2018 at 08:44:25PM +0100, Andrey Konovalov wrote:
> KHWASAN uses the Top Byte Ignore feature of arm64 CPUs to store a pointer
> tag in the top byte of each pointer. This commit enables the TCR_TBI1 bit,
> which enables Top Byte Ignore for the kernel, when KHWASAN is used.
> ---
>  arch/arm64/include/asm/pgtable-hwdef.h | 1 +
>  arch/arm64/mm/proc.S                   | 8 +++++++-
>  2 files changed, 8 insertions(+), 1 deletion(-)

Before it's safe to do this, I also think you'll need to fix up at
least:

* virt_to_phys()

* access_ok()

... and potentially others which assume that bits [63:56] of kernel
addresses are 0xff. For example, bits of the fault handling logic might
need fixups.
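
One illustrative option (a sketch, assuming tags only ever need stripping
there) would be an explicit untagging helper for such code:

	/* Hypothetical helper: sign-extend from bit 55, turning a tagged
	 * kernel VA back into its canonical 0xff-top-byte form. */
	#define untagged_addr(addr) \
		((__typeof__(addr))sign_extend64((u64)(addr), 55))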

Thanks,
Mark.


^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 08/14] khwasan: perform untagged pointers comparison in krealloc
  2018-03-02 19:44 ` [RFC PATCH 08/14] khwasan: perform untagged pointers comparison in krealloc Andrey Konovalov
@ 2018-03-05 14:39   ` Mark Rutland
  2018-03-06 18:33     ` Andrey Konovalov
  0 siblings, 1 reply; 65+ messages in thread
From: Mark Rutland @ 2018-03-05 14:39 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Ard Biesheuvel, Yury Norov, Nick Desaulniers,
	Marc Zyngier, Bob Picco, Suzuki K Poulose, Kristina Martsenko,
	Punit Agrawal, Dave Martin, James Morse, Julien Thierry,
	Michael Weiser, Steve Capper, Ingo Molnar, Thomas Gleixner,
	Sandipan Das, Paul Lawrence, David Woodhouse, Kees Cook,
	Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann, kasan-dev,
	linux-doc, linux-kernel, linux-arm-kernel, linux-ext4,
	linux-sparse, linux-mm, linux-kbuild, Kostya Serebryany,
	Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley,
	Ruben Ayrapetyan, Kees Cook, Jann Horn, Mark Brand

On Fri, Mar 02, 2018 at 08:44:27PM +0100, Andrey Konovalov wrote:
> The krealloc function checks whether the same buffer was reused or a new one
> allocated by comparing kernel pointers. KHWASAN changes memory tag on the
> krealloc'ed chunk of memory and therefore also changes the pointer tag of
> the returned pointer. Therefore we need to perform comparison on untagged
> (with tags reset) pointers to check whether it's the same memory region or
> not.
> ---
>  mm/slab_common.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index a33e61315ca6..7c829cbda1a5 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -1494,7 +1494,7 @@ void *krealloc(const void *p, size_t new_size, gfp_t flags)
>  	}
>  
>  	ret = __do_krealloc(p, new_size, flags);
> -	if (ret && p != ret)
> +	if (ret && khwasan_reset_tag((void *)p) != khwasan_reset_tag(ret))

Why doesn't khwasan_reset_tag() take a const void *, like
khwasan_set_tag() does? That way, this cast wouldn't be necessary.

Thanks,
Mark.


^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 09/14] khwasan: add hooks implementation
  2018-03-02 19:44 ` [RFC PATCH 09/14] khwasan: add hooks implementation Andrey Konovalov
@ 2018-03-05 14:44   ` Mark Rutland
  2018-03-06 18:38     ` Andrey Konovalov
  2018-03-13 15:05   ` Alexander Potapenko
  2018-03-20  0:44   ` Anthony Yznaga
  2 siblings, 1 reply; 65+ messages in thread
From: Mark Rutland @ 2018-03-05 14:44 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Ard Biesheuvel, Yury Norov, Nick Desaulniers,
	Marc Zyngier, Bob Picco, Suzuki K Poulose, Kristina Martsenko,
	Punit Agrawal, Dave Martin, James Morse, Julien Thierry,
	Michael Weiser, Steve Capper, Ingo Molnar, Thomas Gleixner,
	Sandipan Das, Paul Lawrence, David Woodhouse, Kees Cook,
	Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann, kasan-dev,
	linux-doc, linux-kernel, linux-arm-kernel, linux-ext4,
	linux-sparse, linux-mm, linux-kbuild, Kostya Serebryany,
	Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley,
	Ruben Ayrapetyan, Kees Cook, Jann Horn, Mark Brand

On Fri, Mar 02, 2018 at 08:44:28PM +0100, Andrey Konovalov wrote:
>  void check_memory_region(unsigned long addr, size_t size, bool write,
>  				unsigned long ret_ip)
>  {
> +	u8 tag;
> +	u8 *shadow_first, *shadow_last, *shadow;
> +	void *untagged_addr;
> +
> +	tag = get_tag((void *)addr);

Please make get_tag() take a const void *, then this cast can go.

> +	untagged_addr = reset_tag((void *)addr);

Likewise for reset_tag().

> +	shadow_first = (u8 *)kasan_mem_to_shadow(untagged_addr);
> +	shadow_last = (u8 *)kasan_mem_to_shadow(untagged_addr + size - 1);

I don't think these u8 * casts are necessary, since
kasan_mem_to_shadow() returns a void *.

> +
> +	for (shadow = shadow_first; shadow <= shadow_last; shadow++) {
> +		if (*shadow != tag) {
> +			/* Report invalid-access bug here */
> +			return;

Huh? Should that be a TODO?

Thanks,
Mark.


^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 11/14] khwasan: add brk handler for inline instrumentation
  2018-03-02 19:44 ` [RFC PATCH 11/14] khwasan: add brk handler for inline instrumentation Andrey Konovalov
@ 2018-03-05 14:51   ` Mark Rutland
  2018-03-23 15:59     ` Andrey Konovalov
  0 siblings, 1 reply; 65+ messages in thread
From: Mark Rutland @ 2018-03-05 14:51 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Ard Biesheuvel, Yury Norov, Nick Desaulniers,
	Marc Zyngier, Bob Picco, Suzuki K Poulose, Kristina Martsenko,
	Punit Agrawal, Dave Martin, James Morse, Julien Thierry,
	Michael Weiser, Steve Capper, Ingo Molnar, Thomas Gleixner,
	Sandipan Das, Paul Lawrence, David Woodhouse, Kees Cook,
	Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann, kasan-dev,
	linux-doc, linux-kernel, linux-arm-kernel, linux-ext4,
	linux-sparse, linux-mm, linux-kbuild, Kostya Serebryany,
	Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley,
	Ruben Ayrapetyan, Kees Cook, Jann Horn, Mark Brand

On Fri, Mar 02, 2018 at 08:44:30PM +0100, Andrey Konovalov wrote:
> KHWASAN inline instrumentation mode (which embeds checks of shadow memory
> into the generated code, instead of inserting a callback) generates a brk
> instruction when a tag mismatch is detected.

The compiler generates the BRK?

I'm a little worried about the ABI implications of that. So far we've
assumed that for the kernel side, the BRK space is completely under our
control.

How much does this save, compared to having a callback?

> This commit adds a KHWASAN brk handler that decodes the immediate value
> passed to the brk instructions (to extract information about the memory
> access that triggered the mismatch), reads the register values (x0 contains
> the guilty address) and reports the bug.
> ---
>  arch/arm64/include/asm/brk-imm.h |  2 ++
>  arch/arm64/kernel/traps.c        | 40 ++++++++++++++++++++++++++++++++
>  2 files changed, 42 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/brk-imm.h b/arch/arm64/include/asm/brk-imm.h
> index ed693c5bcec0..e4a7013321dc 100644
> --- a/arch/arm64/include/asm/brk-imm.h
> +++ b/arch/arm64/include/asm/brk-imm.h
> @@ -16,10 +16,12 @@
>   * 0x400: for dynamic BRK instruction
>   * 0x401: for compile time BRK instruction
>   * 0x800: kernel-mode BUG() and WARN() traps
> + * 0x9xx: KHWASAN trap (allowed values 0x900 - 0x9ff)
>   */
>  #define FAULT_BRK_IMM			0x100
>  #define KGDB_DYN_DBG_BRK_IMM		0x400
>  #define KGDB_COMPILED_DBG_BRK_IMM	0x401
>  #define BUG_BRK_IMM			0x800
> +#define KHWASAN_BRK_IMM			0x900
>  
>  #endif
> diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
> index eb2d15147e8d..5df8cdf5af13 100644
> --- a/arch/arm64/kernel/traps.c
> +++ b/arch/arm64/kernel/traps.c
> @@ -35,6 +35,7 @@
>  #include <linux/sizes.h>
>  #include <linux/syscalls.h>
>  #include <linux/mm_types.h>
> +#include <linux/kasan.h>
>  
>  #include <asm/atomic.h>
>  #include <asm/bug.h>
> @@ -771,6 +772,38 @@ static struct break_hook bug_break_hook = {
>  	.fn = bug_handler,
>  };
>  
> +#ifdef CONFIG_KASAN_TAGS
> +static int khwasan_handler(struct pt_regs *regs, unsigned int esr)
> +{
> +	bool recover = esr & 0x20;
> +	bool write = esr & 0x10;

Can you please add mnemonics for these, e.g.

#define KHWASAN_ESR_RECOVER		0x20
#define KHWASAN_ESR_WRITE		0x10

> +	size_t size = 1 << (esr & 0xf);

#define KHWASAN_ESR_SIZE_MASK		0xf
#define KHWASAN_ESR_SIZE(esr)	(1 << ((esr) & KHWASAN_ESR_SIZE_MASK))
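
so the handler prologue could then read (sketch):

	bool recover = esr & KHWASAN_ESR_RECOVER;
	bool write = esr & KHWASAN_ESR_WRITE;
	size_t size = KHWASAN_ESR_SIZE(esr);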

> +	u64 addr = regs->regs[0];
> +	u64 pc = regs->pc;
> +
> +	if (user_mode(regs))
> +		return DBG_HOOK_ERROR;
> +
> +	khwasan_report(addr, size, write, pc);
> +
> +	if (!recover)
> +		die("Oops - KHWASAN", regs, 0);

Could you elaborate on what "recover" means, and why it's up to the
compiler to decide if the kernel should die()?

> +
> +	/* If thread survives, skip over the BUG instruction and continue: */
> +	arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE);

This is for fast-forwarding user instruction streams, and isn't correct
to call for kernel faults (as it'll mess up the userspace single step
logic).

Thanks,
Mark.


^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 14/14] khwasan: default the instrumentation mode to inline
  2018-03-02 19:44 ` [RFC PATCH 14/14] khwasan: default the instrumentation mode to inline Andrey Konovalov
@ 2018-03-05 14:54   ` Mark Rutland
  2018-03-09 18:06     ` Andrey Konovalov
  2018-03-13 14:44   ` Alexander Potapenko
  1 sibling, 1 reply; 65+ messages in thread
From: Mark Rutland @ 2018-03-05 14:54 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Ard Biesheuvel, Yury Norov, Nick Desaulniers,
	Marc Zyngier, Bob Picco, Suzuki K Poulose, Kristina Martsenko,
	Punit Agrawal, Dave Martin, James Morse, Julien Thierry,
	Michael Weiser, Steve Capper, Ingo Molnar, Thomas Gleixner,
	Sandipan Das, Paul Lawrence, David Woodhouse, Kees Cook,
	Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann, kasan-dev,
	linux-doc, linux-kernel, linux-arm-kernel, linux-ext4,
	linux-sparse, linux-mm, linux-kbuild, Kostya Serebryany,
	Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley,
	Ruben Ayrapetyan, Kees Cook, Jann Horn, Mark Brand

On Fri, Mar 02, 2018 at 08:44:33PM +0100, Andrey Konovalov wrote:
> There are two reasons to use outline instrumentation:
> 1. Outline instrumentation reduces the size of the kernel text, and should
>    be used where this size matters.
> 2. Outline instrumentation is less invasive and can be used for debugging
>    for KASAN developers, when it's not clear whether some issue is caused
>    by KASAN or by something else.
> 
> For the rest cases inline instrumentation is preferrable, since it's
> faster.
> 
> This patch changes the default instrumentation mode to inline.
> ---
>  lib/Kconfig.kasan | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
> index ab34e7d7d3a7..8ea6ae26b4a3 100644
> --- a/lib/Kconfig.kasan
> +++ b/lib/Kconfig.kasan
> @@ -70,7 +70,7 @@ config KASAN_EXTRA
>  choice
>  	prompt "Instrumentation type"
>  	depends on KASAN
> -	default KASAN_OUTLINE
> +	default KASAN_INLINE

Some compilers don't support KASAN_INLINE, but do support KASAN_OUTLINE.
IIRC that includes the latest clang release, but I could be wrong.

If that's the case, changing the default here does not seem ideal.

Thanks,
Mark.


^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 06/14] khwasan: enable top byte ignore for the kernel
  2018-03-05 14:36   ` Mark Rutland
@ 2018-03-06 14:24     ` Marc Zyngier
  2018-03-09 18:21       ` Andrey Konovalov
  2018-03-09 18:17     ` Andrey Konovalov
  1 sibling, 1 reply; 65+ messages in thread
From: Marc Zyngier @ 2018-03-06 14:24 UTC (permalink / raw)
  To: Mark Rutland, Andrey Konovalov
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Ard Biesheuvel, Yury Norov, Nick Desaulniers,
	Bob Picco, Suzuki K Poulose, Kristina Martsenko, Punit Agrawal,
	Dave Martin, James Morse, Julien Thierry, Michael Weiser,
	Steve Capper, Ingo Molnar, Thomas Gleixner, Sandipan Das,
	Paul Lawrence, David Woodhouse, Kees Cook, Geert Uytterhoeven,
	Josh Poimboeuf, Arnd Bergmann, kasan-dev, linux-doc,
	linux-kernel, linux-arm-kernel, linux-ext4, linux-sparse,
	linux-mm, linux-kbuild, Kostya Serebryany, Evgeniy Stepanov,
	Lee Smith, Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan,
	Kees Cook, Jann Horn, Mark Brand

On 05/03/18 14:36, Mark Rutland wrote:
> On Fri, Mar 02, 2018 at 08:44:25PM +0100, Andrey Konovalov wrote:
>> KHWASAN uses the Top Byte Ignore feature of arm64 CPUs to store a pointer
>> tag in the top byte of each pointer. This commit enables the TCR_TBI1 bit,
>> which enables Top Byte Ignore for the kernel, when KHWASAN is used.
>> ---
>>  arch/arm64/include/asm/pgtable-hwdef.h | 1 +
>>  arch/arm64/mm/proc.S                   | 8 +++++++-
>>  2 files changed, 8 insertions(+), 1 deletion(-)
> 
> Before it's safe to do this, I also think you'll need to fix up at
> least:
> 
> * virt_to_phys()
> 
> * access_ok()
> 
> ... and potentially others which assume that bits [63:56] of kernel
> addresses are 0xff. For example, bits of the fault handling logic might
> need fixups.

Indeed. I have the ugly feeling that KVM (and anything that lives in a
separate address space) will not be very happy with that change, as it
derives HYP VAs from the kernel VA, and doesn't expect lingering bits.
Nothing that cannot be addressed, but worth keeping in mind.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...


^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 00/14] khwasan: kernel hardware assisted address sanitizer
  2018-03-04 15:49     ` Geert Uytterhoeven
@ 2018-03-06 18:21       ` Andrey Konovalov
  0 siblings, 0 replies; 65+ messages in thread
From: Andrey Konovalov @ 2018-03-06 18:21 UTC (permalink / raw)
  To: Geert Uytterhoeven
  Cc: Ingo Molnar, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Mark Rutland, Ard Biesheuvel, Yury Norov,
	Nick Desaulniers, Marc Zyngier, Bob Picco, Suzuki K Poulose,
	Kristina Martsenko, Punit Agrawal, Dave Martin, James Morse,
	Julien Thierry, Michael Weiser, Steve Capper, Thomas Gleixner,
	Sandipan Das, Paul Lawrence, David Woodhouse, Kees Cook,
	Josh Poimboeuf, Arnd Bergmann, kasan-dev, linux-doc,
	Linux Kernel Mailing List, Linux ARM, linux-ext4, linux-sparse,
	Linux MM, linux-kbuild, Kostya Serebryany, Evgeniy Stepanov,
	Lee Smith, Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan,
	Kees Cook, Jann Horn, Mark Brand

On Sun, Mar 4, 2018 at 4:49 PM, Geert Uytterhoeven <geert@linux-m68k.org> wrote:
> Hi Ingo,
>
> On Sun, Mar 4, 2018 at 12:44 PM, Ingo Molnar <mingo@kernel.org> wrote:
>> * Geert Uytterhoeven <geert@linux-m68k.org> wrote:
>>> On Fri, Mar 2, 2018 at 8:44 PM, Andrey Konovalov <andreyknvl@google.com> wrote:
>>> >
>>> > The overall idea of the approach used by KHWASAN is the following:
>>> >
>>> > 1. By using the Top Byte Ignore arm64 CPU feature, we can store pointer
>>> >    tags in the top byte of each kernel pointer.
>>>
>>> And for how long will this be OK?
>>
>> Firstly it's not for production kernels, it's a hardware accelerator for an
>> intrusive debug feature, so it shouldn't really matter, right?
>
> Sorry, I didn't know it was a debug feature.

Hi!

Sorry, I'll add a description of what KASAN is in the next revision to
avoid confusion.

Thanks!


^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 07/14] khwasan: add tag related helper functions
  2018-03-05 14:32   ` Mark Rutland
@ 2018-03-06 18:31     ` Andrey Konovalov
  2018-03-07 18:16       ` Christopher Lameter
  2018-03-08 11:20       ` Mark Rutland
  0 siblings, 2 replies; 65+ messages in thread
From: Andrey Konovalov @ 2018-03-06 18:31 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Ard Biesheuvel, Yury Norov, Nick Desaulniers,
	Marc Zyngier, Bob Picco, Suzuki K Poulose, Kristina Martsenko,
	Punit Agrawal, Dave Martin, James Morse, Julien Thierry,
	Michael Weiser, Steve Capper, Ingo Molnar, Thomas Gleixner,
	Sandipan Das, Paul Lawrence, David Woodhouse, Kees Cook,
	Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann, kasan-dev,
	linux-doc, LKML, Linux ARM, linux-ext4, linux-sparse,
	Linux Memory Management List, Linux Kbuild mailing list,
	Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Kees Cook,
	Jann Horn, Mark Brand

On Mon, Mar 5, 2018 at 3:32 PM, Mark Rutland <mark.rutland@arm.com> wrote:
> On Fri, Mar 02, 2018 at 08:44:26PM +0100, Andrey Konovalov wrote:
>> +static DEFINE_PER_CPU(u32, prng_state);
>> +
>> +void khwasan_init(void)
>> +{
>> +     int cpu;
>> +
>> +     for_each_possible_cpu(cpu) {
>> +             per_cpu(prng_state, cpu) = get_random_u32();
>> +     }
>> +     WRITE_ONCE(khwasan_enabled, 1);
>> +}
>> +
>> +static inline u8 khwasan_random_tag(void)
>> +{
>> +     u32 state = this_cpu_read(prng_state);
>> +
>> +     state = 1664525 * state + 1013904223;
>> +     this_cpu_write(prng_state, state);
>> +
>> +     return (u8)state;
>> +}
>
> Have you considered preemption here? Is the assumption that it happens
> sufficiently rarely that cross-contaminating the prng state isn't a
> problem?

Hi Mark!

Yes, I have. If a preemption happens between this_cpu_read and
this_cpu_write, the only side effect is that we'll give the same tag to
a few objects allocated in different contexts. Since KHWASAN is meant
to be used as a probabilistic bug-detection debug feature, this doesn't
seem to have a serious negative impact.

I'll add a comment about this though.
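
For reference, a strictly preemption-safe variant would only be a small
change, e.g. the sketch below, though per the above it's arguably
unnecessary:

	static inline u8 khwasan_random_tag(void)
	{
		u32 state;

		/* Disable preemption so that the read-modify-write of the
		 * per-cpu state can't be split across CPUs by migration. */
		preempt_disable();
		state = 1664525 * __this_cpu_read(prng_state) + 1013904223;
		__this_cpu_write(prng_state, state);
		preempt_enable();

		return (u8)state;
	}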

Thanks!


^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 08/14] khwasan: perform untagged pointers comparison in krealloc
  2018-03-05 14:39   ` Mark Rutland
@ 2018-03-06 18:33     ` Andrey Konovalov
  0 siblings, 0 replies; 65+ messages in thread
From: Andrey Konovalov @ 2018-03-06 18:33 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Ard Biesheuvel, Yury Norov, Nick Desaulniers,
	Marc Zyngier, Bob Picco, Suzuki K Poulose, Kristina Martsenko,
	Punit Agrawal, Dave Martin, James Morse, Julien Thierry,
	Michael Weiser, Steve Capper, Ingo Molnar, Thomas Gleixner,
	Sandipan Das, Paul Lawrence, David Woodhouse, Kees Cook,
	Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann, kasan-dev,
	linux-doc, LKML, Linux ARM, linux-ext4, linux-sparse,
	Linux Memory Management List, Linux Kbuild mailing list,
	Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Kees Cook,
	Jann Horn, Mark Brand

On Mon, Mar 5, 2018 at 3:39 PM, Mark Rutland <mark.rutland@arm.com> wrote:
> On Fri, Mar 02, 2018 at 08:44:27PM +0100, Andrey Konovalov wrote:
>>       ret = __do_krealloc(p, new_size, flags);
>> -     if (ret && p != ret)
>> +     if (ret && khwasan_reset_tag((void *)p) != khwasan_reset_tag(ret))
>
> Why doesn't khwasan_reset_tag() take a const void *, like
> khwasan_set_tag() does? That way, this cast wouldn't be necessary.

Will do, thanks!
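
For what it's worth, a minimal sketch of the suggested signature change
(the helper body is hypothetical; it assumes the convention from this
thread that kernel addresses canonically have 0xff in bits [63:56]):

	static inline void *khwasan_reset_tag(const void *addr)
	{
		/* Restore the canonical top byte, discarding the tag; taking
		 * a const pointer removes the need for casts at call sites. */
		return (void *)((u64)addr | (0xffUL << 56));
	}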


^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 09/14] khwasan: add hooks implementation
  2018-03-05 14:44   ` Mark Rutland
@ 2018-03-06 18:38     ` Andrey Konovalov
  2018-03-08 11:25       ` Mark Rutland
  0 siblings, 1 reply; 65+ messages in thread
From: Andrey Konovalov @ 2018-03-06 18:38 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Ard Biesheuvel, Yury Norov, Nick Desaulniers,
	Marc Zyngier, Bob Picco, Suzuki K Poulose, Kristina Martsenko,
	Punit Agrawal, Dave Martin, James Morse, Julien Thierry,
	Michael Weiser, Steve Capper, Ingo Molnar, Thomas Gleixner,
	Sandipan Das, Paul Lawrence, David Woodhouse, Kees Cook,
	Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann, kasan-dev,
	linux-doc, LKML, Linux ARM, linux-ext4, linux-sparse,
	Linux Memory Management List, Linux Kbuild mailing list,
	Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Kees Cook,
	Jann Horn, Mark Brand

On Mon, Mar 5, 2018 at 3:44 PM, Mark Rutland <mark.rutland@arm.com> wrote:
> On Fri, Mar 02, 2018 at 08:44:28PM +0100, Andrey Konovalov wrote:
>>  void check_memory_region(unsigned long addr, size_t size, bool write,
>>                               unsigned long ret_ip)
>>  {
>> +     u8 tag;
>> +     u8 *shadow_first, *shadow_last, *shadow;
>> +     void *untagged_addr;
>> +
>> +     tag = get_tag((void *)addr);
>
> Please make get_tag() take a const void *, then this cast can go.

Will do in v2.

>
>> +     untagged_addr = reset_tag((void *)addr);
>
> Likewise for reset_tag().

Ack.

>
>> +     shadow_first = (u8 *)kasan_mem_to_shadow(untagged_addr);
>> +     shadow_last = (u8 *)kasan_mem_to_shadow(untagged_addr + size - 1);
>
> I don't think these u8 * casts are necessary, since
> kasan_mem_to_shadow() returns a void *.

Ack.

>
>> +
>> +     for (shadow = shadow_first; shadow <= shadow_last; shadow++) {
>> +             if (*shadow != tag) {
>> +                     /* Report invalid-access bug here */
>> +                     return;
>
> Huh? Should that be a TODO?

This is fixed in one of the next commits. I decided to split the main
runtime logic from the reporting parts, so this comment is a
placeholder, which is replaced with the proper error reporting
function call later in the patch series. I can make it a /* TODO */
comment if you think that looks better.

>
> Thanks,
> Mark.


^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 07/14] khwasan: add tag related helper functions
  2018-03-06 18:31     ` Andrey Konovalov
@ 2018-03-07 18:16       ` Christopher Lameter
  2018-03-08  9:09         ` Dmitry Vyukov
  2018-03-08 11:20       ` Mark Rutland
  1 sibling, 1 reply; 65+ messages in thread
From: Christopher Lameter @ 2018-03-07 18:16 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Mark Rutland, Andrey Ryabinin, Alexander Potapenko,
	Dmitry Vyukov, Jonathan Corbet, Catalin Marinas, Will Deacon,
	Theodore Ts'o, Jan Kara, Christopher Li, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Ard Biesheuvel, Yury Norov, Nick Desaulniers,
	Marc Zyngier, Bob Picco, Suzuki K Poulose, Kristina Martsenko,
	Punit Agrawal, Dave Martin, James Morse, Julien Thierry,
	Michael Weiser, Steve Capper, Ingo Molnar, Thomas Gleixner,
	Sandipan Das, Paul Lawrence, David Woodhouse, Kees Cook,
	Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann, kasan-dev,
	linux-doc, LKML, Linux ARM, linux-ext4, linux-sparse,
	Linux Memory Management List, Linux Kbuild mailing list,
	Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Kees Cook,
	Jann Horn, Mark Brand


On Tue, 6 Mar 2018, Andrey Konovalov wrote:

> >> +     u32 state = this_cpu_read(prng_state);
> >> +
> >> +     state = 1664525 * state + 1013904223;
> >> +     this_cpu_write(prng_state, state);
> >
> > Have you considered preemption here? Is the assumption that it happens
> > sufficiently rarely that cross-contaminating the prng state isn't a
> > problem?
>
> Hi Mark!
>
> Yes, I have. If a preemption happens between this_cpu_read and
> this_cpu_write, the only side effect is that we'll give a few
> objects allocated in different contexts the same tag. Since KHWASAN is
> meant to be used as a probabilistic bug-detection debug feature, this
> doesn't seem to have a serious negative impact.
>
> I'll add a comment about this though.

You could use this_cpu_cmpxchg here to make it a bit better but it
probably does not matter.
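
A minimal sketch of the cmpxchg variant, for reference (not from the
patch; it reuses the prng_state per-CPU variable and the LCG constants
quoted above):

	static inline u8 khwasan_random_tag(void)
	{
		u32 old, new;

		/* Retry if another context updated the state in between */
		do {
			old = this_cpu_read(prng_state);
			new = 1664525 * old + 1013904223;
		} while (this_cpu_cmpxchg(prng_state, old, new) != old);

		return (u8)new;
	}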



^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 07/14] khwasan: add tag related helper functions
  2018-03-07 18:16       ` Christopher Lameter
@ 2018-03-08  9:09         ` Dmitry Vyukov
  0 siblings, 0 replies; 65+ messages in thread
From: Dmitry Vyukov @ 2018-03-08  9:09 UTC (permalink / raw)
  To: Christopher Lameter
  Cc: Andrey Konovalov, Mark Rutland, Andrey Ryabinin,
	Alexander Potapenko, Jonathan Corbet, Catalin Marinas,
	Will Deacon, Theodore Ts'o, Jan Kara, Christopher Li,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Masahiro Yamada, Michal Marek, Ard Biesheuvel, Yury Norov,
	Nick Desaulniers, Marc Zyngier, Bob Picco, Suzuki K Poulose,
	Kristina Martsenko, Punit Agrawal, Dave Martin, James Morse,
	Julien Thierry, Michael Weiser, Steve Capper, Ingo Molnar,
	Thomas Gleixner, Sandipan Das, Paul Lawrence, David Woodhouse,
	Kees Cook, Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann,
	kasan-dev, linux-doc, LKML, Linux ARM, linux-ext4, linux-sparse,
	Linux Memory Management List, Linux Kbuild mailing list,
	Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Kees Cook,
	Jann Horn, Mark Brand

On Wed, Mar 7, 2018 at 7:16 PM, Christopher Lameter <cl@linux.com> wrote:
>
> On Tue, 6 Mar 2018, Andrey Konovalov wrote:
>
>> >> +     u32 state = this_cpu_read(prng_state);
>> >> +
>> >> +     state = 1664525 * state + 1013904223;
>> >> +     this_cpu_write(prng_state, state);
>> >
>> > Have you considered preemption here? Is the assumption that it happens
>> > sufficiently rarely that cross-contaminating the prng state isn't a
>> > problem?
>>
>> Hi Mark!
>>
>> Yes, I have. If a preemption happens between this_cpu_read and
>> this_cpu_write, the only side effect is that we'll give a few
>> objects allocated in different contexts the same tag. Since KHWASAN is
>> meant to be used as a probabilistic bug-detection debug feature, this
>> doesn't seem to have a serious negative impact.
>>
>> I'll add a comment about this though.
>
> You could use this_cpu_cmpxchg here to make it a bit better but it
> probably does not matter.

Hi,

The non-atomic RMW sequence is not just "doesn't seem to have a serious
negative impact"; it in fact has a positive effect.
Ideally the tags would use strong randomness to prevent any attempts to
predict them during explicit exploit attempts. But strong randomness
is expensive, and we made an intentional trade-off to use a PRNG (this
may potentially be revised in the future, but for now we don't have
enough info to do it). In this context, interrupts that randomly skew
the PRNG at unpredictable points do only good. cmpxchg would also lead
to skewing, but the non-atomic sequence allows more non-determinism
(and is maybe a touch less expensive?). This probably deserves a
comment, though.

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 07/14] khwasan: add tag related helper functions
  2018-03-06 18:31     ` Andrey Konovalov
  2018-03-07 18:16       ` Christopher Lameter
@ 2018-03-08 11:20       ` Mark Rutland
  1 sibling, 0 replies; 65+ messages in thread
From: Mark Rutland @ 2018-03-08 11:20 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Ard Biesheuvel, Yury Norov, Nick Desaulniers,
	Marc Zyngier, Bob Picco, Suzuki K Poulose, Kristina Martsenko,
	Punit Agrawal, Dave Martin, James Morse, Julien Thierry,
	Michael Weiser, Steve Capper, Ingo Molnar, Thomas Gleixner,
	Sandipan Das, Paul Lawrence, David Woodhouse, Kees Cook,
	Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann, kasan-dev,
	linux-doc, LKML, Linux ARM, linux-ext4, linux-sparse,
	Linux Memory Management List, Linux Kbuild mailing list,
	Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Kees Cook,
	Jann Horn, Mark Brand

On Tue, Mar 06, 2018 at 07:31:16PM +0100, Andrey Konovalov wrote:
> On Mon, Mar 5, 2018 at 3:32 PM, Mark Rutland <mark.rutland@arm.com> wrote:
> > On Fri, Mar 02, 2018 at 08:44:26PM +0100, Andrey Konovalov wrote:
> >> +static DEFINE_PER_CPU(u32, prng_state);
> >> +
> >> +void khwasan_init(void)
> >> +{
> >> +     int cpu;
> >> +
> >> +     for_each_possible_cpu(cpu) {
> >> +             per_cpu(prng_state, cpu) = get_random_u32();
> >> +     }
> >> +     WRITE_ONCE(khwasan_enabled, 1);
> >> +}
> >> +
> >> +static inline u8 khwasan_random_tag(void)
> >> +{
> >> +     u32 state = this_cpu_read(prng_state);
> >> +
> >> +     state = 1664525 * state + 1013904223;
> >> +     this_cpu_write(prng_state, state);
> >> +
> >> +     return (u8)state;
> >> +}
> >
> > Have you considered preemption here? Is the assumption that it happens
> > sufficiently rarely that cross-contaminating the prng state isn't a
> > problem?
> 
> Hi Mark!
> 
> Yes, I have. If a preemption happens between this_cpu_read and
> this_cpu_write, the only side effect is that we'll give a few
> objects allocated in different contexts the same tag. Since KHWASAN is
> meant to be used as a probabilistic bug-detection debug feature, this
> doesn't seem to have a serious negative impact.

Sure, just wanted to check that was the intent.

> I'll add a comment about this though.

That would be great!

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 09/14] khwasan: add hooks implementation
  2018-03-06 18:38     ` Andrey Konovalov
@ 2018-03-08 11:25       ` Mark Rutland
  2018-03-09 18:10         ` Andrey Konovalov
  0 siblings, 1 reply; 65+ messages in thread
From: Mark Rutland @ 2018-03-08 11:25 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Ard Biesheuvel, Yury Norov, Nick Desaulniers,
	Marc Zyngier, Bob Picco, Suzuki K Poulose, Kristina Martsenko,
	Punit Agrawal, Dave Martin, James Morse, Julien Thierry,
	Michael Weiser, Steve Capper, Ingo Molnar, Thomas Gleixner,
	Sandipan Das, Paul Lawrence, David Woodhouse, Kees Cook,
	Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann, kasan-dev,
	linux-doc, LKML, Linux ARM, linux-ext4, linux-sparse,
	Linux Memory Management List, Linux Kbuild mailing list,
	Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Kees Cook,
	Jann Horn, Mark Brand

On Tue, Mar 06, 2018 at 07:38:08PM +0100, Andrey Konovalov wrote:
> On Mon, Mar 5, 2018 at 3:44 PM, Mark Rutland <mark.rutland@arm.com> wrote:
> > On Fri, Mar 02, 2018 at 08:44:28PM +0100, Andrey Konovalov wrote:
> >> +
> >> +     for (shadow = shadow_first; shadow <= shadow_last; shadow++) {
> >> +             if (*shadow != tag) {
> >> +                     /* Report invalid-access bug here */
> >> +                     return;
> >
> > Huh? Should that be a TODO?
> 
> This is fixed in one of the next commits. I decided to split the main
> runtime logic from the reporting parts, so this comment is a
> placeholder, which is replaced with the proper error reporting
> function call later in the patch series. I can make it a /* TODO */
> comment if you think that looks better.

It might be preferable to introduce the report functions first (i.e.
swap this patch with the next one).

Those will be unused, but since they're not static, you shouldn't get
any build warnings. Then the hooks can call the report functions as soon
as they're introduced.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 14/14] khwasan: default the instrumentation mode to inline
  2018-03-05 14:54   ` Mark Rutland
@ 2018-03-09 18:06     ` Andrey Konovalov
  2018-03-09 19:18       ` Mark Rutland
  0 siblings, 1 reply; 65+ messages in thread
From: Andrey Konovalov @ 2018-03-09 18:06 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Ard Biesheuvel, Yury Norov, Nick Desaulniers,
	Marc Zyngier, Bob Picco, Suzuki K Poulose, Kristina Martsenko,
	Punit Agrawal, Dave Martin, James Morse, Julien Thierry,
	Michael Weiser, Steve Capper, Ingo Molnar, Thomas Gleixner,
	Sandipan Das, Paul Lawrence, David Woodhouse, Kees Cook,
	Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann, kasan-dev,
	linux-doc, LKML, Linux ARM, linux-ext4, linux-sparse,
	Linux Memory Management List, Linux Kbuild mailing list,
	Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Kees Cook,
	Jann Horn, Mark Brand

On Mon, Mar 5, 2018 at 3:54 PM, Mark Rutland <mark.rutland@arm.com> wrote:
> On Fri, Mar 02, 2018 at 08:44:33PM +0100, Andrey Konovalov wrote:
>> There are two reasons to use outline instrumentation:
>> 1. Outline instrumentation reduces the size of the kernel text, and should
>>    be used where this size matters.
>> 2. Outline instrumentation is less invasive and can be used for debugging
>>    for KASAN developers, when it's not clear whether some issue is caused
>>    by KASAN or by something else.
>>
>> For the remaining cases, inline instrumentation is preferable, since
>> it's faster.
>>
>> This patch changes the default instrumentation mode to inline.
>> ---
>>  lib/Kconfig.kasan | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
>> index ab34e7d7d3a7..8ea6ae26b4a3 100644
>> --- a/lib/Kconfig.kasan
>> +++ b/lib/Kconfig.kasan
>> @@ -70,7 +70,7 @@ config KASAN_EXTRA
>>  choice
>>       prompt "Instrumentation type"
>>       depends on KASAN
>> -     default KASAN_OUTLINE
>> +     default KASAN_INLINE
>
> Some compilers don't support KASAN_INLINE, but do support KASAN_OUTLINE.
> IIRC that includes the latest clang release, but I could be wrong.
>
> If that's the case, changing the default here does not seem ideal.
>

Hi Mark!

GCC before 5.0 doesn't support KASAN_INLINE, but AFAIU will fall back
to outline instrumentation in this case.

The latest Clang release doesn't support KASAN_INLINE (although
current trunk does) and falls back to outline instrumentation.

So nothing should break, but people with newer compilers should get
the benefits of using the inline instrumentation by default.

Thanks!
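
For context, a rough sketch of the difference between the two modes
(check_memory_region() is the hook from this series; the inline
expansion and the report_tag_mismatch() name are illustrative, not the
exact compiler output):

	/* Illustrative only: what an instrumented access of size bytes
	 * through pointer p becomes in each mode. */
	static void example_check(void *p, size_t size)
	{
		/* Outline mode: the compiler emits a runtime call. */
		check_memory_region((unsigned long)p, size, false, _RET_IP_);

		/* Inline mode: the compiler emits the shadow load and tag
		 * comparison directly, with a brk instruction on mismatch. */
		u8 *shadow = kasan_mem_to_shadow(reset_tag(p));
		if (*shadow != get_tag(p))
			report_tag_mismatch();	/* hypothetical name; the
						 * real path is via brk */
	}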

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 09/14] khwasan: add hooks implementation
  2018-03-08 11:25       ` Mark Rutland
@ 2018-03-09 18:10         ` Andrey Konovalov
  0 siblings, 0 replies; 65+ messages in thread
From: Andrey Konovalov @ 2018-03-09 18:10 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Ard Biesheuvel, Yury Norov, Nick Desaulniers,
	Marc Zyngier, Bob Picco, Suzuki K Poulose, Kristina Martsenko,
	Punit Agrawal, Dave Martin, James Morse, Julien Thierry,
	Michael Weiser, Steve Capper, Ingo Molnar, Thomas Gleixner,
	Sandipan Das, Paul Lawrence, David Woodhouse, Kees Cook,
	Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann, kasan-dev,
	linux-doc, LKML, Linux ARM, linux-ext4, linux-sparse,
	Linux Memory Management List, Linux Kbuild mailing list,
	Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Kees Cook,
	Jann Horn, Mark Brand

On Thu, Mar 8, 2018 at 12:25 PM, Mark Rutland <mark.rutland@arm.com> wrote:
> On Tue, Mar 06, 2018 at 07:38:08PM +0100, Andrey Konovalov wrote:
>> On Mon, Mar 5, 2018 at 3:44 PM, Mark Rutland <mark.rutland@arm.com> wrote:
>> > On Fri, Mar 02, 2018 at 08:44:28PM +0100, Andrey Konovalov wrote:
>> >> +
>> >> +     for (shadow = shadow_first; shadow <= shadow_last; shadow++) {
>> >> +             if (*shadow != tag) {
>> >> +                     /* Report invalid-access bug here */
>> >> +                     return;
>> >
>> > Huh? Should that be a TODO?
>>
>> This is fixed in one of the next commits. I decided to split the main
>> runtime logic from the reporting parts, so this comment is a
>> placeholder, which is replaced with the proper error reporting
>> function call later in the patch series. I can make it a /* TODO */
>> comment if you think that looks better.
>
> It might be preferable to introduce the report functions first (i.e.
> swap this patch with the next one).
>
> Those will be unused, but since they're not static, you shouldn't get
> any build warnings. Then the hooks can call the report functions as soon
> as they're introduced.

Will do, thanks!

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 06/14] khwasan: enable top byte ignore for the kernel
  2018-03-05 14:29   ` Mark Rutland
@ 2018-03-09 18:15     ` Andrey Konovalov
  0 siblings, 0 replies; 65+ messages in thread
From: Andrey Konovalov @ 2018-03-09 18:15 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Ard Biesheuvel, Yury Norov, Nick Desaulniers,
	Marc Zyngier, Bob Picco, Suzuki K Poulose, Kristina Martsenko,
	Punit Agrawal, Dave Martin, James Morse, Julien Thierry,
	Michael Weiser, Steve Capper, Ingo Molnar, Thomas Gleixner,
	Sandipan Das, Paul Lawrence, David Woodhouse, Kees Cook,
	Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann, kasan-dev,
	linux-doc, LKML, Linux ARM, linux-ext4, linux-sparse,
	Linux Memory Management List, Linux Kbuild mailing list,
	Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Kees Cook,
	Jann Horn, Mark Brand

On Mon, Mar 5, 2018 at 3:29 PM, Mark Rutland <mark.rutland@arm.com> wrote:
> On Fri, Mar 02, 2018 at 08:44:25PM +0100, Andrey Konovalov wrote:
>> +#ifdef CONFIG_KASAN_TAGS
>> +#define TCR_TBI_FLAGS (TCR_TBI0 | TCR_TBI1)
>> +#else
>> +#define TCR_TBI_FLAGS TCR_TBI0
>> +#endif
>
> Rather than pulling TBI0 into this, I think it'd make more sense to
> have:
>
> #ifdef CONFIG_KASAN_TAGS
> #define KASAN_TCR_FLAGS TCR_TBI1
> #else
> #define KASAN_TCR_FLAGS
> #endif
>
>> +
>>  #define MAIR(attr, mt)       ((attr) << ((mt) * 8))
>>
>>  /*
>> @@ -432,7 +438,7 @@ ENTRY(__cpu_setup)
>>        * both user and kernel.
>>        */
>>       ldr     x10, =TCR_TxSZ(VA_BITS) | TCR_CACHE_FLAGS | TCR_SMP_FLAGS | \
>> -                     TCR_TG_FLAGS | TCR_ASID16 | TCR_TBI0 | TCR_A1
>> +                     TCR_TG_FLAGS | TCR_ASID16 | TCR_TBI_FLAGS | TCR_A1
>
> ... and just append KASAN_TCR_FLAGS to the flags here.
>
> That's roughly what we do with ENDIAN_SET_EL1 for SCTLR_EL1.
>

OK, will do!
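
Spelled out, the suggested arrangement would look roughly like this (a
sketch; defining the else branch to 0 is an assumption, mirroring the
ENDIAN_SET_EL1 pattern mentioned above):

	#ifdef CONFIG_KASAN_TAGS
	#define KASAN_TCR_FLAGS	TCR_TBI1
	#else
	#define KASAN_TCR_FLAGS	0
	#endif

	ldr	x10, =TCR_TxSZ(VA_BITS) | TCR_CACHE_FLAGS | TCR_SMP_FLAGS | \
			TCR_TG_FLAGS | TCR_ASID16 | TCR_TBI0 | TCR_A1 | \
			KASAN_TCR_FLAGS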

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 06/14] khwasan: enable top byte ignore for the kernel
  2018-03-05 14:36   ` Mark Rutland
  2018-03-06 14:24     ` Marc Zyngier
@ 2018-03-09 18:17     ` Andrey Konovalov
  2018-03-09 18:59       ` Mark Rutland
  1 sibling, 1 reply; 65+ messages in thread
From: Andrey Konovalov @ 2018-03-09 18:17 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Ard Biesheuvel, Yury Norov, Nick Desaulniers,
	Marc Zyngier, Suzuki K Poulose, Kristina Martsenko,
	Punit Agrawal, Dave Martin, James Morse, Julien Thierry,
	Michael Weiser, Steve Capper, Ingo Molnar, Thomas Gleixner,
	Sandipan Das, Paul Lawrence, David Woodhouse, Kees Cook,
	Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann, kasan-dev,
	linux-doc, LKML, Linux ARM, linux-ext4, linux-sparse,
	Linux Memory Management List, Linux Kbuild mailing list,
	Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Kees Cook,
	Jann Horn, Mark Brand

On Mon, Mar 5, 2018 at 3:36 PM, Mark Rutland <mark.rutland@arm.com> wrote:
> On Fri, Mar 02, 2018 at 08:44:25PM +0100, Andrey Konovalov wrote:
>> KHWASAN uses the Top Byte Ignore feature of arm64 CPUs to store a pointer
>> tag in the top byte of each pointer. This commit enables the TCR_TBI1 bit,
>> which enables Top Byte Ignore for the kernel, when KHWASAN is used.
>> ---
>>  arch/arm64/include/asm/pgtable-hwdef.h | 1 +
>>  arch/arm64/mm/proc.S                   | 8 +++++++-
>>  2 files changed, 8 insertions(+), 1 deletion(-)
>
> Before it's safe to do this, I also think you'll need to fix up at
> least:
>
> * virt_to_phys()

I've already got some issues with it (the jbd2 patch), so I'll look into this.

>
> * access_ok()

This is used for accessing user addresses, and they are not tagged. Am
I missing something?

>
> ... and potentially others which assume that bits [63:56] of kernel
> addresses are 0xff. For example, bits of the fault handling logic might
> need fixups.

I'll look into this as well.

>
> Thanks,
> Mark.

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 06/14] khwasan: enable top byte ignore for the kernel
  2018-03-06 14:24     ` Marc Zyngier
@ 2018-03-09 18:21       ` Andrey Konovalov
  2018-03-09 18:32         ` Marc Zyngier
  0 siblings, 1 reply; 65+ messages in thread
From: Andrey Konovalov @ 2018-03-09 18:21 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Mark Rutland, Andrey Ryabinin, Alexander Potapenko,
	Dmitry Vyukov, Jonathan Corbet, Catalin Marinas, Will Deacon,
	Theodore Ts'o, Jan Kara, Christopher Li, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Masahiro Yamada, Michal Marek, Ard Biesheuvel, Yury Norov,
	Nick Desaulniers, Suzuki K Poulose, Kristina Martsenko,
	Punit Agrawal, Dave Martin, James Morse, Julien Thierry,
	Michael Weiser, Steve Capper, Ingo Molnar, Thomas Gleixner,
	Sandipan Das, Paul Lawrence, David Woodhouse, Kees Cook,
	Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann, kasan-dev,
	linux-doc, LKML, Linux ARM, linux-ext4, linux-sparse,
	Linux Memory Management List, Linux Kbuild mailing list,
	Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Kees Cook,
	Jann Horn, Mark Brand

On Tue, Mar 6, 2018 at 3:24 PM, Marc Zyngier <marc.zyngier@arm.com> wrote:
> On 05/03/18 14:36, Mark Rutland wrote:
>> On Fri, Mar 02, 2018 at 08:44:25PM +0100, Andrey Konovalov wrote:
>>> KHWASAN uses the Top Byte Ignore feature of arm64 CPUs to store a pointer
>>> tag in the top byte of each pointer. This commit enables the TCR_TBI1 bit,
>>> which enables Top Byte Ignore for the kernel, when KHWASAN is used.
>>> ---
>>>  arch/arm64/include/asm/pgtable-hwdef.h | 1 +
>>>  arch/arm64/mm/proc.S                   | 8 +++++++-
>>>  2 files changed, 8 insertions(+), 1 deletion(-)
>>
>> Before it's safe to do this, I also think you'll need to fix up at
>> least:
>>
>> * virt_to_phys()
>>
>> * access_ok()
>>
>> ... and potentially others which assume that bits [63:56] of kernel
>> addresses are 0xff. For example, bits of the fault handling logic might
>> need fixups.
>
Indeed. I have the ugly feeling that KVM (and anything that lives in a
> separate address space) will not be very happy with that change, as it
> derives HYP VAs from the kernel VA, and doesn't expect lingering bits.
> Nothing that cannot be addressed, but worth keeping in mind.
>

Hi Marc!

Yes, I would expect there would be issues with KVM. I'll see if I can
figure them out, but I think I'll just add a depends on !KVM or
something like this, and will have to deal with KVM once the main part
is committed.

Thanks!

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 06/14] khwasan: enable top byte ignore for the kernel
  2018-03-09 18:21       ` Andrey Konovalov
@ 2018-03-09 18:32         ` Marc Zyngier
  2018-03-09 18:42           ` Andrey Konovalov
  0 siblings, 1 reply; 65+ messages in thread
From: Marc Zyngier @ 2018-03-09 18:32 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Mark Rutland, Andrey Ryabinin, Alexander Potapenko,
	Dmitry Vyukov, Jonathan Corbet, Catalin Marinas, Will Deacon,
	Theodore Ts'o, Jan Kara, Christopher Li, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Masahiro Yamada, Michal Marek, Ard Biesheuvel, Yury Norov,
	Nick Desaulniers, Suzuki K Poulose, Kristina Martsenko,
	Punit Agrawal, Dave Martin, James Morse, Julien Thierry,
	Michael Weiser, Steve Capper, Ingo Molnar, Thomas Gleixner,
	Sandipan Das, Paul Lawrence, David Woodhouse, Kees Cook,
	Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann, kasan-dev,
	linux-doc, LKML, Linux ARM, linux-ext4, linux-sparse,
	Linux Memory Management List, Linux Kbuild mailing list,
	Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Kees Cook,
	Jann Horn, Mark Brand

Hi Andrey,

On 09/03/18 18:21, Andrey Konovalov wrote:
> On Tue, Mar 6, 2018 at 3:24 PM, Marc Zyngier <marc.zyngier@arm.com> wrote:
>> On 05/03/18 14:36, Mark Rutland wrote:
>>> On Fri, Mar 02, 2018 at 08:44:25PM +0100, Andrey Konovalov wrote:
>>>> KHWASAN uses the Top Byte Ignore feature of arm64 CPUs to store a pointer
>>>> tag in the top byte of each pointer. This commit enables the TCR_TBI1 bit,
>>>> which enables Top Byte Ignore for the kernel, when KHWASAN is used.
>>>> ---
>>>>  arch/arm64/include/asm/pgtable-hwdef.h | 1 +
>>>>  arch/arm64/mm/proc.S                   | 8 +++++++-
>>>>  2 files changed, 8 insertions(+), 1 deletion(-)
>>>
>>> Before it's safe to do this, I also think you'll need to fix up at
>>> least:
>>>
>>> * virt_to_phys()
>>>
>>> * access_ok()
>>>
>>> ... and potentially others which assume that bits [63:56] of kernel
>>> addresses are 0xff. For example, bits of the fault handling logic might
>>> need fixups.
>>
>> Indeed. I have the ugly feeling that KVM (and anything that lives in a
>> separate address space) will not be very happy with that change, as it
>> derives HYP VAs from the kernel VA, and doesn't expect lingering bits.
>> Nothing that cannot be addressed, but worth keeping in mind.
>>
> 
> Hi Marc!
> 
> Yes, I would expect there would be issues with KVM. I'll see if I can
> figure them out, but I think I'll just add a depends on !KVM or
> something like this, and will have to deal with KVM once the main part
> is committed.
Well, that's not quite how it works. KVM is an integral part of the
kernel, and I don't really want to have to deal with regression (not to
mention that KVM is an essential tool in our testing infrastructure).

You could try and exclude KVM from the instrumentation (which we already
have for invasive things such as KASAN), but I'm afraid that having a
debugging option that conflicts with another essential part of the
kernel is not an option.

I'm happy to help you with that though.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 06/14] khwasan: enable top byte ignore for the kernel
  2018-03-09 18:32         ` Marc Zyngier
@ 2018-03-09 18:42           ` Andrey Konovalov
  2018-03-09 19:06             ` Marc Zyngier
  2018-03-09 19:14             ` Mark Rutland
  0 siblings, 2 replies; 65+ messages in thread
From: Andrey Konovalov @ 2018-03-09 18:42 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Mark Rutland, Andrey Ryabinin, Alexander Potapenko,
	Dmitry Vyukov, Jonathan Corbet, Catalin Marinas, Will Deacon,
	Theodore Ts'o, Jan Kara, Christopher Li, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Masahiro Yamada, Michal Marek, Ard Biesheuvel, Yury Norov,
	Nick Desaulniers, Suzuki K Poulose, Kristina Martsenko,
	Punit Agrawal, Dave Martin, James Morse, Julien Thierry,
	Michael Weiser, Steve Capper, Ingo Molnar, Thomas Gleixner,
	Sandipan Das, Paul Lawrence, David Woodhouse, Kees Cook,
	Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann, kasan-dev,
	linux-doc, LKML, Linux ARM, linux-ext4, linux-sparse,
	Linux Memory Management List, Linux Kbuild mailing list,
	Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Kees Cook,
	Jann Horn, Mark Brand

On Fri, Mar 9, 2018 at 7:32 PM, Marc Zyngier <marc.zyngier@arm.com> wrote:
> Well, that's not quite how it works. KVM is an integral part of the
> kernel, and I don't really want to have to deal with regression (not to
> mention that KVM is an essential tool in our testing infrastructure).
>
> You could try and exclude KVM from the instrumentation (which we already
> have for invasive things such as KASAN), but I'm afraid that having a
> debugging option that conflicts with another essential part of the
> kernel is not an option.
>
> I'm happy to help you with that though.
>

Hm, KHWASAN instruments the very same parts of the kernel that KASAN
does (it reuses the same flag). I've checked, and I actually have
CONFIG_KVM enabled in my test build; however, I haven't tried to test
KVM yet. I'm planning to perform extensive fuzzing of the kernel with
syzkaller, so if there are any crashes caused by KHWASAN in KVM code
I'll see them. However, if some bugs don't manifest as crashes, that
would be a difficult thing for me to detect.

Thanks!

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 06/14] khwasan: enable top byte ignore for the kernel
  2018-03-09 18:17     ` Andrey Konovalov
@ 2018-03-09 18:59       ` Mark Rutland
  0 siblings, 0 replies; 65+ messages in thread
From: Mark Rutland @ 2018-03-09 18:59 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Ard Biesheuvel, Yury Norov, Nick Desaulniers,
	Marc Zyngier, Suzuki K Poulose, Kristina Martsenko,
	Punit Agrawal, Dave Martin, James Morse, Julien Thierry,
	Michael Weiser, Steve Capper, Ingo Molnar, Thomas Gleixner,
	Sandipan Das, Paul Lawrence, David Woodhouse, Kees Cook,
	Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann, kasan-dev,
	linux-doc, LKML, Linux ARM, linux-ext4, linux-sparse,
	Linux Memory Management List, Linux Kbuild mailing list,
	Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Kees Cook,
	Jann Horn, Mark Brand

On Fri, Mar 09, 2018 at 07:17:14PM +0100, Andrey Konovalov wrote:
> On Mon, Mar 5, 2018 at 3:36 PM, Mark Rutland <mark.rutland@arm.com> wrote:
> > On Fri, Mar 02, 2018 at 08:44:25PM +0100, Andrey Konovalov wrote:
> >> KHWASAN uses the Top Byte Ignore feature of arm64 CPUs to store a pointer
> >> tag in the top byte of each pointer. This commit enables the TCR_TBI1 bit,
> >> which enables Top Byte Ignore for the kernel, when KHWASAN is used.
> >> ---
> >>  arch/arm64/include/asm/pgtable-hwdef.h | 1 +
> >>  arch/arm64/mm/proc.S                   | 8 +++++++-
> >>  2 files changed, 8 insertions(+), 1 deletion(-)
> >
> > Before it's safe to do this, I also think you'll need to fix up at
> > least:

> > * access_ok()
> 
> This is used for accessing user addresses, and they are not tagged. Am
> I missing something?

No, I just confused myself. ;)

I was concerned that a kernel address with the top byte clear might
spuriously pass access_ok(), but I was mistaken. Bit 55 of the address
would be set, and this would fall outside of USER_DS (which is
TASK_SIZE_64 - 1).

So access_ok() should be fine as-is.
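
To make that concrete (a sketch; the addresses assume a 48-bit VA
configuration, so TASK_SIZE_64 is on the order of 1UL << 48):

	/* Illustrative arithmetic only: */
	u64 kern = 0xffff000012345678UL;	/* canonical kernel VA */
	u64 fake = kern & ~(0xffUL << 56);	/* top byte zeroed by a tag */
	/* fake == 0x00ff000012345678: bit 55 is still set, so the value
	 * is far above TASK_SIZE_64 and access_ok() rejects it. */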

Sorry for the noise!

Mark.

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 06/14] khwasan: enable top byte ignore for the kernel
  2018-03-09 18:42           ` Andrey Konovalov
@ 2018-03-09 19:06             ` Marc Zyngier
  2018-03-09 19:16               ` Mark Rutland
  2018-03-09 19:14             ` Mark Rutland
  1 sibling, 1 reply; 65+ messages in thread
From: Marc Zyngier @ 2018-03-09 19:06 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Mark Rutland, Andrey Ryabinin, Alexander Potapenko,
	Dmitry Vyukov, Jonathan Corbet, Catalin Marinas, Will Deacon,
	Theodore Ts'o, Jan Kara, Christopher Li, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Masahiro Yamada, Michal Marek, Ard Biesheuvel, Yury Norov,
	Nick Desaulniers, Suzuki K Poulose, Kristina Martsenko,
	Punit Agrawal, Dave Martin, James Morse, Julien Thierry,
	Michael Weiser, Steve Capper, Ingo Molnar, Thomas Gleixner,
	Sandipan Das, Paul Lawrence, David Woodhouse, Kees Cook,
	Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann, kasan-dev,
	linux-doc, LKML, Linux ARM, linux-ext4, linux-sparse,
	Linux Memory Management List, Linux Kbuild mailing list,
	Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Kees Cook,
	Jann Horn, Mark Brand

On 09/03/18 18:42, Andrey Konovalov wrote:
> On Fri, Mar 9, 2018 at 7:32 PM, Marc Zyngier <marc.zyngier@arm.com> wrote:
>> Well, that's not quite how it works. KVM is an integral part of the
>> kernel, and I don't really want to have to deal with regression (not to
>> mention that KVM is an essential tool in our testing infrastructure).
>>
>> You could try and exclude KVM from the instrumentation (which we already
>> have for invasive things such as KASAN), but I'm afraid that having a
>> debugging option that conflicts with another essential part of the
>> kernel is not an option.
>>
>> I'm happy to help you with that though.
>>
> 
> Hm, KHWASAN instruments the very same parts of the kernel that KASAN
> does (it reuses the same flag). I've checked, and I actually have
> CONFIG_KVM enabled in my test build; however, I haven't tried to test
> KVM yet. I'm planning to perform extensive fuzzing of the kernel with
> syzkaller, so if there are any crashes caused by KHWASAN in KVM code
> I'll see them. However, if some bugs don't manifest as crashes, that
> would be a difficult thing for me to detect.

Well, if something is wrong in KVM, it usually manifests itself
extremely quickly, and takes the whole box with it. I have the ugly
feeling that feeding coloured pointers to KVM is going to be a fun ride
though.

Also, last time I checked Clang couldn't even compile KVM correctly.
Hopefully, things have changed...

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 06/14] khwasan: enable top byte ignore for the kernel
  2018-03-09 18:42           ` Andrey Konovalov
  2018-03-09 19:06             ` Marc Zyngier
@ 2018-03-09 19:14             ` Mark Rutland
  1 sibling, 0 replies; 65+ messages in thread
From: Mark Rutland @ 2018-03-09 19:14 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Marc Zyngier, Andrey Ryabinin, Alexander Potapenko,
	Dmitry Vyukov, Jonathan Corbet, Catalin Marinas, Will Deacon,
	Theodore Ts'o, Jan Kara, Christopher Li, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Masahiro Yamada, Michal Marek, Ard Biesheuvel, Yury Norov,
	Nick Desaulniers, Suzuki K Poulose, Kristina Martsenko,
	Punit Agrawal, Dave Martin, James Morse, Julien Thierry,
	Michael Weiser, Steve Capper, Ingo Molnar, Thomas Gleixner,
	Sandipan Das, Paul Lawrence, David Woodhouse, Kees Cook,
	Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann, kasan-dev,
	linux-doc, LKML, Linux ARM, linux-ext4, linux-sparse,
	Linux Memory Management List, Linux Kbuild mailing list,
	Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Kees Cook,
	Jann Horn, Mark Brand

On Fri, Mar 09, 2018 at 07:42:19PM +0100, Andrey Konovalov wrote:
> On Fri, Mar 9, 2018 at 7:32 PM, Marc Zyngier <marc.zyngier@arm.com> wrote:
> > Well, that's not quite how it works. KVM is an integral part of the
> > kernel, and I don't really want to have to deal with regression (not to
> > mention that KVM is an essential tool in our testing infrastructure).
> >
> > You could try and exclude KVM from the instrumentation (which we already
> > have for invasive things such as KASAN), but I'm afraid that having a
> > debugging option that conflicts with another essential part of the
> > kernel is not an option.
> 
> Hm, KHWASAN instruments the very same parts of the kernel that KASAN
> does (it reuses the same flag).

Sure, but KASAN doesn't fiddle with the tag in pointers, and the KVM hyp
code relies on EL1/EL2 pointers having a fixed offset from each other
(implicitly relying on addr[63:56] being zero).

We have two aliases of the kernel in two disjoint address spaces:

TTBR0                   TTBR1

                        -SS-KKKK--------    EL1 kernel mappings

----KKKK--------                            EL2 hyp mappings

To convert between the two, we just flip a few high bits of the address.
See kern_hyp_va() in <asm/kvm_mmu.h>.


The EL1 mappings have the KASAN shadow, and kernel. The EL2 mappings
just have the kernel. So long as we don't instrument EL2 code with
KASAN, it's fine for EL1 code to be instrumented.

However, with KHWASAN, pointers generated by EL1 will have some arbitrary
tag, and more work needs to be done to convert an address to its EL2
alias.
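
A rough illustration of the extra step (khwasan_kern_hyp_va() is a
made-up name; kern_hyp_va() is the existing macro from <asm/kvm_mmu.h>,
and the 0xff top byte follows the convention discussed above):

	/* Strip the KHWASAN tag, restoring the canonical top byte, before
	 * flipping the high bits to form the EL2 alias. */
	#define khwasan_kern_hyp_va(v) \
		kern_hyp_va((typeof(v))((u64)(v) | (0xffUL << 56)))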

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 06/14] khwasan: enable top byte ignore for the kernel
  2018-03-09 19:06             ` Marc Zyngier
@ 2018-03-09 19:16               ` Mark Rutland
  0 siblings, 0 replies; 65+ messages in thread
From: Mark Rutland @ 2018-03-09 19:16 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Andrey Konovalov, Andrey Ryabinin, Alexander Potapenko,
	Dmitry Vyukov, Jonathan Corbet, Catalin Marinas, Will Deacon,
	Theodore Ts'o, Jan Kara, Christopher Li, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Masahiro Yamada, Michal Marek, Ard Biesheuvel, Yury Norov,
	Nick Desaulniers, Suzuki K Poulose, Kristina Martsenko,
	Punit Agrawal, Dave Martin, James Morse, Julien Thierry,
	Michael Weiser, Steve Capper, Ingo Molnar, Thomas Gleixner,
	Sandipan Das, Paul Lawrence, David Woodhouse, Kees Cook,
	Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann, kasan-dev,
	linux-doc, LKML, Linux ARM, linux-ext4, linux-sparse,
	Linux Memory Management List, Linux Kbuild mailing list,
	Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Kees Cook,
	Jann Horn, Mark Brand

On Fri, Mar 09, 2018 at 07:06:01PM +0000, Marc Zyngier wrote:
> On 09/03/18 18:42, Andrey Konovalov wrote:
> > On Fri, Mar 9, 2018 at 7:32 PM, Marc Zyngier <marc.zyngier@arm.com> wrote:
> >> Well, that's not quite how it works. KVM is an integral part of the
> >> kernel, and I don't really want to have to deal with regression (not to
> >> mention that KVM is an essential tool in our testing infrastructure).
> >>
> >> You could try and exclude KVM from the instrumentation (which we already
> >> have for invasive things such as KASAN), but I'm afraid that having a
> >> debugging option that conflicts with another essential part of the
> >> kernel is not an option.
> >>
> >> I'm happy to help you with that though.
> >>
> > 
> > Hm, KHWASAN instruments the very same parts of the kernel that KASAN
> > does (it reuses the same flag). I've checked, and I actually have
> > CONFIG_KVM enabled in my test build; however, I haven't tried to test
> > KVM yet. I'm planning to perform extensive fuzzing of the kernel with
> > syzkaller, so if there are any crashes caused by KHWASAN in KVM code
> > I'll see them. However, if some bugs don't manifest as crashes, that
> > would be a difficult thing for me to detect.
> 
> Well, if something is wrong in KVM, it usually manifests itself
> extremely quickly, and takes the whole box with it. I have the ugly
> feeling that feeding coloured pointers to KVM is going to be a fun ride
> though.
> 
> Also, last time I checked Clang couldn't even compile KVM correctly.
> Hopefully, things have changed...

It compiles; it's just not as position independent as it needs to be.

IIRC -fno-jump-tables is sufficient to get a clang-compiled KVM booting.

It would be much nicer if there were a flag to enforce the use of
pc-relative addressing, and forbid absolute addressing, so that we don't
have to disable each and every compiler feature that decides to use the
latter.
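
A sketch of how that could be wired up today (the Makefile placement is
an assumption; cc-option keeps compilers without the flag working):

	# e.g. in arch/arm64/kvm/hyp/Makefile
	ccflags-y += $(call cc-option,-fno-jump-tables)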

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 14/14] khwasan: default the instrumentation mode to inline
  2018-03-09 18:06     ` Andrey Konovalov
@ 2018-03-09 19:18       ` Mark Rutland
  2018-03-12 13:10         ` Andrey Konovalov
  0 siblings, 1 reply; 65+ messages in thread
From: Mark Rutland @ 2018-03-09 19:18 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Ard Biesheuvel, Yury Norov, Nick Desaulniers,
	Marc Zyngier, Bob Picco, Suzuki K Poulose, Kristina Martsenko,
	Punit Agrawal, Dave Martin, James Morse, Julien Thierry,
	Michael Weiser, Steve Capper, Ingo Molnar, Thomas Gleixner,
	Sandipan Das, Paul Lawrence, David Woodhouse, Kees Cook,
	Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann, kasan-dev,
	linux-doc, LKML, Linux ARM, linux-ext4, linux-sparse,
	Linux Memory Management List, Linux Kbuild mailing list,
	Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Kees Cook,
	Jann Horn, Mark Brand

On Fri, Mar 09, 2018 at 07:06:59PM +0100, Andrey Konovalov wrote:
> On Mon, Mar 5, 2018 at 3:54 PM, Mark Rutland <mark.rutland@arm.com> wrote:
> > On Fri, Mar 02, 2018 at 08:44:33PM +0100, Andrey Konovalov wrote:
> >> There are two reasons to use outline instrumentation:
> >> 1. Outline instrumentation reduces the size of the kernel text, and should
> >>    be used where this size matters.
> >> 2. Outline instrumentation is less invasive and can be used for debugging
> >>    for KASAN developers, when it's not clear whether some issue is caused
> >>    by KASAN or by something else.
> >>
> >> For the remaining cases, inline instrumentation is preferable, since
> >> it's faster.
> >>
> >> This patch changes the default instrumentation mode to inline.
> >> ---
> >>  lib/Kconfig.kasan | 2 +-
> >>  1 file changed, 1 insertion(+), 1 deletion(-)
> >>
> >> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
> >> index ab34e7d7d3a7..8ea6ae26b4a3 100644
> >> --- a/lib/Kconfig.kasan
> >> +++ b/lib/Kconfig.kasan
> >> @@ -70,7 +70,7 @@ config KASAN_EXTRA
> >>  choice
> >>       prompt "Instrumentation type"
> >>       depends on KASAN
> >> -     default KASAN_OUTLINE
> >> +     default KASAN_INLINE
> >
> > Some compilers don't support KASAN_INLINE, but do support KASAN_OUTLINE.
> > IIRC that includes the latest clang release, but I could be wrong.
> >
> > If that's the case, changing the default here does not seem ideal.
> >
> 
> Hi Mark!
> 
> GCC before 5.0 doesn't support KASAN_INLINE, but AFAIU will fall back
> to outline instrumentation in this case.
> 
> The latest Clang release doesn't support KASAN_INLINE (although
> current trunk does) and falls back to outline instrumentation.
> 
> So nothing should break, but people with newer compilers should get
> the benefits of using the inline instrumentation by default.

Ah, ok. I had assumed that they were separate compiler options, and this
would result in a build failure.

I have no strong feelings either way as to the default. I typically use
inline today unless I'm trying to debug particularly weird cases and
want to hack the shadow accesses.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 14/14] khwasan: default the instrumentation mode to inline
  2018-03-09 19:18       ` Mark Rutland
@ 2018-03-12 13:10         ` Andrey Konovalov
  0 siblings, 0 replies; 65+ messages in thread
From: Andrey Konovalov @ 2018-03-12 13:10 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Ard Biesheuvel, Yury Norov, Nick Desaulniers,
	Marc Zyngier, Bob Picco, Suzuki K Poulose, Kristina Martsenko,
	Punit Agrawal, Dave Martin, James Morse, Julien Thierry,
	Michael Weiser, Steve Capper, Ingo Molnar, Thomas Gleixner,
	Sandipan Das, Paul Lawrence, David Woodhouse, Kees Cook,
	Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann, kasan-dev,
	linux-doc, LKML, Linux ARM, linux-ext4, linux-sparse,
	Linux Memory Management List, Linux Kbuild mailing list,
	Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Kees Cook,
	Jann Horn, Mark Brand

On Fri, Mar 9, 2018 at 8:18 PM, Mark Rutland <mark.rutland@arm.com> wrote:
> On Fri, Mar 09, 2018 at 07:06:59PM +0100, Andrey Konovalov wrote:
>> On Mon, Mar 5, 2018 at 3:54 PM, Mark Rutland <mark.rutland@arm.com> wrote:
>>
>> Hi Mark!
>>
>> GCC before 5.0 doesn't support KASAN_INLINE, but AFAIU will fall back
>> to outline instrumentation in this case.
>>
>> The latest Clang release doesn't support KASAN_INLINE (although
>> current trunk does) and falls back to outline instrumentation.
>>
>> So nothing should break, but people with newer compilers should get
>> the benefits of using the inline instrumentation by default.
>
> Ah, ok. I had assumed that they were separate compiler options, and this
> would result in a build failure.

No worries, I'll check that GCC 4.9 works and add this info to the
commit message.

>
> I have no strong feelings either way as to the default. I typically use
> inline today unless I'm trying to debug particularly weird cases and
> want to hack the shadow accesses.

Great!

>
> Thanks,
> Mark.

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 14/14] khwasan: default the instrumentation mode to inline
  2018-03-02 19:44 ` [RFC PATCH 14/14] khwasan: default the instrumentation mode to inline Andrey Konovalov
  2018-03-05 14:54   ` Mark Rutland
@ 2018-03-13 14:44   ` Alexander Potapenko
  2018-03-13 16:49     ` Andrey Konovalov
  1 sibling, 1 reply; 65+ messages in thread
From: Alexander Potapenko @ 2018-03-13 14:44 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Andrey Ryabinin, Dmitry Vyukov, Jonathan Corbet, Catalin Marinas,
	Will Deacon, Theodore Ts'o, Jan Kara, Christopher Li,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Masahiro Yamada, Michal Marek, Mark Rutland,
	Ard Biesheuvel, Yury Norov, Nick Desaulniers, Marc Zyngier,
	Bob Picco, Suzuki K Poulose, Kristina Martsenko, Punit Agrawal,
	Dave Martin, James Morse, Julien Thierry, Michael Weiser,
	Steve Capper, Ingo Molnar, Thomas Gleixner, Sandipan Das,
	Paul Lawrence, David Woodhouse, Kees Cook, Geert Uytterhoeven,
	Josh Poimboeuf, Arnd Bergmann, kasan-dev, linux-doc, LKML,
	linux-arm-kernel, linux-ext4, linux-sparse,
	Linux Memory Management List, Linux Kbuild mailing list,
	Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Kees Cook,
	Jann Horn, Mark Brand

On Fri, Mar 2, 2018 at 8:44 PM, Andrey Konovalov <andreyknvl@google.com> wrote:
> There are two reasons to use outline instrumentation:
> 1. Outline instrumentation reduces the size of the kernel text, and should
>    be used where this size matters.
> 2. Outline instrumentation is less invasive and can be used for debugging
>    for KASAN developers, when it's not clear whether some issue is caused
>    by KASAN or by something else.

Don't you think this patch can be landed separately from the KHWASAN series?

> For the remaining cases, inline instrumentation is preferable, since
> it's faster.
>
> This patch changes the default instrumentation mode to inline.
> ---
>  lib/Kconfig.kasan | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
> index ab34e7d7d3a7..8ea6ae26b4a3 100644
> --- a/lib/Kconfig.kasan
> +++ b/lib/Kconfig.kasan
> @@ -70,7 +70,7 @@ config KASAN_EXTRA
>  choice
>         prompt "Instrumentation type"
>         depends on KASAN
> -       default KASAN_OUTLINE
> +       default KASAN_INLINE
>
>  config KASAN_OUTLINE
>         bool "Outline instrumentation"
> --
> 2.16.2.395.g2e18187dfd-goog
>
Reviewed-by: Alexander Potapenko <glider@google.com>

--
Alexander Potapenko
Software Engineer

Google Germany GmbH
Erika-Mann-Straße, 33
80636 München

Managing Directors: Paul Manicle, Halimah DeLaine Prado
Registration court and number: Hamburg, HRB 86891
Registered office: Hamburg

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 09/14] khwasan: add hooks implementation
  2018-03-02 19:44 ` [RFC PATCH 09/14] khwasan: add hooks implementation Andrey Konovalov
  2018-03-05 14:44   ` Mark Rutland
@ 2018-03-13 15:05   ` Alexander Potapenko
  2018-03-13 17:00     ` Andrey Konovalov
  2018-03-20  0:44   ` Anthony Yznaga
  2 siblings, 1 reply; 65+ messages in thread
From: Alexander Potapenko @ 2018-03-13 15:05 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Andrey Ryabinin, Dmitry Vyukov, Jonathan Corbet, Catalin Marinas,
	Will Deacon, Theodore Ts'o, Jan Kara, Christopher Li,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Masahiro Yamada, Michal Marek, Mark Rutland,
	Ard Biesheuvel, Yury Norov, Nick Desaulniers, Marc Zyngier,
	Bob Picco, Suzuki K Poulose, Kristina Martsenko, Punit Agrawal,
	Dave Martin, James Morse, Julien Thierry, Michael Weiser,
	Steve Capper, Ingo Molnar, Thomas Gleixner, Sandipan Das,
	Paul Lawrence, David Woodhouse, Kees Cook, Geert Uytterhoeven,
	Josh Poimboeuf, Arnd Bergmann, kasan-dev, linux-doc, LKML,
	linux-arm-kernel, linux-ext4, linux-sparse,
	Linux Memory Management List, Linux Kbuild mailing list,
	Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Kees Cook,
	Jann Horn, Mark Brand

On Fri, Mar 2, 2018 at 8:44 PM, Andrey Konovalov <andreyknvl@google.com> wrote:
> This commit adds the KHWASAN hooks implementation.
>
> 1. When a new slab cache is created, KHWASAN rounds up the size of the
>    objects in this cache to KASAN_SHADOW_SCALE_SIZE (== 16).
>
> 2. On each kmalloc KHWASAN generates a random tag, sets the shadow memory
>    that corresponds to this object to this tag, and embeds this tag value
>    into the top byte of the returned pointer.
>
> 3. On each kfree KHWASAN poisons the shadow memory with a random tag to
>    allow detection of use-after-free bugs.
>
> The rest of the hook implementation logic is very similar to that of
> KASAN. KHWASAN saves allocation and free stack metadata to the slab
> object the same way KASAN does.
> ---
>  mm/kasan/khwasan.c | 178 ++++++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 175 insertions(+), 3 deletions(-)
>
> diff --git a/mm/kasan/khwasan.c b/mm/kasan/khwasan.c
> index 21a2221e3368..09d6f0a72266 100644
> --- a/mm/kasan/khwasan.c
> +++ b/mm/kasan/khwasan.c
> @@ -78,69 +78,238 @@ void *khwasan_reset_tag(void *addr)
>         return reset_tag(addr);
>  }
>
> +void kasan_poison_shadow(const void *address, size_t size, u8 value)
> +{
> +       void *shadow_start, *shadow_end;
> +
> +       /* Perform shadow offset calculation based on untagged address */
> +       address = reset_tag((void *)address);
> +
> +       shadow_start = kasan_mem_to_shadow(address);
> +       shadow_end = kasan_mem_to_shadow(address + size);
> +
> +       memset(shadow_start, value, shadow_end - shadow_start);
> +}
> +
>  void kasan_unpoison_shadow(const void *address, size_t size)
>  {
> +       /* KHWASAN only allows 16-byte granularity */
> +       size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
> +       kasan_poison_shadow(address, size, get_tag(address));
>  }
>
>  void check_memory_region(unsigned long addr, size_t size, bool write,
>                                 unsigned long ret_ip)
>  {
> +       u8 tag;
> +       u8 *shadow_first, *shadow_last, *shadow;
> +       void *untagged_addr;
> +
> +       tag = get_tag((void *)addr);
> +       untagged_addr = reset_tag((void *)addr);
> +       shadow_first = (u8 *)kasan_mem_to_shadow(untagged_addr);
> +       shadow_last = (u8 *)kasan_mem_to_shadow(untagged_addr + size - 1);
> +
> +       for (shadow = shadow_first; shadow <= shadow_last; shadow++) {
> +               if (*shadow != tag) {
> +                       /* Report invalid-access bug here */
> +                       return;
> +               }
> +       }
>  }
>
>  void kasan_free_pages(struct page *page, unsigned int order)
>  {
> +       if (likely(!PageHighMem(page)))
> +               kasan_poison_shadow(page_address(page),
> +                               PAGE_SIZE << order,
> +                               khwasan_random_tag());
>  }
>
>  void kasan_cache_create(struct kmem_cache *cache, size_t *size,
>                 slab_flags_t *flags)
>  {
> +       int orig_size = *size;
> +
> +       cache->kasan_info.alloc_meta_offset = *size;
> +       *size += sizeof(struct kasan_alloc_meta);
> +
> +       if (*size % KASAN_SHADOW_SCALE_SIZE != 0)
> +               *size = round_up(*size, KASAN_SHADOW_SCALE_SIZE);
> +
> +       if (*size > KMALLOC_MAX_SIZE) {
> +               *size = orig_size;
> +               return;
> +       }
> +
> +       cache->align = round_up(cache->align, KASAN_SHADOW_SCALE_SIZE);
> +
> +       *flags |= SLAB_KASAN;
>  }
>
>  void kasan_poison_slab(struct page *page)
>  {
> +       kasan_poison_shadow(page_address(page),
> +                       PAGE_SIZE << compound_order(page),
> +                       khwasan_random_tag());
>  }
>
>  void kasan_poison_object_data(struct kmem_cache *cache, void *object)
>  {
> +       kasan_poison_shadow(object,
> +                       round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE),
> +                       khwasan_random_tag());
>  }
>
>  void *kasan_slab_alloc(struct kmem_cache *cache, void *object, gfp_t flags)
>  {
> +       if (!READ_ONCE(khwasan_enabled))
> +               return object;
> +       object = kasan_kmalloc(cache, object, cache->object_size, flags);
> +       if (unlikely(cache->ctor)) {
> +               /*
> +                * Cache constructor might use the object's pointer
> +                * value to initialize some of its fields.
> +                */
> +               cache->ctor(object);
> +       }
>         return object;
>  }
>
> -bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
> +static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
> +                               unsigned long ip)
>  {
> +       u8 shadow_byte;
> +       u8 tag;
> +       unsigned long rounded_up_size;
> +       void *untagged_addr = reset_tag(object);
> +
> +       if (unlikely(nearest_obj(cache, virt_to_head_page(untagged_addr),
> +                       untagged_addr) != untagged_addr)) {
> +               /* Report invalid-free here */
> +               return true;
> +       }
> +
> +       /* RCU slabs could be legally used after free within the RCU period */
> +       if (unlikely(cache->flags & SLAB_TYPESAFE_BY_RCU))
> +               return false;
> +
> +       shadow_byte = READ_ONCE(*(u8 *)kasan_mem_to_shadow(untagged_addr));
> +       tag = get_tag(object);
> +       if (tag != shadow_byte) {
> +               /* Report invalid-free here */
> +               return true;
> +       }
> +
> +       rounded_up_size = round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE);
> +       kasan_poison_shadow(object, rounded_up_size, khwasan_random_tag());
> +
> +       if (unlikely(!(cache->flags & SLAB_KASAN)))
> +               return false;
> +
> +       set_track(&get_alloc_info(cache, object)->free_track, GFP_NOWAIT);
>         return false;
>  }
>
> +bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
> +{
> +       return __kasan_slab_free(cache, object, ip);
> +}
> +
>  void *kasan_kmalloc(struct kmem_cache *cache, const void *object,
>                         size_t size, gfp_t flags)
>  {
> -       return (void *)object;
> +       unsigned long redzone_start, redzone_end;
> +       u8 tag;
> +
> +       if (!READ_ONCE(khwasan_enabled))
> +               return (void *)object;
> +
> +       if (unlikely(object == NULL))
> +               return NULL;
> +
> +       redzone_start = round_up((unsigned long)(object + size),
> +                               KASAN_SHADOW_SCALE_SIZE);
> +       redzone_end = round_up((unsigned long)(object + cache->object_size),
> +                               KASAN_SHADOW_SCALE_SIZE);
> +
> +       tag = khwasan_random_tag();
> +       kasan_poison_shadow(object, redzone_start - (unsigned long)object, tag);
> +       kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
> +               khwasan_random_tag());
> +
> +       if (cache->flags & SLAB_KASAN)
> +               set_track(&get_alloc_info(cache, object)->alloc_track, flags);
> +
> +       return set_tag((void *)object, tag);
>  }
>  EXPORT_SYMBOL(kasan_kmalloc);
>
>  void *kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
>  {
> -       return (void *)ptr;
> +       unsigned long redzone_start, redzone_end;
> +       u8 tag;
> +       struct page *page;
> +
> +       if (!READ_ONCE(khwasan_enabled))
> +               return (void *)ptr;
> +
> +       if (unlikely(ptr == NULL))
> +               return NULL;
> +
> +       page = virt_to_page(ptr);
> +       redzone_start = round_up((unsigned long)(ptr + size),
> +                               KASAN_SHADOW_SCALE_SIZE);
> +       redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page));
> +
> +       tag = khwasan_random_tag();
> +       kasan_poison_shadow(ptr, redzone_start - (unsigned long)ptr, tag);
> +       kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
> +               khwasan_random_tag());
Am I understanding right that the object and the redzone may receive
identical tags here?
Does it make sense to generate the redzone tag from the object tag
(e.g. by adding 1 to it)?
> +       return set_tag((void *)ptr, tag);
>  }
>
>  void kasan_poison_kfree(void *ptr, unsigned long ip)
>  {
> +       struct page *page;
> +
> +       page = virt_to_head_page(ptr);
> +
> +       if (unlikely(!PageSlab(page))) {
> +               if (reset_tag(ptr) != page_address(page)) {
> +                       /* Report invalid-free here */
> +                       return;
> +               }
> +               kasan_poison_shadow(ptr, PAGE_SIZE << compound_order(page),
> +                                       khwasan_random_tag());
> +       } else {
> +               __kasan_slab_free(page->slab_cache, ptr, ip);
> +       }
>  }
>
>  void kasan_kfree_large(void *ptr, unsigned long ip)
>  {
> +       struct page *page = virt_to_page(ptr);
> +       struct page *head_page = virt_to_head_page(ptr);
> +
> +       if (reset_tag(ptr) != page_address(head_page)) {
> +               /* Report invalid-free here */
> +               return;
> +       }
> +
> +       kasan_poison_shadow(ptr, PAGE_SIZE << compound_order(page),
> +                       khwasan_random_tag());
>  }
>
>  #define DEFINE_HWASAN_LOAD_STORE(size)                                 \
>         void __hwasan_load##size##_noabort(unsigned long addr)          \
>         {                                                               \
> +               check_memory_region(addr, size, false, _RET_IP_);       \
>         }                                                               \
>         EXPORT_SYMBOL(__hwasan_load##size##_noabort);                   \
>         void __hwasan_store##size##_noabort(unsigned long addr)         \
>         {                                                               \
> +               check_memory_region(addr, size, true, _RET_IP_);        \
>         }                                                               \
>         EXPORT_SYMBOL(__hwasan_store##size##_noabort)
>
> @@ -152,15 +321,18 @@ DEFINE_HWASAN_LOAD_STORE(16);
>
>  void __hwasan_loadN_noabort(unsigned long addr, unsigned long size)
>  {
> +       check_memory_region(addr, size, false, _RET_IP_);
>  }
>  EXPORT_SYMBOL(__hwasan_loadN_noabort);
>
>  void __hwasan_storeN_noabort(unsigned long addr, unsigned long size)
>  {
> +       check_memory_region(addr, size, true, _RET_IP_);
>  }
>  EXPORT_SYMBOL(__hwasan_storeN_noabort);
>
>  void __hwasan_tag_memory(unsigned long addr, u8 tag, unsigned long size)
>  {
> +       kasan_poison_shadow((void *)addr, size, tag);
>  }
>  EXPORT_SYMBOL(__hwasan_tag_memory);
> --
> 2.16.2.395.g2e18187dfd-goog
>



-- 
Alexander Potapenko
Software Engineer

Google Germany GmbH
Erika-Mann-Straße, 33
80636 München

Geschäftsführer: Paul Manicle, Halimah DeLaine Prado
Registergericht und -nummer: Hamburg, HRB 86891
Sitz der Gesellschaft: Hamburg

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 14/14] khwasan: default the instrumentation mode to inline
  2018-03-13 14:44   ` Alexander Potapenko
@ 2018-03-13 16:49     ` Andrey Konovalov
  0 siblings, 0 replies; 65+ messages in thread
From: Andrey Konovalov @ 2018-03-13 16:49 UTC (permalink / raw)
  To: Alexander Potapenko
  Cc: Andrey Ryabinin, Dmitry Vyukov, Jonathan Corbet, Catalin Marinas,
	Will Deacon, Theodore Ts'o, Jan Kara, Christopher Li,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Masahiro Yamada, Michal Marek, Mark Rutland,
	Ard Biesheuvel, Yury Norov, Nick Desaulniers, Marc Zyngier,
	Bob Picco, Suzuki K Poulose, Kristina Martsenko, Punit Agrawal,
	Dave Martin, James Morse, Julien Thierry, Michael Weiser,
	Steve Capper, Ingo Molnar, Thomas Gleixner, Sandipan Das,
	Paul Lawrence, David Woodhouse, Kees Cook, Geert Uytterhoeven,
	Josh Poimboeuf, Arnd Bergmann, kasan-dev, linux-doc, LKML,
	Linux ARM, linux-ext4, linux-sparse,
	Linux Memory Management List, Linux Kbuild mailing list,
	Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Kees Cook,
	Jann Horn, Mark Brand

On Tue, Mar 13, 2018 at 3:44 PM, Alexander Potapenko <glider@google.com> wrote:
> On Fri, Mar 2, 2018 at 8:44 PM, Andrey Konovalov <andreyknvl@google.com> wrote:
>> There are two reasons to use outline instrumentation:
>> 1. Outline instrumentation reduces the size of the kernel text, and should
>>    be used where this size matters.
>> 2. Outline instrumentation is less invasive and can be used by KASAN
>>    developers for debugging, when it's not clear whether an issue is
>>    caused by KASAN itself or by something else.
>
> Don't you think this patch can be landed separately from the KHWASAN series?

Sure, I can mail it separately.

>
>> In all other cases inline instrumentation is preferable, since it's
>> faster.
>>
>> This patch changes the default instrumentation mode to inline.
>> ---
>>  lib/Kconfig.kasan | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
>> index ab34e7d7d3a7..8ea6ae26b4a3 100644
>> --- a/lib/Kconfig.kasan
>> +++ b/lib/Kconfig.kasan
>> @@ -70,7 +70,7 @@ config KASAN_EXTRA
>>  choice
>>         prompt "Instrumentation type"
>>         depends on KASAN
>> -       default KASAN_OUTLINE
>> +       default KASAN_INLINE
>>
>>  config KASAN_OUTLINE
>>         bool "Outline instrumentation"
>> --
>> 2.16.2.395.g2e18187dfd-goog
>>
> Reviewed-by: Alexander Potapenko <glider@google.com>
>
>
>
>
> --
> Alexander Potapenko
> Software Engineer
>
> Google Germany GmbH
> Erika-Mann-Straße, 33
> 80636 München
>
> Geschäftsführer: Paul Manicle, Halimah DeLaine Prado
> Registergericht und -nummer: Hamburg, HRB 86891
> Sitz der Gesellschaft: Hamburg

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 09/14] khwasan: add hooks implementation
  2018-03-13 15:05   ` Alexander Potapenko
@ 2018-03-13 17:00     ` Andrey Konovalov
  2018-03-15 16:52       ` Andrey Ryabinin
  0 siblings, 1 reply; 65+ messages in thread
From: Andrey Konovalov @ 2018-03-13 17:00 UTC (permalink / raw)
  To: Alexander Potapenko
  Cc: Andrey Ryabinin, Dmitry Vyukov, Jonathan Corbet, Catalin Marinas,
	Will Deacon, Theodore Ts'o, Jan Kara, Christopher Li,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Masahiro Yamada, Michal Marek, Mark Rutland,
	Ard Biesheuvel, Yury Norov, Nick Desaulniers, Marc Zyngier,
	Bob Picco, Suzuki K Poulose, Kristina Martsenko, Punit Agrawal,
	Dave Martin, James Morse, Julien Thierry, Michael Weiser,
	Steve Capper, Ingo Molnar, Thomas Gleixner, Sandipan Das,
	Paul Lawrence, David Woodhouse, Kees Cook, Geert Uytterhoeven,
	Josh Poimboeuf, Arnd Bergmann, kasan-dev, linux-doc, LKML,
	Linux ARM, linux-ext4, linux-sparse,
	Linux Memory Management List, Linux Kbuild mailing list,
	Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Kees Cook,
	Jann Horn, Mark Brand

On Tue, Mar 13, 2018 at 4:05 PM, 'Alexander Potapenko' via kasan-dev
<kasan-dev@googlegroups.com> wrote:
> On Fri, Mar 2, 2018 at 8:44 PM, Andrey Konovalov <andreyknvl@google.com> wrote:
>>  void *kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
>>  {
>> -       return (void *)ptr;
>> +       unsigned long redzone_start, redzone_end;
>> +       u8 tag;
>> +       struct page *page;
>> +
>> +       if (!READ_ONCE(khwasan_enabled))
>> +               return (void *)ptr;
>> +
>> +       if (unlikely(ptr == NULL))
>> +               return NULL;
>> +
>> +       page = virt_to_page(ptr);
>> +       redzone_start = round_up((unsigned long)(ptr + size),
>> +                               KASAN_SHADOW_SCALE_SIZE);
>> +       redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page));
>> +
>> +       tag = khwasan_random_tag();
>> +       kasan_poison_shadow(ptr, redzone_start - (unsigned long)ptr, tag);
>> +       kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
>> +               khwasan_random_tag());

> Am I understanding right that the object and the redzone may receive
> identical tags here?

Correct.

> Does it make sense to generate the redzone tag from the object tag
> (e.g. by adding 1 to it)?

Yes, I think so, will do!
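
For concreteness, a minimal sketch of that derivation (the helper name
is made up here; only the "+1" idea comes from this thread):

/* Tags are single bytes, so the increment wraps naturally at 0xff. */
static inline u8 khwasan_redzone_tag(u8 object_tag)
{
	return object_tag + 1;
}

kasan_kmalloc() would then poison the redzone range with
khwasan_redzone_tag(tag) instead of a second khwasan_random_tag() call.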

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 09/14] khwasan: add hooks implementation
  2018-03-13 17:00     ` Andrey Konovalov
@ 2018-03-15 16:52       ` Andrey Ryabinin
  2018-03-16 18:09         ` Andrey Konovalov
  0 siblings, 1 reply; 65+ messages in thread
From: Andrey Ryabinin @ 2018-03-15 16:52 UTC (permalink / raw)
  To: Andrey Konovalov, Alexander Potapenko
  Cc: Dmitry Vyukov, Jonathan Corbet, Catalin Marinas, Will Deacon,
	Theodore Ts'o, Jan Kara, Christopher Li, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Masahiro Yamada, Michal Marek, Mark Rutland, Ard Biesheuvel,
	Yury Norov, Nick Desaulniers, Marc Zyngier, Bob Picco,
	Suzuki K Poulose, Kristina Martsenko, Punit Agrawal, Dave Martin,
	James Morse, Julien Thierry, Michael Weiser, Steve Capper,
	Ingo Molnar, Thomas Gleixner, Sandipan Das, Paul Lawrence,
	David Woodhouse, Kees Cook, Geert Uytterhoeven, Josh Poimboeuf,
	Arnd Bergmann, kasan-dev, linux-doc, LKML, Linux ARM, linux-ext4,
	linux-sparse, Linux Memory Management List,
	Linux Kbuild mailing list, Kostya Serebryany, Evgeniy Stepanov,
	Lee Smith, Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan,
	Kees Cook, Jann Horn, Mark Brand



On 03/13/2018 08:00 PM, Andrey Konovalov wrote:
> On Tue, Mar 13, 2018 at 4:05 PM, 'Alexander Potapenko' via kasan-dev
> <kasan-dev@googlegroups.com> wrote:
>> On Fri, Mar 2, 2018 at 8:44 PM, Andrey Konovalov <andreyknvl@google.com> wrote:
>>>  void *kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
>>>  {
>>> -       return (void *)ptr;
>>> +       unsigned long redzone_start, redzone_end;
>>> +       u8 tag;
>>> +       struct page *page;
>>> +
>>> +       if (!READ_ONCE(khwasan_enabled))
>>> +               return (void *)ptr;
>>> +
>>> +       if (unlikely(ptr == NULL))
>>> +               return NULL;
>>> +
>>> +       page = virt_to_page(ptr);
>>> +       redzone_start = round_up((unsigned long)(ptr + size),
>>> +                               KASAN_SHADOW_SCALE_SIZE);
>>> +       redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page));
>>> +
>>> +       tag = khwasan_random_tag();
>>> +       kasan_poison_shadow(ptr, redzone_start - (unsigned long)ptr, tag);
>>> +       kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
>>> +               khwasan_random_tag());
> 
>> Am I understanding right that the object and the redzone may receive
>> identical tags here?
> 
> Correct.
> 
>> Does it make sense to generate the redzone tag from the object tag
>> (e.g. by addding 1 to it)?
> 
> Yes, I think so, will do!
> 

Wouldn't it be better to have some reserved tag value for invalid memory
(redzones/free), so that we catch accesses to such memory with 100%
probability?

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 09/14] khwasan: add hooks implementation
  2018-03-15 16:52       ` Andrey Ryabinin
@ 2018-03-16 18:09         ` Andrey Konovalov
  2018-03-16 18:16           ` Evgenii Stepanov
  0 siblings, 1 reply; 65+ messages in thread
From: Andrey Konovalov @ 2018-03-16 18:09 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Alexander Potapenko, Dmitry Vyukov, Jonathan Corbet,
	Catalin Marinas, Will Deacon, Theodore Ts'o, Jan Kara,
	Christopher Li, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, Masahiro Yamada, Michal Marek,
	Mark Rutland, Ard Biesheuvel, Yury Norov, Nick Desaulniers,
	Marc Zyngier, Bob Picco, Suzuki K Poulose, Kristina Martsenko,
	Punit Agrawal, Dave Martin, James Morse, Julien Thierry,
	Michael Weiser, Steve Capper, Ingo Molnar, Thomas Gleixner,
	Sandipan Das, Paul Lawrence, David Woodhouse, Kees Cook,
	Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann, kasan-dev,
	linux-doc, LKML, Linux ARM, linux-ext4, linux-sparse,
	Linux Memory Management List, Linux Kbuild mailing list,
	Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Kees Cook,
	Jann Horn, Mark Brand

On Thu, Mar 15, 2018 at 5:52 PM, Andrey Ryabinin
<aryabinin@virtuozzo.com> wrote:
> On 03/13/2018 08:00 PM, Andrey Konovalov wrote:
>> On Tue, Mar 13, 2018 at 4:05 PM, 'Alexander Potapenko' via kasan-dev
>> <kasan-dev@googlegroups.com> wrote:
>>> Does it make sense to generate the redzone tag from the object tag
>>> (e.g. by adding 1 to it)?
>>
>> Yes, I think so, will do!
>>
>
> Wouldn't it be better to have some reserved tag value for invalid memory
> (redzones/free), so that we catch accesses to such memory with 100%
> probability?

We could do that. It would reduce the chance of detecting a
use-after-free though, since we'd be using fewer distinct tag values
for the objects themselves. I don't have a strong opinion about which
one is better.

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 09/14] khwasan: add hooks implementation
  2018-03-16 18:09         ` Andrey Konovalov
@ 2018-03-16 18:16           ` Evgenii Stepanov
  2018-03-16 18:24             ` Andrey Konovalov
  0 siblings, 1 reply; 65+ messages in thread
From: Evgenii Stepanov @ 2018-03-16 18:16 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Mark Rutland, Ard Biesheuvel, Yury Norov,
	Nick Desaulniers, Marc Zyngier, Bob Picco, Suzuki K Poulose,
	Kristina Martsenko, Punit Agrawal, Dave Martin, James Morse,
	Julien Thierry, Michael Weiser, Steve Capper, Ingo Molnar,
	Thomas Gleixner, Sandipan Das, Paul Lawrence, David Woodhouse,
	Kees Cook, Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann,
	kasan-dev, linux-doc, LKML, Linux ARM, linux-ext4, linux-sparse,
	Linux Memory Management List, Linux Kbuild mailing list,
	Kostya Serebryany, Lee Smith, Ramana Radhakrishnan,
	Jacob Bramley, Ruben Ayrapetyan, Kees Cook, Jann Horn,
	Mark Brand

On Fri, Mar 16, 2018 at 11:09 AM, Andrey Konovalov
<andreyknvl@google.com> wrote:
> On Thu, Mar 15, 2018 at 5:52 PM, Andrey Ryabinin
> <aryabinin@virtuozzo.com> wrote:
>> On 03/13/2018 08:00 PM, Andrey Konovalov wrote:
>>> On Tue, Mar 13, 2018 at 4:05 PM, 'Alexander Potapenko' via kasan-dev
>>> <kasan-dev@googlegroups.com> wrote:
>>>> Does it make sense to generate the redzone tag from the object tag
>>>> (e.g. by adding 1 to it)?
>>>
>>> Yes, I think so, will do!
>>>
>>
>> Wouldn't it be better to have some reserved tag value for invalid memory
>> (redzones/free), so that we catch accesses to such memory with 100%
>> probability?
>
> We could do that. It would reduce the chance of detecting a
> use-after-free though, since we'd be using fewer distinct tag values
> for the objects themselves. I don't have a strong opinion about which
> one is better.

hwasan does not need redzones. As for use-after-free, to catch it with
100% probability one would need infinite memory for the quarantine. It
is possible to guarantee 100% detection of linear buffer overflow by
giving live adjacent chunks distinct tags.

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 09/14] khwasan: add hooks implementation
  2018-03-16 18:16           ` Evgenii Stepanov
@ 2018-03-16 18:24             ` Andrey Konovalov
  2018-03-16 18:45               ` Evgenii Stepanov
  0 siblings, 1 reply; 65+ messages in thread
From: Andrey Konovalov @ 2018-03-16 18:24 UTC (permalink / raw)
  To: Evgenii Stepanov
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Mark Rutland, Ard Biesheuvel, Yury Norov,
	Nick Desaulniers, Marc Zyngier, Suzuki K Poulose,
	Kristina Martsenko, Punit Agrawal, Dave Martin, James Morse,
	Julien Thierry, Michael Weiser, Steve Capper, Ingo Molnar,
	Thomas Gleixner, Sandipan Das, Paul Lawrence, David Woodhouse,
	Kees Cook, Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann,
	kasan-dev, linux-doc, LKML, Linux ARM, linux-ext4, linux-sparse,
	Linux Memory Management List, Linux Kbuild mailing list,
	Kostya Serebryany, Lee Smith, Ramana Radhakrishnan,
	Jacob Bramley, Ruben Ayrapetyan, Kees Cook, Jann Horn,
	Mark Brand

On Fri, Mar 16, 2018 at 7:16 PM, Evgenii Stepanov <eugenis@google.com> wrote:
> On Fri, Mar 16, 2018 at 11:09 AM, Andrey Konovalov
> <andreyknvl@google.com> wrote:
>> On Thu, Mar 15, 2018 at 5:52 PM, Andrey Ryabinin
>>> Wouldn't it be better to have some reserved tag value for invalid memory
>>> (redzones/free), so that we catch accesses to such memory with 100%
>>> probability?
>>
>> We could do that. It would reduce the chance of detecting a
>> use-after-free though, since we'd be using fewer distinct tag values
>> for the objects themselves. I don't have a strong opinion about which
>> one is better.

Note: I misread the message and didn't notice the "/free" part there,
so I was considering marking only redzones with a reserved tag value.

>
> hwasan does not need redzones.

Right, by redzones in this case I meant the metadata that is stored
right after the object (which includes alloc and free stack handles
and perhaps some other allocator stuff).
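
For context, the per-object metadata in question looks roughly like
this in the existing KASAN code (the exact layout may differ between
kernel versions):

struct kasan_track {
	u32 pid;
	depot_stack_handle_t stack;
};

struct kasan_alloc_meta {
	struct kasan_track alloc_track;
	struct kasan_track free_track;
};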

> As for use-after-free, to catch it with
> 100% probability one would need infinite memory for the quarantine. It
> is possible to guarantee 100% detection of linear buffer overflow by
> giving live adjacent chunks distinct tags.

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 09/14] khwasan: add hooks implementation
  2018-03-16 18:24             ` Andrey Konovalov
@ 2018-03-16 18:45               ` Evgenii Stepanov
  2018-03-16 19:06                 ` Andrey Konovalov
  0 siblings, 1 reply; 65+ messages in thread
From: Evgenii Stepanov @ 2018-03-16 18:45 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Mark Rutland, Ard Biesheuvel, Yury Norov,
	Nick Desaulniers, Marc Zyngier, Suzuki K Poulose,
	Kristina Martsenko, Punit Agrawal, Dave Martin, James Morse,
	Julien Thierry, Michael Weiser, Steve Capper, Ingo Molnar,
	Thomas Gleixner, Sandipan Das, Paul Lawrence, David Woodhouse,
	Kees Cook, Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann,
	kasan-dev, linux-doc, LKML, Linux ARM, linux-ext4, linux-sparse,
	Linux Memory Management List, Linux Kbuild mailing list,
	Kostya Serebryany, Lee Smith, Ramana Radhakrishnan,
	Jacob Bramley, Ruben Ayrapetyan, Kees Cook, Jann Horn,
	Mark Brand

On Fri, Mar 16, 2018 at 11:24 AM, Andrey Konovalov
<andreyknvl@google.com> wrote:
> On Fri, Mar 16, 2018 at 7:16 PM, Evgenii Stepanov <eugenis@google.com> wrote:
>> On Fri, Mar 16, 2018 at 11:09 AM, Andrey Konovalov
>> <andreyknvl@google.com> wrote:
>>> On Thu, Mar 15, 2018 at 5:52 PM, Andrey Ryabinin
>>>> Wouldn't it be better to have some reserved tag value for invalid memory
>>>> (redzones/free), so that we catch accesses to such memory with 100%
>>>> probability?
>>>
>>> We could do that. It would reduce the chance of detecting a
>>> use-after-free though, since we'd be using fewer distinct tag values
>>> for the objects themselves. I don't have a strong opinion about which
>>> one is better.
>
> Note: I misread the message and didn't notice the "/free" part there,
> so I was considering marking only redzones with a reserved tag value.
>
>>
>> hwasan does not need redzones.
>
> Right, by redzones in this case I meant the metadata that is stored
> right after the object (which includes alloc and free stack handles
> and perhaps some other allocator stuff).

Oh, I did not realize we have free (as in beer, not as in
use-after-free) redzones between allocations. Yes, reserving a color
sounds like a good idea.

>
>> As for use-after-free, to catch it with
>> 100% probability one would need infinite memory for the quarantine. It
>> is possible to guarantee 100% detection of linear buffer overflow by
>> giving live adjacent chunks distinct tags.

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 09/14] khwasan: add hooks implementation
  2018-03-16 18:45               ` Evgenii Stepanov
@ 2018-03-16 19:06                 ` Andrey Konovalov
  2018-03-16 20:21                   ` Evgenii Stepanov
  0 siblings, 1 reply; 65+ messages in thread
From: Andrey Konovalov @ 2018-03-16 19:06 UTC (permalink / raw)
  To: Evgenii Stepanov
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Mark Rutland, Ard Biesheuvel, Yury Norov,
	Nick Desaulniers, Marc Zyngier, Suzuki K Poulose,
	Kristina Martsenko, Punit Agrawal, Dave Martin, James Morse,
	Julien Thierry, Michael Weiser, Steve Capper, Ingo Molnar,
	Thomas Gleixner, Sandipan Das, Paul Lawrence, David Woodhouse,
	Kees Cook, Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann,
	kasan-dev, linux-doc, LKML, Linux ARM, linux-ext4, linux-sparse,
	Linux Memory Management List, Linux Kbuild mailing list,
	Kostya Serebryany, Lee Smith, Ramana Radhakrishnan,
	Jacob Bramley, Ruben Ayrapetyan, Kees Cook, Jann Horn,
	Mark Brand

On Fri, Mar 16, 2018 at 7:45 PM, Evgenii Stepanov <eugenis@google.com> wrote:
> On Fri, Mar 16, 2018 at 11:24 AM, Andrey Konovalov
> <andreyknvl@google.com> wrote:
>> Right, by redzones in this case I meant the metadata that is stored
>> right after the object (which includes alloc and free stack handles
>> and perhaps some other allocator stuff).
>
> Oh, I did not realize we have free (as in beer, not as in
> use-after-free) redzones between allocations. Yes, reserving a color
> sounds
> like a good idea.

OK, I'll do that then.

>
>>
>>> As for use-after-free, to catch it with
>>> 100% probability one would need infinite memory for the quarantine.

As for the second part of Andrey's suggestion (as far as I understand
it): reserve a color for freed objects. Without quarantine, this
should give us a precise
use-after-free-but-without-someone-else-allocating-the-same-object
detection. What do you think about that?

>>> It
>>> is possible to guarantee 100% detection of linear buffer overflow by
>>> giving live adjacent chunks distinct tags.

I'll add that to the TODO list as well.
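
One illustrative shape for that TODO item (khwasan_random_tag() is from
this patchset; the helper itself is hypothetical):

/*
 * Retry until the tag differs from both live neighbours, so a linear
 * overflow into an adjacent chunk always mismatches.
 */
static u8 khwasan_tag_distinct_from(u8 left_tag, u8 right_tag)
{
	u8 tag;

	do {
		tag = khwasan_random_tag();
	} while (tag == left_tag || tag == right_tag);

	return tag;
}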

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 09/14] khwasan: add hooks implementation
  2018-03-16 19:06                 ` Andrey Konovalov
@ 2018-03-16 20:21                   ` Evgenii Stepanov
  0 siblings, 0 replies; 65+ messages in thread
From: Evgenii Stepanov @ 2018-03-16 20:21 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Mark Rutland, Ard Biesheuvel, Yury Norov,
	Nick Desaulniers, Marc Zyngier, Suzuki K Poulose,
	Kristina Martsenko, Punit Agrawal, Dave Martin, James Morse,
	Julien Thierry, Michael Weiser, Steve Capper, Ingo Molnar,
	Thomas Gleixner, Sandipan Das, Paul Lawrence, David Woodhouse,
	Kees Cook, Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann,
	kasan-dev, linux-doc, LKML, Linux ARM, linux-ext4, linux-sparse,
	Linux Memory Management List, Linux Kbuild mailing list,
	Kostya Serebryany, Lee Smith, Ramana Radhakrishnan,
	Jacob Bramley, Ruben Ayrapetyan, Kees Cook, Jann Horn,
	Mark Brand

On Fri, Mar 16, 2018 at 12:06 PM, Andrey Konovalov
<andreyknvl@google.com> wrote:
> On Fri, Mar 16, 2018 at 7:45 PM, Evgenii Stepanov <eugenis@google.com> wrote:
>> On Fri, Mar 16, 2018 at 11:24 AM, Andrey Konovalov
>> <andreyknvl@google.com> wrote:
>>> Right, by redzones in this case I meant the metadata that is stored
>>> right after the object (which includes alloc and free stack handles
>>> and perhaps some other allocator stuff).
>>
>> Oh, I did not realize we have free (as in beer, not as in
>> use-after-free) redzones between allocations. Yes, reserving a color
>> sounds
>> like a good idea.
>
> OK, I'll do that then.
>
>>
>>>
>>>> As for use-after-free, to catch it with
>>>> 100% probability one would need infinite memory for the quarantine.
>
> As for the second part of Andrey's suggestion (as far as I understand
> it): reserve a color for freed objects. Without quarantine, this
> should give us a precise
> use-after-free-but-without-someone-else-allocating-the-same-object
> detection. What do you think about that?

Still non-deterministic, but we can use the same color we reserved for
the redzones, why not.
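
A rough sketch of that scheme (the constant name and value are
assumptions, not from the patchset):

/* Reserved tag for memory that must never be accessed. */
#define KHWASAN_TAG_INVALID	0xfe

/* Random tag generation then has to skip the reserved value: */
static u8 khwasan_random_tag(void)
{
	u8 tag;

	do {
		tag = (u8)get_random_u32();
	} while (tag == KHWASAN_TAG_INVALID);

	return tag;
}

Redzones and freed objects would then be poisoned with
KHWASAN_TAG_INVALID instead of a fresh khwasan_random_tag().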

>
>>>> It
>>>> is possible to guarantee 100% detection of linear buffer overflow by
>>>> giving live adjacent chunks distinct tags.
>
> I'll add that to the TODO list as well.

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 09/14] khwasan: add hooks implementation
  2018-03-02 19:44 ` [RFC PATCH 09/14] khwasan: add hooks implementation Andrey Konovalov
  2018-03-05 14:44   ` Mark Rutland
  2018-03-13 15:05   ` Alexander Potapenko
@ 2018-03-20  0:44   ` Anthony Yznaga
  2018-03-20 13:43     ` Andrey Konovalov
  2 siblings, 1 reply; 65+ messages in thread
From: Anthony Yznaga @ 2018-03-20  0:44 UTC (permalink / raw)
  To: Andrey Konovalov, Andrey Ryabinin, Alexander Potapenko,
	Dmitry Vyukov, Jonathan Corbet, Catalin Marinas, Will Deacon,
	Theodore Ts'o, Jan Kara, Christopher Li, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Masahiro Yamada, Michal Marek, Mark Rutland, Ard Biesheuvel,
	Yury Norov, Nick Desaulniers, Marc Zyngier, Bob Picco,
	Suzuki K Poulose, Kristina Martsenko, Punit Agrawal, Dave Martin,
	James Morse, Julien Thierry, Michael Weiser, Steve Capper,
	Ingo Molnar, Thomas Gleixner, Sandipan Das, Paul Lawrence,
	David Woodhouse, Kees Cook, Geert Uytterhoeven, Josh Poimboeuf,
	Arnd Bergmann, kasan-dev, linux-doc, linux-kernel,
	linux-arm-kernel, linux-ext4, linux-sparse, linux-mm,
	linux-kbuild, Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Kees Cook,
	Jann Horn, Mark Brand

Hi Andrey,

On 3/2/18 11:44 AM, Andrey Konovalov wrote:
> void kasan_poison_kfree(void *ptr, unsigned long ip)
>  {
> +	struct page *page;
> +
> +	page = virt_to_head_page(ptr);

An untagged addr should be passed to virt_to_head_page(), no?

> +
> +	if (unlikely(!PageSlab(page))) {
> +		if (reset_tag(ptr) != page_address(page)) {
> +			/* Report invalid-free here */
> +			return;
> +		}
> +		kasan_poison_shadow(ptr, PAGE_SIZE << compound_order(page),
> +					khwasan_random_tag());
> +	} else {
> +		__kasan_slab_free(page->slab_cache, ptr, ip);
> +	}
>  }
>  
>  void kasan_kfree_large(void *ptr, unsigned long ip)
>  {
> +	struct page *page = virt_to_page(ptr);
> +	struct page *head_page = virt_to_head_page(ptr);

Same as above and for virt_to_page() as well.

Anthony


> +
> +	if (reset_tag(ptr) != page_address(head_page)) {
> +		/* Report invalid-free here */
> +		return;
> +	}
> +
> +	kasan_poison_shadow(ptr, PAGE_SIZE << compound_order(page),
> +			khwasan_random_tag());
>  }

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 09/14] khwasan: add hooks implementation
  2018-03-20  0:44   ` Anthony Yznaga
@ 2018-03-20 13:43     ` Andrey Konovalov
  0 siblings, 0 replies; 65+ messages in thread
From: Andrey Konovalov @ 2018-03-20 13:43 UTC (permalink / raw)
  To: Anthony Yznaga
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Mark Rutland, Ard Biesheuvel, Yury Norov,
	Nick Desaulniers, Marc Zyngier, Suzuki K Poulose,
	Kristina Martsenko, Punit Agrawal, Dave Martin, James Morse,
	Julien Thierry, Michael Weiser, Steve Capper, Ingo Molnar,
	Thomas Gleixner, Sandipan Das, Paul Lawrence, David Woodhouse,
	Kees Cook, Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann,
	kasan-dev, linux-doc, LKML, Linux ARM, linux-ext4, linux-sparse,
	Linux Memory Management List, Linux Kbuild mailing list,
	Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Kees Cook,
	Jann Horn, Mark Brand

On Tue, Mar 20, 2018 at 1:44 AM, Anthony Yznaga
<anthony.yznaga@oracle.com> wrote:
> Hi Andrey,
>
> On 3/2/18 11:44 AM, Andrey Konovalov wrote:
>> void kasan_poison_kfree(void *ptr, unsigned long ip)
>>  {
>> +     struct page *page;
>> +
>> +     page = virt_to_head_page(ptr)
>
> An untagged addr should be passed to virt_to_head_page(), no?

Hi!

virt_to_head_page() relies on virt_to_phys(), and the latter will be
fixed to accept tagged pointers in the next patchset.
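
For illustration, one plausible shape of that fix (an assumption, not
the actual next-patchset code; arm64 kernel addresses normally carry
0xff in their top byte):

/* Force the top byte back to all-ones before the linear-map math. */
#define __untag_kernel_addr(addr)	((u64)(addr) | (0xffUL << 56))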

Thanks!

>
>> +
>> +     if (unlikely(!PageSlab(page))) {
>> +             if (reset_tag(ptr) != page_address(page)) {
>> +                     /* Report invalid-free here */
>> +                     return;
>> +             }
>> +             kasan_poison_shadow(ptr, PAGE_SIZE << compound_order(page),
>> +                                     khwasan_random_tag());
>> +     } else {
>> +             __kasan_slab_free(page->slab_cache, ptr, ip);
>> +     }
>>  }
>>
>>  void kasan_kfree_large(void *ptr, unsigned long ip)
>>  {
>> +     struct page *page = virt_to_page(ptr);
>> +     struct page *head_page = virt_to_head_page(ptr);
>
> Same as above and for virt_to_page() as well.
>
> Anthony
>
>
>> +
>> +     if (reset_tag(ptr) != page_address(head_page)) {
>> +             /* Report invalid-free here */
>> +             return;
>> +     }
>> +
>> +     kasan_poison_shadow(ptr, PAGE_SIZE << compound_order(page),
>> +                     khwasan_random_tag());
>>  }

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 11/14] khwasan: add brk handler for inline instrumentation
  2018-03-05 14:51   ` Mark Rutland
@ 2018-03-23 15:59     ` Andrey Konovalov
  2018-03-24  3:42       ` Ard Biesheuvel
  2018-03-26  9:36       ` Mark Rutland
  0 siblings, 2 replies; 65+ messages in thread
From: Andrey Konovalov @ 2018-03-23 15:59 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Ard Biesheuvel, Yury Norov, Nick Desaulniers,
	Marc Zyngier, Suzuki K Poulose, Kristina Martsenko,
	Punit Agrawal, Dave Martin, James Morse, Julien Thierry,
	Michael Weiser, Steve Capper, Ingo Molnar, Thomas Gleixner,
	Sandipan Das, Paul Lawrence, David Woodhouse, Kees Cook,
	Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann, kasan-dev,
	linux-doc, LKML, Linux ARM, linux-ext4, linux-sparse,
	Linux Memory Management List, Linux Kbuild mailing list,
	Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Kees Cook,
	Jann Horn, Mark Brand

On Mon, Mar 5, 2018 at 3:51 PM, Mark Rutland <mark.rutland@arm.com> wrote:
> On Fri, Mar 02, 2018 at 08:44:30PM +0100, Andrey Konovalov wrote:
>> KHWASAN inline instrumentation mode (which embeds checks of shadow memory
>> into the generated code, instead of inserting a callback) generates a brk
>> instruction when a tag mismatch is detected.
>
> The compiler generates the BRK?

Correct.

>
> I'm a little worried about the ABI implications of that. So far we've
> assumed that for the kernel side, the BRK space is completely under our
> control.
>
> How much does this save, compared to having a callback?

Around 7% of code size is what I see (you can have the same single
instruction for a call, but it may cost some register allocation
troubles).
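
For reference, a rough C equivalent of the check that inline mode
embeds at each access (illustrative only; the real check is
compiler-generated code that ends in a brk on mismatch):

u8 ptr_tag = (u64)addr >> 56;	/* tag travels in the top byte */
u8 mem_tag = *(u8 *)kasan_mem_to_shadow(reset_tag((void *)addr));

if (unlikely(ptr_tag != mem_tag))
	__builtin_trap();	/* emitted as brk with an encoded immediate */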

>
>> This commit adds a KHWASAN brk handler that decodes the immediate value
>> passed to the brk instruction (to extract information about the memory
>> access that triggered the mismatch), reads the register values (x0 contains
>> the guilty address) and reports the bug.
>> ---
>>  arch/arm64/include/asm/brk-imm.h |  2 ++
>>  arch/arm64/kernel/traps.c        | 40 ++++++++++++++++++++++++++++++++
>>  2 files changed, 42 insertions(+)
>>
>> diff --git a/arch/arm64/include/asm/brk-imm.h b/arch/arm64/include/asm/brk-imm.h
>> index ed693c5bcec0..e4a7013321dc 100644
>> --- a/arch/arm64/include/asm/brk-imm.h
>> +++ b/arch/arm64/include/asm/brk-imm.h
>> @@ -16,10 +16,12 @@
>>   * 0x400: for dynamic BRK instruction
>>   * 0x401: for compile time BRK instruction
>>   * 0x800: kernel-mode BUG() and WARN() traps
>> + * 0x9xx: KHWASAN trap (allowed values 0x900 - 0x9ff)
>>   */
>>  #define FAULT_BRK_IMM                        0x100
>>  #define KGDB_DYN_DBG_BRK_IMM         0x400
>>  #define KGDB_COMPILED_DBG_BRK_IMM    0x401
>>  #define BUG_BRK_IMM                  0x800
>> +#define KHWASAN_BRK_IMM                      0x900
>>
>>  #endif
>> diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
>> index eb2d15147e8d..5df8cdf5af13 100644
>> --- a/arch/arm64/kernel/traps.c
>> +++ b/arch/arm64/kernel/traps.c
>> @@ -35,6 +35,7 @@
>>  #include <linux/sizes.h>
>>  #include <linux/syscalls.h>
>>  #include <linux/mm_types.h>
>> +#include <linux/kasan.h>
>>
>>  #include <asm/atomic.h>
>>  #include <asm/bug.h>
>> @@ -771,6 +772,38 @@ static struct break_hook bug_break_hook = {
>>       .fn = bug_handler,
>>  };
>>
>> +#ifdef CONFIG_KASAN_TAGS
>> +static int khwasan_handler(struct pt_regs *regs, unsigned int esr)
>> +{
>> +     bool recover = esr & 0x20;
>> +     bool write = esr & 0x10;
>
> Can you please add mnemonics for these, e.g.
>
> #define KHWASAN_ESR_RECOVER             0x20
> #define KHWASAN_ESR_WRITE               0x10
>
>> +     size_t size = 1 << (esr & 0xf);
>
> #define KHWASAN_ESR_SIZE_MASK           0xf
> #define KHWASAN_ESR_SIZE(esr)   (1 << ((esr) & KHWASAN_ESR_SIZE_MASK))

Will do!
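
A sketch of the handler with those mnemonics applied (same logic as in
the patch, just with named constants):

#define KHWASAN_ESR_RECOVER	0x20
#define KHWASAN_ESR_WRITE	0x10
#define KHWASAN_ESR_SIZE_MASK	0x0f
#define KHWASAN_ESR_SIZE(esr)	(1 << ((esr) & KHWASAN_ESR_SIZE_MASK))

static int khwasan_handler(struct pt_regs *regs, unsigned int esr)
{
	bool recover = esr & KHWASAN_ESR_RECOVER;
	bool write = esr & KHWASAN_ESR_WRITE;
	size_t size = KHWASAN_ESR_SIZE(esr);
	u64 addr = regs->regs[0];
	u64 pc = regs->pc;

	/* ... rest of the handler as in the patch ... */
}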

>
>> +     u64 addr = regs->regs[0];
>> +     u64 pc = regs->pc;
>> +
>> +     if (user_mode(regs))
>> +             return DBG_HOOK_ERROR;
>> +
>> +     khwasan_report(addr, size, write, pc);
>> +
>> +     if (!recover)
>> +             die("Oops - KHWASAN", regs, 0);
>
>> Could you elaborate on what "recover" means, and why it's up to the
> compiler to decide if the kernel should die()?

The instrumentation allows controlling whether we can proceed after a
tag mismatch is detected. This is done by passing the -recover flag to
the compiler. Disabling recovery allows the compiler to generate more
compact code.

Unfortunately disabling recovery doesn't work for the kernel right
now. KHWASAN reporting is disabled in some contexts (for example when
the allocator accesses slab object metadata; same is true for KASAN;
this is controlled by current->kasan_depth). All these accesses are
detected by the tool, even though the reports for them are not
printed.

This is something that might be fixed at some point in the future, so
I think it makes sense to leave this check as is.

I'll add a comment with explanations though.
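
For reference, the suppression mechanism is roughly this (as in the
existing KASAN code):

static inline void kasan_disable_current(void)
{
	current->kasan_depth++;
}

static inline void kasan_enable_current(void)
{
	current->kasan_depth--;
}

/* ... and the report path bails out early: */
if (current->kasan_depth)
	return;	/* the access is still detected, but not reported */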

>
>> +
>> +     /* If thread survives, skip over the BUG instruction and continue: */
>> +     arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE);
>
> This is for fast-forwarding user instruction streams, and isn't correct
> to call for kernel faults (as it'll mess up the userspace single step
> logic).

I saw the BUG handler using this (which also inserts a brk), so I used
it as well. What should I do instead to jump over the faulting brk
instruction?

Thanks!

>
> Thanks,
> Mark.

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 11/14] khwasan: add brk handler for inline instrumentation
  2018-03-23 15:59     ` Andrey Konovalov
@ 2018-03-24  3:42       ` Ard Biesheuvel
  2018-03-26  9:36       ` Mark Rutland
  1 sibling, 0 replies; 65+ messages in thread
From: Ard Biesheuvel @ 2018-03-24  3:42 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Mark Rutland, Andrey Ryabinin, Alexander Potapenko,
	Dmitry Vyukov, Jonathan Corbet, Catalin Marinas, Will Deacon,
	Theodore Ts'o, Jan Kara, Christopher Li, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Masahiro Yamada, Michal Marek, Yury Norov, Nick Desaulniers,
	Marc Zyngier, Suzuki K Poulose, Kristina Martsenko,
	Punit Agrawal, Dave Martin, James Morse, Julien Thierry,
	Michael Weiser, Steve Capper, Ingo Molnar, Thomas Gleixner,
	Sandipan Das, Paul Lawrence, David Woodhouse, Kees Cook,
	Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann, kasan-dev,
	Linux Doc Mailing List, LKML, Linux ARM, linux-ext4,
	Linux-Sparse, Linux Memory Management List,
	Linux Kbuild mailing list, Kostya Serebryany, Evgeniy Stepanov,
	Lee Smith, Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan,
	Kees Cook, Jann Horn, Mark Brand

On 23 March 2018 at 15:59, Andrey Konovalov <andreyknvl@google.com> wrote:
> On Mon, Mar 5, 2018 at 3:51 PM, Mark Rutland <mark.rutland@arm.com> wrote:
>> On Fri, Mar 02, 2018 at 08:44:30PM +0100, Andrey Konovalov wrote:
>>> KHWASAN inline instrumentation mode (which embeds checks of shadow memory
>>> into the generated code, instead of inserting a callback) generates a brk
>>> instruction when a tag mismatch is detected.
>>
>> The compiler generates the BRK?
>
> Correct.
>
>>
>> I'm a little worried about the ABI implications of that. So far we've
>> assumed that for the kernel side, the BRK space is completely under our
>> control.
>>

GCC already generates traps (translating to BRKs in the arm64 world)
for other things like integer divide by zero and NULL dereferences.
(Arnd may know more, I know he has looked into this in the past.) So
we should probably implement a BRK handler for compiler-generated
traps and reserve it in the brk space, given that this behavior is not
specific to khwasan.

>> How much does this save, compared to having a callback?
>
> Around 7% of code size is what I see (you can have the same single
> instruction for a call, but it may cost some register allocation
> troubles).
>
>>
>>> This commit adds a KHWASAN brk handler that decodes the immediate value
>>> passed to the brk instruction (to extract information about the memory
>>> access that triggered the mismatch), reads the register values (x0 contains
>>> the guilty address) and reports the bug.
>>> ---
>>>  arch/arm64/include/asm/brk-imm.h |  2 ++
>>>  arch/arm64/kernel/traps.c        | 40 ++++++++++++++++++++++++++++++++
>>>  2 files changed, 42 insertions(+)
>>>
>>> diff --git a/arch/arm64/include/asm/brk-imm.h b/arch/arm64/include/asm/brk-imm.h
>>> index ed693c5bcec0..e4a7013321dc 100644
>>> --- a/arch/arm64/include/asm/brk-imm.h
>>> +++ b/arch/arm64/include/asm/brk-imm.h
>>> @@ -16,10 +16,12 @@
>>>   * 0x400: for dynamic BRK instruction
>>>   * 0x401: for compile time BRK instruction
>>>   * 0x800: kernel-mode BUG() and WARN() traps
>>> + * 0x9xx: KHWASAN trap (allowed values 0x900 - 0x9ff)
>>>   */
>>>  #define FAULT_BRK_IMM                        0x100
>>>  #define KGDB_DYN_DBG_BRK_IMM         0x400
>>>  #define KGDB_COMPILED_DBG_BRK_IMM    0x401
>>>  #define BUG_BRK_IMM                  0x800
>>> +#define KHWASAN_BRK_IMM                      0x900
>>>
>>>  #endif
>>> diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
>>> index eb2d15147e8d..5df8cdf5af13 100644
>>> --- a/arch/arm64/kernel/traps.c
>>> +++ b/arch/arm64/kernel/traps.c
>>> @@ -35,6 +35,7 @@
>>>  #include <linux/sizes.h>
>>>  #include <linux/syscalls.h>
>>>  #include <linux/mm_types.h>
>>> +#include <linux/kasan.h>
>>>
>>>  #include <asm/atomic.h>
>>>  #include <asm/bug.h>
>>> @@ -771,6 +772,38 @@ static struct break_hook bug_break_hook = {
>>>       .fn = bug_handler,
>>>  };
>>>
>>> +#ifdef CONFIG_KASAN_TAGS
>>> +static int khwasan_handler(struct pt_regs *regs, unsigned int esr)
>>> +{
>>> +     bool recover = esr & 0x20;
>>> +     bool write = esr & 0x10;
>>
>> Can you please add mnemonics for these, e.g.
>>
>> #define KHWASAN_ESR_RECOVER             0x20
>> #define KHWASAN_ESR_WRITE               0x10
>>
>>> +     size_t size = 1 << (esr & 0xf);
>>
>> #define KHWASAN_ESR_SIZE_MASK           0xf
>> #define KHWASAN_ESR_SIZE(esr)   (1 << ((esr) & KHWASAN_ESR_SIZE_MASK))
>
> Will do!
>
>>
>>> +     u64 addr = regs->regs[0];
>>> +     u64 pc = regs->pc;
>>> +
>>> +     if (user_mode(regs))
>>> +             return DBG_HOOK_ERROR;
>>> +
>>> +     khwasan_report(addr, size, write, pc);
>>> +
>>> +     if (!recover)
>>> +             die("Oops - KHWASAN", regs, 0);
>>
>> Could you elaborate on what "recover" means, and why it's up to the
>> compiler to decide if the kernel should die()?
>
> The instrumentation allows controlling whether we can proceed after a
> tag mismatch is detected. This is done by passing the -recover flag to
> the compiler. Disabling recovery allows the compiler to generate more
> compact code.
>
> Unfortunately disabling recovery doesn't work for the kernel right
> now. KHWASAN reporting is disabled in some contexts (for example when
> the allocator accesses slab object metadata; same is true for KASAN;
> this is controlled by current->kasan_depth). All these accesses are
> detected by the tool, even though the reports for them are not
> printed.
>
> This is something that might be fixed at some point in the future, so
> I think it makes sense to leave this check as is.
>
> I'll add a comment with explanations though.
>
>>
>>> +
>>> +     /* If thread survives, skip over the BUG instruction and continue: */
>>> +     arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE);
>>
>> This is for fast-forwarding user instruction streams, and isn't correct
>> to call for kernel faults (as it'll mess up the userspace single step
>> logic).
>
> I saw the BUG handler using this (which also inserts a brk), so I used
> it as well.
> instruction?
>
> Thanks!
>
>>
>> Thanks,
>> Mark.

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 11/14] khwasan: add brk handler for inline instrumentation
  2018-03-23 15:59     ` Andrey Konovalov
  2018-03-24  3:42       ` Ard Biesheuvel
@ 2018-03-26  9:36       ` Mark Rutland
  2018-03-27 13:03         ` Andrey Konovalov
  1 sibling, 1 reply; 65+ messages in thread
From: Mark Rutland @ 2018-03-26  9:36 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Ard Biesheuvel, Yury Norov, Nick Desaulniers,
	Marc Zyngier, Suzuki K Poulose, Kristina Martsenko,
	Punit Agrawal, Dave Martin, James Morse, Julien Thierry,
	Michael Weiser, Steve Capper, Ingo Molnar, Thomas Gleixner,
	Sandipan Das, Paul Lawrence, David Woodhouse, Kees Cook,
	Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann, kasan-dev,
	linux-doc, LKML, Linux ARM, linux-ext4, linux-sparse,
	Linux Memory Management List, Linux Kbuild mailing list,
	Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Kees Cook,
	Jann Horn, Mark Brand

On Fri, Mar 23, 2018 at 04:59:36PM +0100, Andrey Konovalov wrote:
> On Mon, Mar 5, 2018 at 3:51 PM, Mark Rutland <mark.rutland@arm.com> wrote:
> > On Fri, Mar 02, 2018 at 08:44:30PM +0100, Andrey Konovalov wrote:
> >> +static int khwasan_handler(struct pt_regs *regs, unsigned int esr)
> >> +{

> >> +     /* If thread survives, skip over the BUG instruction and continue: */
> >> +     arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE);
> >
> > This is for fast-forwarding user instruction streams, and isn't correct
> > to call for kernel faults (as it'll mess up the userspace single step
> > logic).
> 
> > I saw the BUG handler using this (which also inserts a brk), so I used
> > it as well.

Ah; I think that's broken today.

> What should I do instead to jump over the faulting brk instruction?

I don't think we have a way to do this properly today.

The simplest fix would be to split arm64_skip_faulting_instruction()
into separate functions for user/kernel, something like the below.

It would be nice to drop _user_ in the name of the userspace-specific
helper, though.

Thanks
Mark.

---->8----
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index eb2d15147e8d..101e3d4ed6c8 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -235,9 +235,14 @@ void arm64_notify_die(const char *str, struct pt_regs *regs,
        }
 }
 
-void arm64_skip_faulting_instruction(struct pt_regs *regs, unsigned long size)
+void __arm64_skip_faulting_instruction(struct pt_regs *regs, unsigned long size)
 {
        regs->pc += size;
+}
+
+void arm64_skip_faulting_instruction(struct pt_regs *regs, unsigned long size)
+{
+       __arm64_skip_faulting_instruction(regs, size);
 
        /*
         * If we were single stepping, we want to get the step exception after
@@ -761,7 +766,7 @@ static int bug_handler(struct pt_regs *regs, unsigned int esr)
        }
 
        /* If thread survives, skip over the BUG instruction and continue: */
-       arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE);
+       __arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE);
        return DBG_HOOK_HANDLED;
 }
 

^ permalink raw reply related	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 11/14] khwasan: add brk handler for inline instrumentation
  2018-03-26  9:36       ` Mark Rutland
@ 2018-03-27 13:03         ` Andrey Konovalov
  0 siblings, 0 replies; 65+ messages in thread
From: Andrey Konovalov @ 2018-03-27 13:03 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Jonathan Corbet, Catalin Marinas, Will Deacon, Theodore Ts'o,
	Jan Kara, Christopher Li, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Masahiro Yamada,
	Michal Marek, Ard Biesheuvel, Yury Norov, Nick Desaulniers,
	Marc Zyngier, Suzuki K Poulose, Kristina Martsenko,
	Punit Agrawal, Dave Martin, James Morse, Julien Thierry,
	Michael Weiser, Steve Capper, Ingo Molnar, Thomas Gleixner,
	Sandipan Das, Paul Lawrence, David Woodhouse, Kees Cook,
	Geert Uytterhoeven, Josh Poimboeuf, Arnd Bergmann, kasan-dev,
	linux-doc, LKML, Linux ARM, linux-ext4, linux-sparse,
	Linux Memory Management List, Linux Kbuild mailing list,
	Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
	Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Kees Cook,
	Jann Horn, Mark Brand

On Mon, Mar 26, 2018 at 11:36 AM, Mark Rutland <mark.rutland@arm.com> wrote:
> On Fri, Mar 23, 2018 at 04:59:36PM +0100, Andrey Konovalov wrote:
>> I saw the BUG handler using this (which also inserts a brk), so I used
>> it as well.
>
> Ah; I think that's broken today.
>
>> What should I do instead to jump over the faulting brk instruction?
>
> I don't think we have a way to do this properly today.
>
> The simplest fix would be to split arm64_skip_faulting_instruction()
> into separate functions for user/kernel, something like the below.

OK, will do that!

>
> It would be nice to drop _user_ in the name of the userspace-specific
> helper, though.

I'm not familiar with the code, but having "user" in a
userspace-specific function name sounds logical :) I don't think I'll
include this change, and it probably needs to be done in a
separate patch/patchset anyway.

>
> Thanks
> Mark.
>
> ---->8----
> diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
> index eb2d15147e8d..101e3d4ed6c8 100644
> --- a/arch/arm64/kernel/traps.c
> +++ b/arch/arm64/kernel/traps.c
> @@ -235,9 +235,14 @@ void arm64_notify_die(const char *str, struct pt_regs *regs,
>         }
>  }
>
> -void arm64_skip_faulting_instruction(struct pt_regs *regs, unsigned long size)
> +void __arm64_skip_faulting_instruction(struct pt_regs *regs, unsigned long size)
>  {
>         regs->pc += size;
> +}
> +
> +void arm64_skip_faulting_instruction(struct pt_regs *regs, unsigned long size)
> +{
> +       __arm64_skip_faulting_instruction(regs, size);
>
>         /*
>          * If we were single stepping, we want to get the step exception after
> @@ -761,7 +766,7 @@ static int bug_handler(struct pt_regs *regs, unsigned int esr)
>         }
>
>         /* If thread survives, skip over the BUG instruction and continue: */
> -       arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE);
> +       __arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE);
>         return DBG_HOOK_HANDLED;
>  }
>
>

^ permalink raw reply	[flat|nested] 65+ messages in thread

end of thread, other threads:[~2018-03-27 13:03 UTC | newest]

Thread overview: 65+ messages
2018-03-02 19:44 [RFC PATCH 00/14] khwasan: kernel hardware assisted address sanitizer Andrey Konovalov
2018-03-02 19:44 ` [RFC PATCH 01/14] khwasan: change kasan hooks signatures Andrey Konovalov
2018-03-02 19:44 ` [RFC PATCH 02/14] khwasan: move common kasan and khwasan code to common.c Andrey Konovalov
2018-03-02 19:44 ` [RFC PATCH 03/14] khwasan: add CONFIG_KASAN_CLASSIC and CONFIG_KASAN_TAGS Andrey Konovalov
2018-03-02 19:44 ` [RFC PATCH 04/14] khwasan: adjust shadow size for CONFIG_KASAN_TAGS Andrey Konovalov
2018-03-02 19:44 ` [RFC PATCH 05/14] khwasan: initialize shadow to 0xff Andrey Konovalov
2018-03-02 21:55   ` Evgenii Stepanov
2018-03-02 19:44 ` [RFC PATCH 06/14] khwasan: enable top byte ignore for the kernel Andrey Konovalov
2018-03-05 14:29   ` Mark Rutland
2018-03-09 18:15     ` Andrey Konovalov
2018-03-05 14:36   ` Mark Rutland
2018-03-06 14:24     ` Marc Zyngier
2018-03-09 18:21       ` Andrey Konovalov
2018-03-09 18:32         ` Marc Zyngier
2018-03-09 18:42           ` Andrey Konovalov
2018-03-09 19:06             ` Marc Zyngier
2018-03-09 19:16               ` Mark Rutland
2018-03-09 19:14             ` Mark Rutland
2018-03-09 18:17     ` Andrey Konovalov
2018-03-09 18:59       ` Mark Rutland
2018-03-02 19:44 ` [RFC PATCH 07/14] khwasan: add tag related helper functions Andrey Konovalov
2018-03-05 14:32   ` Mark Rutland
2018-03-06 18:31     ` Andrey Konovalov
2018-03-07 18:16       ` Christopher Lameter
2018-03-08  9:09         ` Dmitry Vyukov
2018-03-08 11:20       ` Mark Rutland
2018-03-02 19:44 ` [RFC PATCH 08/14] khwasan: perform untagged pointers comparison in krealloc Andrey Konovalov
2018-03-05 14:39   ` Mark Rutland
2018-03-06 18:33     ` Andrey Konovalov
2018-03-02 19:44 ` [RFC PATCH 09/14] khwasan: add hooks implementation Andrey Konovalov
2018-03-05 14:44   ` Mark Rutland
2018-03-06 18:38     ` Andrey Konovalov
2018-03-08 11:25       ` Mark Rutland
2018-03-09 18:10         ` Andrey Konovalov
2018-03-13 15:05   ` Alexander Potapenko
2018-03-13 17:00     ` Andrey Konovalov
2018-03-15 16:52       ` Andrey Ryabinin
2018-03-16 18:09         ` Andrey Konovalov
2018-03-16 18:16           ` Evgenii Stepanov
2018-03-16 18:24             ` Andrey Konovalov
2018-03-16 18:45               ` Evgenii Stepanov
2018-03-16 19:06                 ` Andrey Konovalov
2018-03-16 20:21                   ` Evgenii Stepanov
2018-03-20  0:44   ` Anthony Yznaga
2018-03-20 13:43     ` Andrey Konovalov
2018-03-02 19:44 ` [RFC PATCH 10/14] khwasan: add bug reporting routines Andrey Konovalov
2018-03-02 19:44 ` [RFC PATCH 11/14] khwasan: add brk handler for inline instrumentation Andrey Konovalov
2018-03-05 14:51   ` Mark Rutland
2018-03-23 15:59     ` Andrey Konovalov
2018-03-24  3:42       ` Ard Biesheuvel
2018-03-26  9:36       ` Mark Rutland
2018-03-27 13:03         ` Andrey Konovalov
2018-03-02 19:44 ` [RFC PATCH 12/14] khwasan, jbd2: add khwasan annotations Andrey Konovalov
2018-03-02 19:44 ` [RFC PATCH 13/14] khwasan: update kasan documentation Andrey Konovalov
2018-03-02 19:44 ` [RFC PATCH 14/14] khwasan: default the instrumentation mode to inline Andrey Konovalov
2018-03-05 14:54   ` Mark Rutland
2018-03-09 18:06     ` Andrey Konovalov
2018-03-09 19:18       ` Mark Rutland
2018-03-12 13:10         ` Andrey Konovalov
2018-03-13 14:44   ` Alexander Potapenko
2018-03-13 16:49     ` Andrey Konovalov
2018-03-04  9:16 ` [RFC PATCH 00/14] khwasan: kernel hardware assisted address sanitizer Geert Uytterhoeven
2018-03-04 11:44   ` Ingo Molnar
2018-03-04 15:49     ` Geert Uytterhoeven
2018-03-06 18:21       ` Andrey Konovalov
